Dataset schema (string columns with min–max value lengths):

- id: 10–10
- title: 7–231
- abstract: 3–2.43k
- authors: 5–21.5k
- published_date: 20–20
- link: 33–34
- markdown: 133–1.92M
2308.08339
Denoising Diffusion Probabilistic Model for Retinal Image Generation and Segmentation
Experts use retinal images and vessel trees to detect and diagnose various eye, blood circulation, and brain-related diseases. However, manual segmentation of retinal images is a time-consuming process that requires high expertise and is difficult due to privacy issues. Many methods have been proposed to segment images, but the need for large retinal image datasets limits the performance of these methods. Several methods synthesize training data using deep learning models based on Generative Adversarial Networks (GAN), which generate limited sample varieties. This paper proposes a novel Denoising Diffusion Probabilistic Model (DDPM), which outperforms GANs in image synthesis. We developed a Retinal Trees (ReTree) dataset consisting of retinal images and corresponding vessel trees, and a DDPM-based segmentation network trained with images from the ReTree dataset. We develop a two-stage DDPM: in the first stage, it generates vessel trees from random numbers belonging to a standard normal distribution; later, the model is guided to generate fundus images from the given vessel trees and a random distribution. The proposed dataset has been evaluated quantitatively and qualitatively. Quantitative evaluation metrics include the Frechet Inception Distance (FID) score, Jaccard similarity coefficient, Cohen's kappa, Matthew's Correlation Coefficient (MCC), precision, recall, F1-score, and accuracy. We trained the vessel segmentation model with synthetic data to validate our dataset's efficiency and tested it on authentic data. Our developed dataset and source code are available at https://github.com/AAleka/retree.
Alnur Alimanov, Md Baharul Islam
2023-08-16T13:01:13Z
http://arxiv.org/abs/2308.08339v1
# Denoising Diffusion Probabilistic Model for Retinal Image Generation and Segmentation

###### Abstract

Experts use retinal images and vessel trees to detect and diagnose various eye, blood circulation, and brain-related diseases. However, manual segmentation of retinal images is a time-consuming process that requires high expertise and is difficult due to privacy issues. Many methods have been proposed to segment images, but the need for large retinal image datasets limits the performance of these methods. Several methods synthesize training data using deep learning models based on Generative Adversarial Networks (GAN), which generate limited sample varieties. This paper proposes a novel Denoising Diffusion Probabilistic Model (DDPM), which outperforms GANs in image synthesis. We developed a Retinal Trees (ReTree) dataset consisting of retinal images and corresponding vessel trees, and a DDPM-based segmentation network trained with images from the ReTree dataset. We develop a two-stage DDPM: in the first stage, it generates vessel trees from random numbers belonging to a standard normal distribution; later, the model is guided to generate fundus images from the given vessel trees and a random distribution. The proposed dataset has been evaluated quantitatively and qualitatively. Quantitative evaluation metrics include the Frechet Inception Distance (FID) score, Jaccard similarity coefficient, Cohen's kappa, Matthew's Correlation Coefficient (MCC), precision, recall, F1-score, and accuracy. We trained the vessel segmentation model with synthetic data to validate our dataset's efficiency and tested it on authentic data. Our developed dataset and source code are available at [https://github.com/AAleka/retree](https://github.com/AAleka/retree).

Computational Photography, Retinal Images, Vessel Trees, Dataset, Denoising Diffusion Probabilistic Models, Segmentation.

## 1 Introduction

Retinal images play a vital role in diagnosing various diseases. By analyzing retinal images, ophthalmologists can detect numerous health conditions related to the eye, blood circulation, and the brain, e.g., retinal tear and detachment, diabetic and hypertensive retinopathy, papilledema, optic atrophy, microaneurysms, etc. These issues may accompany more severe health conditions: diabetic retinopathy is a consequence of diabetes, hypertensive retinopathy is caused by hypertension, and papilledema may indicate a brain tumor, meningitis, or stroke. These diseases can be detected by analyzing various characteristics of retinal vessel trees, such as length, width, curvature, and shape. Detecting these health conditions in advance greatly assists medical treatment. However, retinal image segmentation is a challenging task due to several factors. One of the most demanding difficulties is the shortage of fundus segmentation datasets required for efficient training of learning-based models. The models may often produce false positive results because of the limited contrast between vessel trees and the background. Three popular retinal vessel segmentation datasets are available in the literature: DRIVE, STARE, and CHASE DB1 [1, 2, 3]. These datasets have only 40, 20, and 14 image pairs, respectively. Many deep-learning-based retinal vessel segmentation models are reported in the literature; their authors focused on increasing the size and complexity of the models, leading to lower computational efficiency. For example, IterNet [4] is composed of one iteration of UNet [5] and several iterations of mini-UNet.
Several learning-based methods [6, 7, 8] have been proposed to increase the training data. Their authors utilized GANs to synthesize retinal images. However, GANs suffer from several challenging problems, including difficulty in training, as the models are susceptible to training parameters, inability to generate diverse data (mode collapse), vanishing gradients due to adversarial training, and non-convergence [9]. In this work, we intend to develop a computationally efficient way of using a Denoising Diffusion Probabilistic Model (DDPM) for retinal image generation and vessel segmentation. To do so, we propose a novel lightweight architecture with a new training technique leading to faster convergence than the original DDPM [10] method. Our method has two main sub-models: (a) two-stage retinal image and corresponding vessel tree generation, and (b) image super-resolution and segmentation. As shown in Fig. 1, the model initially learns to generate vessel trees from the normal distribution before translating vessel trees into corresponding fundus images using our guided DDPM. In the third and fourth steps, we increase the resolution of the generated images and train a biomedical segmentation UNet [5] to perform the retinal image segmentation task. Therefore, we propose a novel lightweight DDPM and a retinal segmentation dataset (ReTree). The proposed dataset has been evaluated using the Frechet Inception Distance [11] (FID). The ReTree dataset's efficiency is assessed by training UNet with real data and with our synthetic data and evaluating both with real testing images. We used the Jaccard similarity coefficient, Cohen's kappa, Matthew's Correlation Coefficient (MCC), F1-score, precision, recall, and accuracy to evaluate the segmentation results. The super-resolution model was tested using the Structural Similarity Index Measure [12] (SSIM), Peak Signal-to-Noise Ratio (PSNR), and Binary Cross Entropy (BCE) loss. The main contributions of our work are summarized below.

* Develop a new Retinal Trees (ReTree) dataset for more efficient retinal segmentation using a Denoising Diffusion Probabilistic Model (DDPM). Our dataset has 30,000 retinal images with corresponding vessel trees.
* Propose a novel lightweight architecture of DDPM that leads to low computational cost, reducing the time and memory complexity of DDPM. In addition, it allows increasing the resolution of generated images by factors of \(\times\)2 and \(\times\)4.
* Design a new Repetitive Training Technique (RTT) for faster DDPM convergence. Experimental results show that RTT improves the quantitative metrics while reducing the training time. In this training technique, the model gets re-trained if the loss in the current train step is higher than the global lowest train loss.
* Perform extensive quantitative and qualitative evaluation of the proposed model and dataset using three publicly available real retinal segmentation datasets.

The organization of this paper is as follows. In Section 2, retinal image generation, single image super-resolution, and segmentation methods available in the literature are briefly reviewed. A detailed description of the proposed methodologies and implementation is provided in Section 3. In Section 4, we share information about the utilized datasets, experimental setup, and model evaluation techniques. Section 5 demonstrates the results obtained by our method and the comparison with state-of-the-art methods. Finally, we conclude the proposed method with future work plans in Section 6.
## 2 Related Works

### _Image Generation_

In several methods [13, 14, 15, 16, 17, 18, 19, 20, 6, 7, 8, 9, 10], the authors developed deep learning models based on GANs [21] to generate retinal images with corresponding label-maps. RetiGAN [6] developed a model with an embedded Visual Geometry Group (VGG) network [22] to improve the generated retinal images and trained it using vessel trees acquired by segmenting fundus images with UNet [5]. Kim et al. [13] trained a GAN with a large amount of training data for retinal image synthesis. Andreini et al. [7] proposed a two-stage Progressively Growing GAN that generates semantic label-maps initially; later, it performs the image-to-image translation from the vessel tree to the retinal image. Patho-GAN [8] built a model that generates retinal images with symptoms related to diabetic retinopathy (DR) using vessel trees. Two pairs of images are fed into a discriminator model, and a pair of real and generated images are fed into the Perceptual network [22]. Finally, a pair of images is passed to the DR detector to calculate its severity. The authors of [14] utilized a combination of DCGAN and Wasserstein GAN to generate retinal images with diseases for their classification. In another work [15], the authors utilized a Deep Convolutional GAN (DCGAN) to generate retinal images for glaucoma assessment. Yu et al. [16] used a multiple-channels-multiple-landmarks (MCML) pre-processing pipeline that produces images from a combination of vessel trees, optic disc, and cup images. First, they segment original fundus images and feed the output to a generator model. The generated result and a segmentation output are passed to the discriminator model together with a real pair of retinal and segmented images. On the other hand, Appan et al. [17] utilized a GAN to generate retinal images with lesions of various severity levels. They also developed a computer-aided diagnosis system to detect hemorrhage, which was trained using the previously generated data. Costa et al. [19] utilized a GAN architecture to generate vessel trees and then fundus images from these vessel trees. First, they built two autoencoders using adversarial learning to generate vessel trees and their corresponding retinal images. In another work [20], they developed a retinal GAN to generate retinal images from related vessel trees.

In the available literature, the authors utilized only GAN-based methods for retinal image synthesis. However, GANs are proven to have issues such as vanishing gradients, mode collapse, and non-convergence. Therefore, this work focuses on solving these issues by utilizing a new DDPM-based technique that has achieved state-of-the-art performance in generating retinal images. The first original generative DDPM was proposed by Sohl-Dickstein et al. [10] and Ho et al. [23]. Song et al. [24] proposed denoising diffusion implicit models (DDIMs), a new class of diffusion models that uses non-Markovian diffusion processes. In another work [25], the authors proposed a DDPM with a novel Iterative Latent Variable Refinement (ILVR) that conditions the generative process to synthesize high-quality images based on a given reference image. In this work, we compare the proposed model with Improved DDPM [26] (IDDPM), which is the current state-of-the-art method in the image generation task. It utilizes Vision Transformer (ViT) encoder blocks with Multi-Head Self-Attention after each down- and up-sample block. This approach reshapes the features by a factor of 2 using 4 down- and up-sample blocks.
Due to the architecture of IDDPM [26], it is impossible to generate images with a resolution other than 64\(\times\)64, unlike with the proposed model. Therefore, we up-scale the generated images using the Enhanced Super-Resolution GAN (ESRGAN) proposed by Wang et al. [27]. ESRGAN [27] is based on SRGAN [28], which was the first GAN used in single-image super-resolution. ESRGAN [27] utilizes the Residual-in-Residual Dense Block (RRDB) as the main building block of the model. Recently, a new work has been proposed [29] for retinal image super-resolution using a ViT and a Convolutional Neural Network (CNN). Qiu et al. [30] proposed an Improved GAN with a novel residual attention block for more accurate generation.

### _Retinal Image Segmentation_

Li et al. [4] proposed the UNet-based [5] IterNet that adopts several iterations of a mini-UNet. In each iteration, the features from previous steps are shared using short and long skip connections. In the Attention Guided Network (AGNet) [31], the authors proposed a method based on M-Net [32] with an attention-guided filter [33] that transfers structural information from low-level feature maps to high-level ones. Jiang et al. [34] proposed the Multi-Scale and Multi-Branch Network (MSMB-Net) for retinal image segmentation, utilizing atrous convolutions and skip connections for more efficient feature extraction. Li et al. [35], in the Triple Attention Network (TA-Net), embedded a channel self-attention encoder and spatial attention up-sampling blocks to improve the representation of target features and to train the model to pay more attention to the essential pixels. Zhang et al. [36] utilized a pyramid UNet with pyramid-scale aggregation blocks for feature aggregation at all levels. Recently, a few methods [37, 38] utilized the DDPM in image segmentation. For example, Amit et al. [38] employed a DDPM for natural image segmentation; for the input images, the authors used feature addition instead of concatenation after passing them through convolutional encoders. In another work [39], UNet with a Convolutional Block Attention Module (CBAM-UNet) has been proposed for more accurate retinal vessel segmentation.

## 3 Methodology

### _Background_

The DDPM uses two stages to generate samples: the forward and backward processes. In the forward diffusion process, we gradually add Gaussian noise to the input image through a series of \(T\) diffusion time steps. In the backward process, the model tries to denoise the input at the given diffusion step \(t\). In the forward diffusion, at each diffusion step \(t\), noise is added to the input variable \(x_{t-1}\) using the variance schedule \(\beta_{t}\) to acquire \(x_{t}\sim q(x_{t}|x_{t-1})\), where \(x_{0}\) is sampled from the data distribution \(q(x)\). To do so, we use the following formula:

\[q(x_{t}|x_{t-1})=\mathcal{N}(x_{t};\mu_{t}=\sqrt{1-\beta_{t}}x_{t-1},\Sigma_{t}=\beta_{t}I) \tag{1}\]

where \(\mathcal{N}\) is the Gaussian distribution, \(\mu_{t}\) is the mean, and \(\Sigma_{t}\) is the covariance, a diagonal matrix of \(\beta_{t}\) values since \(I\) is the identity matrix. With equation 1, it is possible to apply noise to \(x_{0}\) in a tractable way, step by step (\(1{:}T\)), to get \(x_{T}\). This posterior probability is defined as:

\[q(x_{1:T}|x_{0})=\prod_{t=1}^{T}q(x_{t}|x_{t-1}) \tag{2}\]

However, with these equations, to sample \(x_{T}\) we would have to apply \(q(x_{t}|x_{t-1})\) a total of \(T\) times repeatedly.
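To make this cost concrete, the following minimal PyTorch sketch applies the sequential forward process of equations 1 and 2 one step at a time; the linear schedule endpoints match the values \(\beta_{1}=0.0001\) and \(\beta_{T}=0.02\) stated later in this section, while \(T\) and the tensor shape are illustrative assumptions.

```python
import torch

# Sequential forward process q(x_t | x_{t-1}) from equation 1: each step
# scales the image by sqrt(1 - beta_t) and adds Gaussian noise with
# variance beta_t.
T = 1000
betas = torch.linspace(1e-4, 0.02, T)  # linear schedule beta_1 ... beta_T

def forward_sequential(x0: torch.Tensor) -> torch.Tensor:
    """Apply all T noising steps one by one (costly: T passes over the image)."""
    x = x0
    for t in range(T):
        eps = torch.randn_like(x)
        x = torch.sqrt(1.0 - betas[t]) * x + torch.sqrt(betas[t]) * eps
    return x  # approximately N(0, I) for large T
```

The reparametrization derived next collapses this loop into a single operation.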
Therefore, a reparametrization trick is used to sample \(x\) at any time step \(t\), as defined in the following formula:

\[\begin{split} x_{t}&=\sqrt{1-\beta_{t}}x_{t-1}+\sqrt{\beta_{t}}\epsilon_{t-1}\\ &=\sqrt{\alpha_{t}\alpha_{t-1}}x_{t-2}+\sqrt{1-\alpha_{t}\alpha_{t-1}}\epsilon_{t-2}\\ &=\dots\\ &=\sqrt{\bar{\alpha}_{t}}x_{0}+\sqrt{1-\bar{\alpha}_{t}}\epsilon_{0}\end{split} \tag{3}\]

where \(\alpha_{t}=1-\beta_{t}\), \(\bar{\alpha}_{t}=\prod_{s=1}^{t}\alpha_{s}\), and the noise terms \(\epsilon_{0},\dots,\epsilon_{t-2},\epsilon_{t-1}\sim\mathcal{N}(0,I)\). This reparametrization trick allows us to precompute \(\alpha_{t}\) and \(\bar{\alpha}_{t}\) for any diffusion step \(t\) and to sample \(x_{t}\) in one operation; to produce \(x_{t}\), we can use the following distribution:

\[x_{t}\sim q(x_{t}|x_{0})=\mathcal{N}(x_{t};\sqrt{\bar{\alpha}_{t}}x_{0},(1-\bar{\alpha}_{t})I) \tag{4}\]

In our work, \(\beta_{1}\) and \(\beta_{T}\) are 0.0001 and 0.02, respectively. In addition, we utilized the cosine noise scheduler [26] instead of the linear one used in the original DDPM work [10], as it has been shown to lead to faster convergence. In the reverse diffusion process, the model learns the distribution \(p_{\theta}(x_{t-1}|x_{t})\), which is an approximation of the original distribution \(q(x_{t-1}|x_{t})\). In this process, a deep learning method is utilized to gradually remove noise at each diffusion step from \(T\) to 1. This process can be defined as follows:

\[p_{\theta}(x_{t-1}|x_{t})=\mathcal{N}(x_{t-1};\mu_{\theta}(x_{t},t),\Sigma_{\theta}(x_{t},t)) \tag{5}\]

This reverse formula is applied to all diffusion steps to compute the trajectory \(p_{\theta}(x_{0:T})\) using equation 6:

\[p_{\theta}(x_{0:T})=p_{\theta}(x_{T})\prod_{t=1}^{T}p_{\theta}(x_{t-1}|x_{t}) \tag{6}\]

However, the output of a DDPM is not an image itself. The model takes the noisy image as input and predicts the noise. To sample an image from a complete noise input \(x_{T}\), we use formula 7:

\[x_{t-1}=\frac{1}{\sqrt{\alpha_{t}}}\left(x_{t}-\frac{1-\alpha_{t}}{\sqrt{1-\bar{\alpha}_{t}}}\epsilon_{\theta}(x_{t},t)\right)+\sqrt{\beta_{t}}z \tag{7}\]

where \(\epsilon_{\theta}(x_{t},t)\) is the noise predicted by the DDPM, and \(z\sim\mathcal{N}(0,I)\) if \(t\neq 1\), otherwise \(z=0\). Equation 7 is repeated \(T\) times, from \(t=T\) until \(t=1\), to acquire \(x_{0}\).

Fig. 1: Workflow of the proposed framework. In the first (I) step, we train a DDPM to generate vessel trees from noise. Next (II), another DDPM learns to generate retinal images from noise and a vessel semantic label-map. In the third (III) step, we perform super-resolution of the generated vessel trees and retinal images. Finally (IV), the biomedical segmentation model is utilized to segment the generated up-sampled retinal images for validation.

### _Overview of Proposed Framework_

Our work focuses on building a framework for retinal image and vessel tree generation, super-resolution, and segmentation. The workflow of the proposed method is shown in Fig. 1, which has four major steps. Firstly, we develop a denoising diffusion probabilistic model (DDPM) to generate semantic label maps from one-channel noise. Secondly, we apply an example-based generation technique for guided training of another DDPM to generate colorful fundus images from given vessel trees. In this step, the model synthesizes retinal images using noise with red, green, and blue (RGB) channels concatenated with a grayscale vessel tree; concatenation is performed at each diffusion time step \(t\), as shown in Fig. 2.
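As a companion to the background equations above, the sketch below precomputes \(\bar{\alpha}_{t}\) under the cosine schedule [26], draws \(x_{t}\) in one shot via equation 4, and applies one reverse update per equation 7. The `model` callable, the index convention (array index `t` standing for diffusion step \(t+1\)), and the shapes are placeholder assumptions, not the paper's released code.

```python
import math
import torch

T = 1000

def cosine_alpha_bar(t: int, s: float = 0.008) -> float:
    """Cosine schedule for the cumulative product alpha_bar_t [26]."""
    f = lambda u: math.cos((u / T + s) / (1 + s) * math.pi / 2) ** 2
    return f(t) / f(0)

alpha_bar = torch.tensor([cosine_alpha_bar(t) for t in range(T + 1)])
alphas = alpha_bar[1:] / alpha_bar[:-1]          # alpha_t, t = 1 ... T
betas = (1.0 - alphas).clamp(max=0.999)          # beta_t

def q_sample(x0: torch.Tensor, t: int):
    """Draw x_t directly from x_0 in a single operation (equation 4)."""
    eps = torch.randn_like(x0)
    xt = torch.sqrt(alpha_bar[t + 1]) * x0 + torch.sqrt(1 - alpha_bar[t + 1]) * eps
    return xt, eps  # eps is the training target for the noise predictor

@torch.no_grad()
def p_sample_step(model, xt: torch.Tensor, t: int) -> torch.Tensor:
    """One reverse update (equation 7): denoise x_t into x_{t-1}."""
    eps_theta = model(xt, t)                     # predicted noise
    mean = (xt - (1 - alphas[t]) / torch.sqrt(1 - alpha_bar[t + 1]) * eps_theta) \
           / torch.sqrt(alphas[t])
    z = torch.randn_like(xt) if t > 0 else torch.zeros_like(xt)
    return mean + torch.sqrt(betas[t]) * z
```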
After generating retinal images and vessel maps, the resulting images require up-scaling to improve the image quality for efficient image segmentation. The output of the DDPMs is 64\(\times\)64-pixel images. Therefore, we performed super-resolution of retinal images up to 512\(\times\)512 pixels using ESRGAN [27] with modifications for the specific tasks (RGB and binary image super-resolution). To up-sample the retinal images, we added the SSIM loss of equation 12 during training of the ESRGAN model to preserve the image structure. To perform super-resolution of vessel maps, we had to remove the Perceptual loss computed using VGG-19 [22] because it requires input images with 3 channels; however, we included a BCE loss, shown in equation 14, to evaluate the performance of binary image super-resolution. In the final step, we performed vessel segmentation of the proposed dataset using UNet [5].

### _ReTree Image Generation Models_

Retinal image generation consists of two stages: retinal vessel generation (1) and retinal image synthesis (2). In both tasks, we utilized DDPMs; the architectures of the proposed DDPMs are presented in Fig. 2 and Fig. 3. Both models comprise initial, down-sample, ViT encoder, bottleneck, and up-sample blocks. The initial block includes a 2D convolution with channel number, kernel size, and padding equal to 128, 3, and 1, respectively. After that, the input \(x_{t}\) is normalized and activated using group normalization and the Gaussian Error Linear Unit (GELU) [40]. Next, \(x_{t}\) and the diffusion step are sent to one down-sample block. The input \(x_{t}\) is down-scaled using one max pooling layer with a window size of 4 and two convolutional blocks. Each block is built with convolution, group normalization, GELU, convolution, and group normalization layers. Also, the diffusion step \(t\) is activated using the Sigmoid Linear Unit [41] (SiLU) and fed into a linear layer. The resulting embeddings are added to \(x_{t}\) and processed by a ViT encoder block with locality self-attention (LSA) [42]. The procedure with down-sampling and ViT blocks is repeated. The bottleneck is composed of three consecutive convolutional blocks.

Fig. 2: The general architecture of the proposed DDPMs. The first model generates semantic label maps from noise in a given number of time steps (\(T\)). At each time step \(t\), the model predicts the output from time step \(t-1\). In the second step, the model takes RGB noise and the output of the first model to generate fundus images. The image containing vessel trees is concatenated with the noisy input at each diffusion step \(t\). Each DDPM model includes initial, down-sample, ViT encoder, bottleneck, and up-sample blocks.

The up-scaling is performed using bilinear up-sampling with a factor of 2 and two convolutional blocks with a residual connection. Since down-sampling was performed twice with a factor of 4, the model includes four up-sampling blocks. During the up-scaling of \(x_{t}\), we need to add time embeddings acquired from \(t\) in the same way as in down-sampling. Finally, \(x_{t}\) is fed into the last convolutional layer with a kernel size of 1. The LSA was first proposed as a solution for small datasets by Lee et al. [42]. Nonetheless, when trained on medium-sized datasets, it has also outperformed multi-head self-attention, thanks to learnable temperature scaling. We employ LSA to train our model effectively with the mid-sized EyeQ dataset. The queries and keys matrices are multiplied in LSA to obtain \(R_{i,j}\). LSA uses diagonal masking to improve the attention process: as shown in equation 8, the diagonal elements are set to \(-\infty\). Using equation 9, we get the attention values for \(R\).

\[R_{i,j}^{Masked}=\begin{cases}R_{i,j}&i\neq j\\ -\infty&i=j\end{cases} \tag{8}\]

where \(R_{i,j}\) represents the components of the dot product between the matrix of queries and the transposed matrix of keys, \(R=q\times k^{T}\).

\[LSA=Softmax\left(\frac{R^{Masked}}{\tau}\right) \tag{9}\]

where \(\tau\) is the learnable temperature scaling, another feature of LSA that lets the softmax adjust its temperature during training.
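A minimal PyTorch sketch of LSA with diagonal masking and learnable temperature (equations 8 and 9) follows; the single-head layout, dimensions, and temperature initialization are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class LSA(nn.Module):
    """Locality self-attention [42]: masked diagonal + learnable temperature."""
    def __init__(self, dim: int):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim, bias=False)
        # Learnable temperature tau; sqrt(dim) mirrors standard scaled attention.
        self.tau = nn.Parameter(torch.tensor(float(dim) ** 0.5))

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, N, dim)
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        r = q @ k.transpose(-2, -1)                 # R = q k^T
        n = r.size(-1)
        mask = torch.eye(n, dtype=torch.bool, device=r.device)
        r = r.masked_fill(mask, float("-inf"))      # diagonal set to -inf (eq. 8)
        attn = torch.softmax(r / self.tau, dim=-1)  # temperature-scaled softmax (eq. 9)
        return attn @ v
```

For example, `LSA(128)(torch.randn(2, 64, 128))` returns a tensor of the same shape with each token forbidden from attending to itself.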
### _Repetitive Training Technique_

The proposed Repetitive Training Technique (RTT) is a simple yet effective approach that experimentally demonstrates a reduction in training time and an improvement in quantitative results. This technique aims to enhance the robustness of the proposed model against randomness and outliers. Specifically, when outlier noise is introduced, the backward diffusion process may yield unrealistic results. Therefore, re-training the model with the same data until the loss decreases is beneficial. By employing this technique, we observed a decrease in the number of epochs and total training time, along with a reduction in the generation of unrealistic images, leading to a lower FID score. In our experiments, the maximum number of repetitions is set to 5: if the current loss value surpasses the global minimum average loss across all epochs, the model is re-trained with the current data, repeating this process up to 5 times.
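The rule reduces to a small wrapper around the usual training step; the sketch below is one plausible reading of it, with the model, optimizer, and loss function left as placeholders (this is not the released training code).

```python
def train_epoch_rtt(model, loader, optimizer, loss_fn, best_loss: float,
                    max_repeats: int = 5) -> float:
    """One epoch with the Repetitive Training Technique.

    A batch is re-trained (up to `max_repeats` steps in total) while its loss
    stays above the lowest loss observed so far. Returns the updated best loss.
    """
    for batch in loader:
        for _ in range(max_repeats):
            optimizer.zero_grad()
            loss = loss_fn(model, batch)  # e.g. the noise-prediction loss (eq. 10)
            loss.backward()
            optimizer.step()
            if loss.item() <= best_loss:
                break  # loss no longer exceeds the global minimum; move on
        best_loss = min(best_loss, loss.item())
    return best_loss
```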
### _Loss Functions_

The DDPMs were trained with a combination of two loss functions: the sum of the L1, or Mean Absolute Error (MAE), and the Mean Squared Error (MSE) between the predicted noise and the applied noise, as shown below:

\[Gen_{Loss}(\epsilon_{\theta}(x_{t},t),z)=\frac{1}{N}\sum_{i=1}^{N}\left(|\epsilon_{\theta}-z|+(\epsilon_{\theta}-z)^{2}\right) \tag{10}\]

where \(N\) is the total number of image pixels, \(\epsilon_{\theta}\) is the predicted noise, and \(z\) is the actual applied noise. The combined loss produced more accurate results.

The super-resolution model was trained with a combination of five loss functions: L1, Structural Similarity Index Measure (SSIM), adversarial, Binary Cross Entropy (BCE), and Perceptual losses. SSIM computes the structural similarity between two images, the up-sampled \(X\) and the ground truth \(Y\), using the following equation:

\[SSIM(X,Y)=\frac{(2\mu_{X}\mu_{Y}+C_{1})(2\sigma_{XY}+C_{2})}{(\mu_{X}^{2}+\mu_{Y}^{2}+C_{1})(\sigma_{X}^{2}+\sigma_{Y}^{2}+C_{2})} \tag{11}\]

where \(\mu\) denotes the mean, \(\sigma^{2}\) the variance, \(\sigma_{XY}\) the covariance, and \(C_{1}\) and \(C_{2}\) are variables that stabilize the division. Using equation 11, we define the SSIM loss function in the following way:

\[SSIM_{Loss}(X,Y)=\frac{1-SSIM(X,Y)}{2} \tag{12}\]

The adversarial loss is calculated using the output of a discriminator model, which is responsible for classifying the input images as real or fake. The discriminator model used to filter the generated images contains an initial convolutional layer with a LeakyReLU activation function, 3 convolutional down-sampling blocks with Instance Normalization and LeakyReLU layers along with the ViT encoder blocks, and a final convolutional layer. Each down-sampling block down-scales the input by a factor of 2. The kernel size is 4, and the strides are 2, 1, 2, 2, and 1, respectively. The output prediction is activated using the Sigmoid function. The discriminator loss is defined below:

\[\begin{split} Adv_{Loss}(Y,X)=&-\mathbb{E}_{Y}[log(1-D(Y,X))]\\ &-\mathbb{E}_{X}[log(D(Y,X))]\end{split} \tag{13}\]

where \(D(Y,X)\) is the prediction given by the discriminator model. The Perceptual loss, which measures the distance between activated features, is implemented using VGG19-54 [22]; the distance is minimized using the MSE loss function.

Fig. 3: The detailed architecture of the proposed DDPM. It consists of down-sampling, ViT encoder, up-sampling, and bottleneck blocks.

The segmentation model is trained with the Binary Cross-Entropy (BCE) loss, computed using the following equation:

\[\begin{split} BCE_{Loss}=&-\frac{1}{N}\sum_{i=1}^{N}\big(x_{i_{t-1}}log(x_{i_{t-1}}^{\theta})\\ &+(1-x_{i_{t-1}})log(1-x_{i_{t-1}}^{\theta})\big)\end{split} \tag{14}\]

where \(N\) represents the number of pixels in the image.

## 4 Datasets and Experimental Setup

### _Datasets_

In this work, we utilized a total of 4 retinal datasets for the image generation, super-resolution, and segmentation tasks. At first, we trained UNet [5] with DRIVE [1], STARE [2], and CHASE DB1 [3] combined to segment retinal images. Next, we used this model to segment the EyeQ dataset [43], which comprises 28,792 RGB retinal images of 512\(\times\)512 pixels belonging to 3 quality classes: "Good", "Usable", and "Bad". There are 16,818 "Good", 6,434 "Usable", and 5,538 "Bad" images; additionally, the dataset contains images with diabetic retinopathy, glaucoma, and age-related macular degeneration. In step 1, the DDPM is trained with segmented images from the EyeQ dataset that belong to the "Good" quality class. In the next step, the same segmented images and the corresponding RGB fundus images were utilized for training the second DDPM to generate retinal images. Then, the Enhanced Super-Resolution GAN (ESRGAN) [27] was trained using the same image pairs from the "Good" quality EyeQ subset. Using these DDPMs and ESRGAN, we generated and up-scaled pairs of images for the ReTree dataset. Finally, the segmentation UNet [5] was trained with generated image pairs and tested using DRIVE, STARE, and CHASE DB1 to validate the efficiency of the proposed dataset. Table I shows detailed information about the datasets used for the experiment.

### _Experimental Setup_

The model has been implemented using the PyTorch framework. The hardware configuration is an Intel Core i7-10700f CPU, 32 GB RAM, and an NVIDIA GeForce RTX 2080 SUPER 8 GB GPU. In our experiments, we used the Adam optimizer, a batch size of 32, and a learning rate of \(10^{-5}\).

### _Evaluation Metrics_

We computed the FID score [11] to evaluate the quality and select realistic synthetic images. Additionally, we trained classification networks for generated vessel trees and retinal images using original and generated pictures combined. The output of these classification networks is a binary number, where 0 is a fake and 1 is an actual image. The threshold was experimentally set to 0.8, meaning the prediction should be above 0.8, to select the most likely results for the ReTree dataset. The FID score is an evaluation metric for generative models. It uses the following formula for calculating the distance:

\[FID=\|\mu_{1}-\mu_{2}\|^{2}+Tr\left(\sigma_{1}+\sigma_{2}-2\sqrt{\sigma_{1}\sigma_{2}}\right) \tag{15}\]

where \(\mu_{1}\) and \(\sigma_{1}\) refer to the mean and covariance of the generated images, while \(\mu_{2}\) and \(\sigma_{2}\) refer to the mean and covariance of the real images.
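As an illustration of equation 15, a small NumPy/SciPy sketch on precomputed feature arrays follows; extracting the Inception features themselves is assumed to happen elsewhere.

```python
import numpy as np
from scipy import linalg

def fid(feat_gen: np.ndarray, feat_real: np.ndarray) -> float:
    """Frechet Inception Distance (equation 15) from (N, D) feature arrays."""
    mu1, mu2 = feat_gen.mean(axis=0), feat_real.mean(axis=0)
    sigma1 = np.cov(feat_gen, rowvar=False)
    sigma2 = np.cov(feat_real, rowvar=False)
    covmean = linalg.sqrtm(sigma1 @ sigma2)  # matrix square root
    if np.iscomplexobj(covmean):
        covmean = covmean.real               # drop numerical imaginary parts
    return float(np.sum((mu1 - mu2) ** 2) + np.trace(sigma1 + sigma2 - 2 * covmean))
```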
The super-resolution network has been tested using SSIM (equation 11) and PSNR on the EyeQ dataset. The PSNR between a generated image \(X\) and a real image \(Y\) is computed using equation 16:

\[PSNR(X,Y)=10\log_{10}\left(\frac{MAX_{X}^{2}}{MSE}\right) \tag{16}\]

where \(MAX_{X}\) is the maximum possible pixel value and \(MSE\) is the mean squared error between \(X\) and \(Y\). The segmentation step was validated using statistical metrics, such as the Jaccard similarity index, precision, recall, F1-score, and accuracy. The Jaccard index calculates the similarity between two sets and is defined in equation 17:

\[J(X,Y)=\frac{|X\cap Y|}{|X\cup Y|} \tag{17}\]

F1-score, recall, precision, and accuracy are calculated in the following ways:

\[Recall=\frac{TP}{TP+FN} \tag{18}\]

\[Precision=\frac{TP}{TP+FP} \tag{19}\]

\[F1=\frac{2*Precision*Recall}{Precision+Recall} \tag{20}\]

\[Accuracy=\frac{TP+TN}{TP+TN+FP+FN} \tag{21}\]

where TP, TN, FP, and FN denote the numbers of true positive, true negative, false positive, and false negative values, respectively. In addition, we evaluated the segmentation results using MCC and kappa because they are more accurate at assessing classification methods: unlike precision, recall, F1-score, and accuracy, they do not suffer from sensitivity to class imbalance and asymmetry. They are calculated using equations 22 and 23:

\[MCC=\frac{TP*TN-FP*FN}{\sqrt{(TP+FP)(TP+FN)(TN+FP)(TN+FN)}} \tag{22}\]

\[kappa=\frac{Accuracy-Accuracy^{R}}{1-Accuracy^{R}} \tag{23}\]

\begin{table} \begin{tabular}{|c|c|c|c|} \hline **Dataset Name** & **No. of Images** & **Resolution** & **Class** \\ \hline \hline \multirow{3}{*}{EyeQ [43]} & 16,818 & 512\(\times\)512 & Good \\ & 6,434 & 512\(\times\)512 & Usable \\ & 5,538 & 512\(\times\)512 & Bad \\ \hline \multirow{2}{*}{DRIVE [1]} & 20 & 584\(\times\)565 & Train \\ & 20 & 584\(\times\)565 & Test \\ \hline \multirow{2}{*}{STARE [2]} & 10 & 700\(\times\)605 & Train \\ & 10 & 700\(\times\)605 & Test \\ \hline \multirow{2}{*}{CHASE DB1 [3]} & 14 & 999\(\times\)960 & Train \\ & 14 & 999\(\times\)960 & Test \\ \hline \end{tabular} \end{table} TABLE I: Details of the datasets used for the experiment.

The random accuracy \(Accuracy^{R}\) is calculated in the following way:

\[Accuracy^{R}=p_{1}\times p_{2}+(1-p_{1})\times(1-p_{2}) \tag{24}\]

\[p_{1}=\frac{TP+FN}{TP+TN+FP+FN} \tag{25}\]

\[p_{2}=\frac{TP+FP}{TP+TN+FP+FN} \tag{26}\]
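These metrics all reduce to arithmetic on the four confusion counts; the NumPy sketch below is a minimal illustration of equations 17 and 22-26, assuming binary masks as input.

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, gt: np.ndarray) -> dict:
    """Jaccard, MCC, and Cohen's kappa (equations 17, 22-26) for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = float(np.sum(pred & gt));  tn = float(np.sum(~pred & ~gt))
    fp = float(np.sum(pred & ~gt)); fn = float(np.sum(~pred & gt))
    n = tp + tn + fp + fn
    jaccard = tp / (tp + fp + fn)
    mcc = (tp * tn - fp * fn) / np.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    acc = (tp + tn) / n
    p1, p2 = (tp + fn) / n, (tp + fp) / n          # equations 25-26
    acc_r = p1 * p2 + (1 - p1) * (1 - p2)          # random accuracy (eq. 24)
    kappa = (acc - acc_r) / (1 - acc_r)
    return {"jaccard": jaccard, "mcc": mcc, "kappa": kappa, "accuracy": acc}
```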
## 5 Results and Discussion

### _ReTree dataset_

We propose a retinal segmentation dataset (ReTree), which consists of images belonging to 2 classes: colored fundus images and corresponding binary vessel maps. The total number of images in each class is 30,000. The dataset is split into train, validation, and test sets with 28,800, 200, and 1,000 images, respectively. The image resolution is 512\(\times\)512 pixels. The proposed dataset has been extensively evaluated using quantitative and qualitative measures. To validate the effectiveness of our dataset, we trained the biomedical segmentation UNet [5] model with it and tested the model with real images. The results indicate an improvement over training with solely the real images, since the number of real images is too small to train a deep learning model efficiently. Fig. 4 presents the qualitative comparison with EyeQ [43], where we can see the similarity and realism of the generated images, which share qualities such as vessel structure, an accurate disk/cup region, and realistic color.

### _Qualitative Results_

In our experiments, we compared our method with the GAN by Costa et al. [19], since the authors of most of the other methods that focused on retinal image generation did not publish their implementations and generated datasets. As a result, we could not compare our method with these works, as we did not receive any positive reply from the corresponding authors. However, we compared the proposed method with Improved DDPM (IDDPM) [26] in the ablation study. In addition, the proposed model does not suffer from the mode collapse that is present in GANs; as a result, the images obtained by our model are diverse in terms of color and overall structure.

Fig. 4: The generated results of the proposed method (on the right) along with real images from the EyeQ [43] dataset (on the left). The images generated by our ReTree method have similar qualities to real retinal images.

Fig. 5: The qualitative comparison of the proposed method (on the right) with another GAN-based [19] solution (on the left). The GAN could not converge, leading to low-quality results compared to ours. Alternatively, images generated by our DDPM have an accurate vessel and disk/cup structure and realistic and diverse colors.

Fig. 5 shows the qualitative comparison with the other GAN-based method [19]; as we can observe, the GAN-based model has failed to converge, since the generated images lack a clear visual vessel structure. In addition, the generated vessel maps lack the visual clarity needed for retinal segmentation tasks. Additionally, we validated the proposed ReTree dataset using UNet [5]. To do so, we trained UNet with generated images from the ReTree dataset. Then we tested it using real images from three publicly available datasets: DRIVE [1], STARE [2], and CHASE DB1 [3]. The qualitative results are presented in Fig. 6. As we can observe, using ReTree images together with images from the original datasets is best, leading to better qualitative results. We can also observe that the GAN method [19] failed in segmentation, while GAN [19] + original dataset improved the segmentation process. When trained with the ReTree dataset alone, UNet still performs well in vessel segmentation; however, it is outperformed by ReTree + original dataset.

### _Quantitative Results_

Table II shows the FID scores along with the single image generation time (SIGT) obtained by the GAN-based [19] model and ours. As we can observe, the proposed method outperforms the other technique with a lower FID; in particular, the FID between ReTree and EyeQ images is more than 3 times lower than that of the GAN-based method. In addition, the GAN takes 6.90 seconds to generate a single image pair, while ours generates a single pair in 6.23 seconds. The proposed architecture also significantly decreased the generation time, by a factor of 2 compared to IDDPM [26]. Table III presents the second quantitative evaluation stage for the proposed ReTree dataset. In this stage, we performed retinal vessel segmentation using UNet [5]. First, it was trained with original images from the three publicly available datasets DRIVE [1], STARE [2], and CHASE DB1 [3]. Next, it was trained with generated images only, and then with generated + original images from each dataset. The three UNet versions were tested with original images from the three datasets during the evaluation. As we can observe from Table III, the proposed dataset performs best when combined with the original datasets.
### _Ablation Study_

To validate the efficiency and performance of the proposed DDPM, we performed an extensive ablation study in which we compared our method with Improved DDPM [26], trained in the original setup using the same two-stage generation; next, we trained this model with the proposed Repetitive Training Technique (RTT). Additionally, we trained the proposed DDPM without RTT. The training of UNet [5] has been performed using generated up-scaled images with resolutions of \(256\times 256\) and \(512\times 512\) pixels, alone and combined with training data from the original corresponding retinal segmentation datasets. Table IV shows the FID values, SIGT, and total training time. As we can see, embedding RTT in the training process significantly reduces the total training time; for instance, RTT reduces the training time by 32.7% from 16.2 to 10.9 hours for IDDPM [26] and by 41.8% from 14.27 to 8.3 hours for the proposed model. Additionally, the proposed model takes less time to train and to generate a single sample; IDDPM requires 12.53 seconds, which is twice as long as the proposed model. RTT also reduced the FID values for both models; however, the proposed model has the lowest FID score. The models have also been trained and tested using the segmentation datasets. Table V compares the segmentation results using the three datasets. We performed \(\times 4\) and \(\times 8\) super-resolution of the generated images. The same models have been trained in combination with the three segmentation datasets. As we can observe, the proposed model and training technique significantly outperform the other methods for both super-resolution factors. In particular, the proposed model with RTT obtained the highest values of the Jaccard similarity score, MCC, Cohen's kappa, F1-score, precision, recall, and accuracy, which means our method results in the highest similarity and positive correlation between segmentation classes. IDDPM [26] without RTT obtained the second-best quantitative results in DRIVE [1] and STARE [2]. However, it is outperformed by the same model with RTT in CHASE DB1 [3]. Fig. 7 shows the qualitative comparison between these methods; as we can observe, utilizing our architecture along with RTT leads to better qualitative results than the others. Therefore, these results present the effectiveness of the proposed DDPM architecture and RTT, which lead to better quantitative, qualitative, and computational results. Furthermore, we conducted experiments with RTT to examine its impact on the training of other deep learning methods. Specifically, we applied RTT to train UNet [5] for retinal image segmentation. The results are presented in Table VI. As observed, RTT has a positive effect on UNet, leading to an increase in almost all quantitative metrics.

\begin{table} \begin{tabular}{|c c c|} \hline **Method** & **FID** & **SIGT (sec)** \\ \hline GAN [19] & 162.50 & 6.90 \\ \hline Ours & **48.45** & **6.23** \\ \hline \end{tabular} \end{table} TABLE II: Quantitative comparison with GAN [19] in terms of Frechet Inception Distance (FID) and Single Image Generation Time (SIGT) in seconds.

Fig. 6: The qualitative comparison of the segmentation results. From left to right: original retinal image, Ground Truth, segmentation results of UNet [5] trained with the original dataset, images generated by GAN [19], images generated by our method, GAN [19] + original dataset combined, and ours + original dataset combined. As we can observe, our method performs best when combined with the original dataset.
The most notable improvement can be seen in the STARE dataset [2], where all quantitative metrics show an average increase of 3.42%, while RTT reduces the training time by an average of 28.47% across all datasets.

### _Limitations_

The proposed DDPMs can produce realistic retinal images along with vessel maps. However, since DDPMs use random noise to generate samples, they may generate unrealistic images. This is caused by the fact that DDPMs rely on simplifying assumptions about the probability distribution underlying the input data, such as a Gaussian distribution or a specific degree of smoothness.

\begin{table} \begin{tabular}{|c|c c c|} \hline **Method** & **FID** & **SIGT (sec)** & **Train time (hours)** \\ \hline \hline IDDPM [26] w/o RTT & 68.959 & 6.156 + 6.375 & 8.8 + 7.4 \\ \hline IDDPM [26] w RTT & 64.32 & 6.156 + 6.375 & 5.4 + 5.1 \\ \hline Ours w/o RTT & 55.49 & 2.97 + 3.26 & 7.07 + 7.2 \\ \hline Ours w RTT & **48.45** & **2.97 + 3.26** & **4.29 + 4.01** \\ \hline \end{tabular} \end{table} TABLE IV: Quantitative comparison with the original IDDPM [26], IDDPM + RTT, ours without RTT, and ours + RTT in terms of Frechet Inception Distance (FID), Single Image Generation Time (SIGT), and training time (in hours) for the two stages of generation.

\begin{table} \begin{tabular}{|c c c c c c c c c|} \hline **Test data** & **Train data** & **Jaccard \(\uparrow\)** & **MCC \(\uparrow\)** & **Kappa \(\uparrow\)** & **F1-score \(\uparrow\)** & **Precision \(\uparrow\)** & **Recall \(\uparrow\)** & **Accuracy \(\uparrow\)** \\ \hline \hline \multirow{5}{*}{DRIVE [1]} & DRIVE [1] + GAN [19] & 0.4419 & 0.6233 & 0.5833 & 0.6065 & **0.9017** & 0.4662 & 0.9496 \\ & DRIVE [1] + ReTree & **0.6161** & **0.7449** & **0.7414** & **0.7620** & 0.8601 & 0.7162 & **0.9723** \\ & GAN [19] & 0.0885 & 0.2252 & 0.1430 & 0.1585 & 0.7497 & 0.0928 & 0.9190 \\ & ReTree & 0.6023 & 0.7899 & 0.7305 & 0.7506 & 0.6714 & **0.896** & 0.9621 \\ & DRIVE [1] & 0.5042 & 0.6639 & 0.6445 & 0.6681 & 0.8517 & 0.5563 & 0.9532 \\ \hline \multirow{5}{*}{STARE [2]} & STARE [2] + GAN [19] & 0.5094 & 0.6570 & 0.6410 & 0.6616 & **0.7912** & 0.5998 & 0.9581 \\ & STARE [2] + ReTree & **0.5864** & **0.7240** & **0.7162** & **0.7355** & 0.7900 & **0.7303** & **0.9632** \\ & GAN [19] & 0.0589 & 0.1521 & 0.0928 & 0.1084 & 0.5548 & 0.0654 & 0.9238 \\ & ReTree & 0.4629 & 0.6324 & 0.6218 & 0.6455 & 0.7015 & 0.6324 & 0.9531 \\ & STARE [2] & 0.4557 & 0.9918 & 0.5840 & 0.6123 & 0.6474 & 0.6068 & 0.9456 \\ \hline \multirow{5}{*}{CHASE DB1 [3]} & CHASE DB1 [3] + GAN [19] & 0.4593 & 0.6052 & 0.6040 & 0.6291 & 0.6413 & 0.6210 & 0.9530 \\ & CHASE DB1 [3] + ReTree & **0.5394** & **0.6686** & **0.6655** & **0.6992** & **0.7194** & **0.7007** & **0.9649** \\ & GAN [19] & 0.0913 & 0.2277 & 0.1519 & 0.1663 & 0.6320 & 0.0972 & 0.9380 \\ & ReTree & 0.3580 & 0.5146 & 0.5005 & 0.5256 & 0.6700 & 0.4355 & 0.9497 \\ & CHASE DB1 [3] & 0.4319 & 0.5793 & 0.5716 & 0.6030 & 0.5308 & 0.7033 & 0.9405 \\ \hline \end{tabular} \end{table} TABLE III: Quantitative comparison for vessel segmentation using the DRIVE [1], STARE [2], and CHASE DB1 [3] datasets. First, we combined the original training sets with GAN [19] and ours to train UNet [5]. Next, we trained UNet [5] with images generated by GAN [19] and our method without original data. Finally, we trained UNet [5] with the original datasets. The results were evaluated with testing data from the corresponding original datasets. The **bold** and underlined numbers represent the best and second-best results.
However, these assumptions may not necessarily hold in practical scenarios, and this can lead to the generation of unrealistic images. As shown in Fig. 8, some images may have two retinal cups, uneven illumination, color distortion, or missing retinal cups. To overcome these limitations, we trained a discriminative model using real images from the EyeQ [43] dataset and a subset of 10,000 generated images from the proposed dataset to classify real and generated images. The output of this discriminator is a continuous prediction value between 0 and 1, where 0 is a generated and 1 is a real image. The threshold has been set to 0.8, which allowed us to keep only realistically-looking retinal images in the dataset and helped to remove the unrealistic images from it.

Fig. 7: The qualitative comparison of the segmentation results. Left to right: original retinal image, Ground Truth (G.T.), segmentation results of UNet [5] trained with images generated by IDDPM [26], IDDPM [26] + original dataset combined, ours, and ours + original dataset combined. As we can observe, our method performs best when combined with the original dataset.

\begin{table} \begin{tabular}{|c c c c c c c c c c|} \hline **Dataset** & **RTT** & **Jaccard** & **MCC** & **Kappa** & **F1** & **Recall** & **Precision** & **Accuracy** & **Time (min)** \\ \hline \hline \multirow{2}{*}{DRIVE [1]} & ✗ & 0.6303 & 0.5244 & **0.7584** & **0.7706** & **0.7991** & 0.7753 & 0.8645 & 17.273 \\ & ✓ & **0.6417** & **0.7840** & 0.7525 & 0.7771 & 0.7860 & **0.7985** & **0.8635** & **13.48** \\ \hline \multirow{2}{*}{STARE [2]} & ✗ & 0.5712 & 0.7426 & 0.6991 & 0.7721 & 0.7744 & 0.7848 & 0.5579 & 17.988 \\ & ✓ & **0.6402** & **0.7314** & **0.7278** & **0.7544** & & **0.7799** & **0.8635** & **13.27** \\ \hline \multirow{2}{*}{CHASE DB1 [3]} & ✗ & 0.5960 & 0.7323 & 0.7299 & 0.7424 & 0.7948 & 0.7986 & 0.8656 & 14.73 \\ & ✓ & **0.6432** & **0.5144** & **0.7290** & **0.5577** & **0.8279** & 0.7305 & **0.8900** & **10.36** \\ \hline \end{tabular} \end{table} TABLE VI: Quantitative evaluation of UNet [5] with and without the Repetitive Training Technique and total training time in minutes.

## 6 Conclusion

In this work, our main objective was to develop a novel retinal image segmentation dataset generated with the current state-of-the-art class of generation models, namely Denoising Diffusion Probabilistic Models. Additionally, we propose a novel lightweight architecture and a training technique for DDPM that significantly improve the computational, qualitative, and quantitative performance of DDPMs. The proposed DDPM can be trained with higher resolution images, such as 128\(\times\)128 and 256\(\times\)256, compared to the original DDPM that generates 64\(\times\)64-pixel images. This work involves four steps: retinal vessel generation (1), retinal image synthesis (2), single image super-resolution (3), and retinal vessel segmentation (4). The proposed ReTree dataset was extensively evaluated quantitatively and qualitatively. In addition, it was compared with real retinal segmentation datasets. The results prove the superiority of our dataset over other manually collected datasets. In future works, we intend to improve the generation ability of DDPM, overcoming its limitations mentioned in Section 5.5.
The proposed dataset and the source code of this work are available online for further evaluation and reproduction of the test results.

### Acknowledgments

This work is partially supported by the Scientific and Technological Research Council of Turkey (TUBITAK) under the 2232 Outstanding Researchers program, Project No. 118C301.

### Availability of Source Code and Data

Our developed dataset and source code are available at [https://github.com/AAleka/retree](https://github.com/AAleka/retree).

### Compliance with Ethical Standards

This article does not contain any studies with human participants and/or animals performed by any of the authors.

### Conflict of Interest

The authors confirm that no actual or potential conflict of interest is related to this article.
2303.12013
phi-FEM for the heat equation: optimal convergence on unfitted meshes in space
Thanks to a finite element method, we solve numerically parabolic partial differential equations on complex domains by avoiding mesh generation, using a regular background mesh that does not fit the domain and its real boundary exactly. Our technique follows the phi-FEM paradigm, which supposes that the domain is given by a level-set function. In this paper, we prove a priori error estimates in the l2(H1) and linf(L2) norms for an implicit Euler discretization in time. We give numerical illustrations to highlight the performance of phi-FEM, which combines optimal convergence accuracy, an easy implementation process, and speed.
Michel Duprez, Vanessa Lleras, Alexei Lozinski, Killian Vuillemot
2023-03-21T16:50:21Z
http://arxiv.org/abs/2303.12013v1
# \(\phi\)-FEM for the heat equation: optimal convergence on unfitted meshes in space

###### Abstract

Thanks to a finite element method, we solve numerically parabolic partial differential equations on complex domains by avoiding mesh generation, using a regular background mesh that does not fit the domain and its real boundary exactly. Our technique follows the \(\phi\)-FEM paradigm, which supposes that the domain is given by a level-set function. In this paper, we prove _a priori_ error estimates in the \(l^{2}(H^{1})\) and \(l^{\infty}(L^{2})\) norms for an implicit Euler discretization in time. We give numerical illustrations to highlight the performance of \(\phi\)-FEM, which combines optimal convergence accuracy, an easy implementation process, and speed.

## 1 Introduction

The classical finite element method for elliptic and parabolic problems (see e.g. [1]) needs a computational mesh fitting the boundary of the physical domain. In some applications in engineering or bio-mechanics, the construction of such meshes may be very time-consuming or even impossible. Alternative approaches, such as the Fictitious Domain method [2] or Immersed Boundary Methods (IBM) (see e.g. [3] for a review), can work on unfitted meshes but are usually not very precise. More recent variants, such as CutFEM [4], demonstrate optimal convergence orders but are less straightforward to implement than the original IBM. In particular, CutFEM needs special quadrature rules on the cells cut by the boundary. Finally, we can also mention the Shifted Boundary Method [5], which avoids the non-trivial integration by introducing a boundary correction based on a Taylor expansion.

A new finite element method on unfitted meshes, named \(\phi\)-FEM, combining optimal convergence and ease of implementation, was recently proposed in [6, 7]. Initially developed for stationary elliptic PDEs, it has been extended in [8] to a broader class of equations, including time-dependent parabolic problems, without any theoretical analysis. The goal of the present note is to provide such an analysis in the case of the Heat-Dirichlet problem

\[\partial_{t}u-\Delta u=f\ \text{in}\ \Omega\times(0,T),\ \ u=0\ \text{on}\ \Gamma\times(0,T),\ \ u_{|t=0}=u^{0}\ \text{in}\ \Omega, \tag{1}\]

where \(T>0\) and \(\Omega\subset\mathbb{R}^{d}\), \(d=2,3\), is a bounded domain with a smooth boundary \(\Gamma\) given by a level-set function on \(\mathbb{R}^{d}\):

\[\Omega:=\{\phi<0\}\qquad\text{and}\qquad\Gamma:=\{\phi=0\}\,. \tag{2}\]

(Note that some FEMs on unfitted meshes have been developed for such problems, for example in [9, 10].) For the discretization in time, we use the implicit Euler scheme. The Dirichlet boundary conditions are imposed via a product with the level-set function \(\phi\). An appropriate stabilization is introduced to the finite element discretization to obtain well-posed problems. A somewhat unexpected feature of this stabilization is that it works under a constraint on the steps in time and space of the type \(\Delta t\geqslant ch^{2}\). This does not affect the practical interest of the scheme since it is normally intended to be used in the regime \(\Delta t\sim h\). We shall provide _a priori_ error estimates for this scheme in the \(l^{2}(H^{1})\) norm of similar orders as for the standard FEM, cf. [1]. We also study the \(l^{\infty}(L^{2})\) convergence and prove a slightly suboptimal theoretical bound for it, while it turns out to be optimal numerically.

## 2 Definitions, assumptions, description of the scheme and the main result
We assume that \(\Omega\) lies inside a box \(\mathcal{O}\subset\mathbb{R}^{d}\) and that \(\Omega\) and \(\Gamma\) are given by (2). The box \(\mathcal{O}\) is covered by a simple quasi-uniform simplicial (typically Cartesian) background mesh denoted by \(\mathcal{T}_{h}^{\mathcal{O}}\). We introduce the active computational mesh \(\mathcal{T}_{h}:=\left\{T\in\mathcal{T}_{h}^{\mathcal{O}}:T\cap\{\phi_{h}<0\}\neq\emptyset\right\}\) on \(\Omega_{h}=\left(\cup_{T\in\mathcal{T}_{h}}T\right)^{o}\), the subdomain of \(\mathcal{O}\) composed of the mesh cells intersecting \(\Omega\), cf. Fig. 1 (right). Here, \(\phi_{h}\) is a piecewise polynomial interpolation of \(\phi\) in the finite element space of degree \(l\in\mathbb{N}^{*}\) on \(\mathcal{T}_{h}^{\mathcal{O}}\). We shall also need the submesh \(\mathcal{T}_{h}^{\Gamma}\) containing the elements of \(\mathcal{T}_{h}\) that are cut by the approximate boundary \(\Gamma_{h}:=\{\phi_{h}=0\}\), i.e. \(\mathcal{T}_{h}^{\Gamma}=\{T\in\mathcal{T}_{h}:T\cap\Gamma_{h}\neq\emptyset\}\). Finally, we denote by \(\mathcal{F}_{h}^{\Gamma}\) the set of internal facets \(E\) of the mesh \(\mathcal{T}_{h}\) belonging to the cells of \(\mathcal{T}_{h}^{\Gamma}\): \(\mathcal{F}_{h}^{\Gamma}:=\{E\text{ (internal facet of }\mathcal{T}_{h})\text{ such that }\exists\,T\in\mathcal{T}_{h}:T\cap\Gamma_{h}\neq\emptyset\text{ and }E\in\partial T\}\).

Introduce a uniform partition of \([0,T]\) into time steps \(0=t_{0}<t_{1}<\ldots<t_{N}=T\) with \(t_{n}=n\Delta t\). The basic idea of \(\phi\)-FEM is to introduce a new unknown \(w=w(x,t)\) and to set \(u=\phi w\), so that the Dirichlet condition \(u=0\) is automatically satisfied on \(\Gamma\) since \(\phi\) vanishes there. Using an implicit Euler scheme to discretize (1) in time and denoting \(f^{n}(\cdot)=f(\cdot,t_{n})\), we get the following discretization in time: given \(u^{n}=\phi w^{n}\), find \(u^{n+1}=\phi w^{n+1}\) such that

\[\frac{\phi w^{n+1}-\phi w^{n}}{\Delta t}-\Delta(\phi w^{n+1})=f^{n+1}\,. \tag{3}\]

To discretize in space, we introduce the finite element space of degree \(k\) on \(\Omega_{h}\),

\[V_{h}^{(k)}=\{v_{h}\in H^{1}(\Omega_{h})\,:\,v_{h}|_{T}\in\mathbb{P}_{k}(T),\,\forall\,T\in\mathcal{T}_{h}\}\,,\]

for some \(k\geqslant 1\). Supposing that \(f\) and \(u^{0}\) are actually well defined on \(\Omega_{h}\) (rather than on \(\Omega\) only), we can finally introduce the \(\phi\)-FEM scheme for (1) as follows: find \(w_{h}^{n+1}\in V_{h}^{(k)}\), \(n=0,1,\ldots,N-1\), such that for all \(v_{h}\in V_{h}^{(k)}\)

\[\int_{\Omega_{h}}\frac{\phi_{h}w_{h}^{n+1}}{\Delta t}\phi_{h}v_{h}+\int_{\Omega_{h}}\nabla(\phi_{h}w_{h}^{n+1})\cdot\nabla(\phi_{h}v_{h})-\int_{\partial\Omega_{h}}\frac{\partial}{\partial n}(\phi_{h}w_{h}^{n+1})\phi_{h}v_{h}\\ +\sigma h\sum_{E\in\mathcal{F}_{h}^{\Gamma}}\int_{E}\left[\frac{\partial(\phi_{h}w_{h}^{n+1})}{\partial n}\right]\left[\frac{\partial(\phi_{h}v_{h})}{\partial n}\right]-\sigma h^{2}\sum_{K\in\mathcal{T}_{h}^{\Gamma}}\int_{K}\left(\frac{\phi_{h}w_{h}^{n+1}}{\Delta t}-\Delta(\phi_{h}w_{h}^{n+1})\right)\Delta(\phi_{h}v_{h})\\ =\int_{\Omega_{h}}\left(\frac{u_{h}^{n}}{\Delta t}+f^{n+1}\right)\phi_{h}v_{h}-\sigma h^{2}\sum_{K\in\mathcal{T}_{h}^{\Gamma}}\int_{K}\left(\frac{u_{h}^{n}}{\Delta t}+f^{n+1}\right)\Delta(\phi_{h}v_{h}) \tag{4}\]

with \(u_{h}^{n}=\phi_{h}w_{h}^{n}\) for \(n\geqslant 1\) and \(u_{h}^{0}\in V_{h}^{(k)}\) an interpolant of \(u^{0}\). Moreover, \(\phi_{h}\) is the piecewise polynomial interpolation of \(\phi\) in \(V_{h}^{(l)}\), with \(l\geqslant k\). This scheme contains two stabilization terms: the ghost stabilization (the sum over the facets in \(\mathcal{F}_{h}^{\Gamma}\)) as in [11], and a least-squares stabilization (the terms multiplied by \(\sigma h^{2}\)) that reinforces (3) on the cells of \(\mathcal{T}_{h}^{\Gamma}\).
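To illustrate the mesh sets defined above, the following small NumPy sketch classifies the cells of a Cartesian background grid into the active mesh \(\mathcal{T}_{h}\) and the cut cells \(\mathcal{T}_{h}^{\Gamma}\) from nodal values of a level-set; the circular level-set, the grid size, and the vertex-sign test are illustrative assumptions (an actual \(\phi\)-FEM code would work on a simplicial mesh with an interpolated \(\phi_{h}\)).

```python
import numpy as np

# Background Cartesian grid on the box O = (0,1)^2, with a circular domain
# Omega = {phi < 0} given by phi(x, y) = (x - .5)^2 + (y - .5)^2 - .3^2.
n = 32
x = np.linspace(0.0, 1.0, n + 1)
X, Y = np.meshgrid(x, x, indexing="ij")
phi = (X - 0.5) ** 2 + (Y - 0.5) ** 2 - 0.3 ** 2  # nodal level-set values

# Corner values of phi for each cell, shape (n, n, 4).
corners = np.stack(
    [phi[:-1, :-1], phi[1:, :-1], phi[:-1, 1:], phi[1:, 1:]], axis=-1)

# Active mesh T_h: cells touching {phi_h < 0}; cut cells T_h^Gamma: cells on
# which phi_h changes sign, i.e. intersected by Gamma_h = {phi_h = 0}.
active = (corners < 0).any(axis=-1)
cut = active & (corners >= 0).any(axis=-1)

print(f"active cells: {active.sum()}, cut cells: {cut.sum()}")
```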
**Remark 1**.: _Our approach can easily be generalized to non-homogeneous Dirichlet boundary conditions \(u=u_{D}\) on \(\Gamma\times(0,T)\). We can then pose \(u_{h}^{n}=\phi_{h}w_{h}^{n}+I_{h}u_{g}(\cdot,t_{n})\), where \(u_{g}\) is some lifting of \(u_{D}\) from \(\Gamma\) to \(\Omega_{h}\) and \(I_{h}\) stands for a finite element interpolation to \(V_{h}^{(k)}\). Scheme (4) should then be modified accordingly, replacing \(\phi_{h}w_{h}^{n+1}\) by \(\phi_{h}w_{h}^{n+1}+I_{h}u_{g}(\cdot,t_{n+1})\), which results in some additional terms on the right-hand side._

We recall from [6] the assumptions on the domain and on the mesh required in the theoretical study of the convergence of the \(\phi\)-FEM scheme. These assumptions are satisfied if the boundary \(\Gamma\) is regular enough and the mesh \(\mathcal{T}_{h}\) is fine enough.

**Assumption 1**.: _The boundary \(\Gamma\) can be covered by open sets \(\mathcal{O}_{i}\), \(i=1,\ldots,I\), on which we can introduce local coordinates \(\xi_{1},\ldots,\xi_{d}\) with \(\xi_{d}=\phi\), such that, up to order \(k+1\), all the partial derivatives \(\partial^{\alpha}\xi_{i}/\partial x^{\alpha}\) and \(\partial x^{\alpha}/\partial^{\alpha}\xi_{i}\) are bounded by a constant \(C_{0}>0\). Thus, on \(\mathcal{O}\), \(\phi\) is of class \(C^{k+1}\), and there exists \(m>0\) such that \(|\phi|\geqslant m\) on \(\mathcal{O}\setminus\cup_{i=1,\ldots,I}\mathcal{O}_{i}\)._

**Assumption 2**.: _The approximate boundary \(\Gamma_{h}=\{\phi_{h}=0\}\) can be covered by element patches \(\{\Pi_{r}\}_{r=1,\ldots,N_{\Pi}}\) such that:_

* _Each patch_ \(\Pi_{r}\) _can be written_ \(\Pi_{r}=\Pi_{r}^{\Gamma}\cup T_{r}\) _with_ \(\Pi_{r}^{\Gamma}\subset\mathcal{T}_{h}^{\Gamma}\) _and_ \(T_{r}\in\mathcal{T}_{h}\setminus\mathcal{T}_{h}^{\Gamma}\)_. Moreover,_ \(\Pi_{r}\) _contains at most_ \(M\) _elements and these elements are connected;_
* \(\mathcal{T}_{h}^{\Gamma}=\cup_{r=1,\ldots,N_{\Pi}}\Pi_{r}^{\Gamma}\)_;_
* _Two patches_ \(\Pi_{r}\) _and_ \(\Pi_{s}\) _are disjoint if_ \(r\neq s\)_._

**Theorem 1**.: _Assume \(\Omega\subset\Omega_{h}\), \(l\geq k\), Assumptions 1-2, and \(f\in H^{1}(0,T;H^{k-1}(\Omega_{h}))\). Let \(u\in H^{2}(0,T;H^{k-1}(\Omega))\) be the exact solution to (1), \(u^{n}(\cdot)=u(\cdot,t_{n})\), and let \(w_{h}^{n}\) be the solution to (4) for \(n=1,\ldots,N\). For \(\sigma\) large enough, there exist \(c,C>0\) depending only on the regularity of the mesh \(\mathcal{T}_{h}\) and on the constants of Assumptions
1-2 (with \(C\) also depending on \(T\)), such that if \(\Delta t\geqslant ch^{2}\) then_ \[\left(\sum_{n=0}^{N}\Delta t|u^{n}-\phi_{h}w_{h}^{n}|_{H^{1}( \Omega)}^{2}\right)^{\frac{1}{2}}\leqslant C\|u^{0}-u_{h}^{0}\|_{L^{2}(\Omega_{h})}\\ +C(h^{k}+\Delta t)\left(\|u\|_{H^{2}(0,T;H^{k-1}(\Omega))}+\|f\|_{ H^{1}(0,T;H^{k-1}(\Omega_{h}))}\right)\] _and_ \[\max_{1\leqslant n\leqslant N}\|u^{n}-\phi_{h}w_{h}^{n}\|_{L^{2}( \Omega)}\leqslant C\|u^{0}-u_{h}^{0}\|_{L^{2}(\Omega_{h})}\\ +C(h^{k+\frac{1}{2}}+\Delta t)\left(\|u\|_{H^{2}(0,T;H^{k-1}(\Omega ))}+\|f\|_{H^{1}(0,T;H^{k-1}(\Omega_{h}))}\right)\,.\] **Remark 2**.: * _If_ \(k=1\)_, the norms on the right-hand side of the estimates above can be replaced by the norm of_ \(f\) _alone in_ \(H^{1}(0,T;L^{2}(\Omega_{h}))\)_. Indeed, recalling_ \(\Omega\subset\Omega_{h}\)_, this assumption on_ \(f\) _implies_ \(u\in H^{2}(0,T;L^{2}(\Omega))\cap H^{1}(0,T;H^{2}(\Omega))\)_, see e.g._ _[_12_, Theorems 5 and 6, Chapter 7.1]__. On the other hand, imposing such regularity on_ \(u\) _over_ \(\Omega\) _would not suffice to control the extension of_ \(f\) _outside of_ \(\Omega\)_, so that the regularity of_ \(f\) _on_ \(\Omega_{h}\) _has to be postulated anyway. This contrasts with the usual a priori estimates for standard FEM (see e.g._ _[_1_]__)._ * _If_ \(k>1\)_, we need to suppose the regularity of both_ \(u\) _and_ \(f\) _as stated above._ In the rest of the paper, the letter \(C\), possibly with subscripts, will stand for various constants depending on the mesh regularity, the constants from Ass. 1-2, and also on \(T\) (when specifically mentioned). Before the proof of Theorem 1, we recall some results from [6] about \(\phi\)-FEM for the Poisson equation with Dirichlet boundary conditions. **Lemma 1** (cf. [6, Lemma 3.7]).: _Consider the bilinear form_ \[a_{h}(u,v)=\int_{\Omega_{h}}\nabla u\cdot\nabla v-\int_{\partial\Omega_{h}} \frac{\partial u}{\partial n}v+\sigma h\sum_{E\in\mathcal{F}_{h}^{\Gamma}} \int_{E}\left[\frac{\partial u}{\partial n}\right]\left[\frac{\partial v}{ \partial n}\right]+\sum_{K\in\mathcal{T}_{h}^{\Gamma}}\sigma h^{2}\int_{K} \Delta u\,\Delta v.\] _Provided \(\sigma\) is chosen big enough, there exists an \(h\)-independent constant \(\alpha>0\) such that_ \[a_{h}(\phi_{h}v_{h},\phi_{h}v_{h})\geqslant\alpha|\phi_{h}v_{h}|_{H^{1}( \Omega_{h})}^{2},\quad\forall v_{h}\in V_{h}^{(k)}.\] **Lemma 2** (cf. [6, Theorem 2.3]).: _For any \(f\in H^{k-1}(\Omega_{h})\), let \(w_{h}\in V_{h}^{(k)}\) be the solution to_ \[a_{h}(\phi_{h}w_{h},\phi_{h}v_{h})=\int_{\Omega_{h}}f\phi_{h}v_{h}-\sigma h^{2 }\sum_{K\in\mathcal{T}_{h}^{\Gamma}}\int_{K}f\Delta(\phi_{h}v_{h})\] _and \(u\in H^{k+1}(\Omega)\) be the solution to_ \[-\Delta u=f\text{ in }\Omega,\quad u=0\text{ on }\Gamma\] _extended to \(\tilde{u}\in H^{k+1}(\Omega_{h})\) so that \(u=\tilde{u}\) on \(\Omega\) and \(\|\tilde{u}\|_{H^{k+1}(\Omega_{h})}\leqslant C\|u\|_{H^{k+1}(\Omega)} \leqslant C\|f\|_{H^{k-1}(\Omega_{h})}\). 
Provided \(\sigma\) is chosen big enough, there exists an \(h\)-independent constant \(C>0\) such that_ \[|\tilde{u}-\phi_{h}w_{h}|_{H^{1}(\Omega_{h})}\leqslant Ch^{k}\|f\|_{H^{k-1}( \Omega_{h})}\quad\text{and}\quad\|\tilde{u}-\phi_{h}w_{h}\|_{L^{2}(\Omega_{h} )}\leqslant Ch^{k+\frac{1}{2}}\|f\|_{H^{k-1}(\Omega_{h})}.\] **Remark 3**.: _This result is proven in [6] under the more stringent assumption \(f\in H^{k}(\Omega_{h})\) which was used to assure \(\tilde{u}\in H^{k+2}(\Omega_{h})\) and to provide an interpolation error of \(\tilde{u}\) by a product \(\phi_{h}w_{h}\). However, in [13, Lemma 6] we have proven a better interpolation estimate \(\|\tilde{u}-\phi_{h}I_{h}w\|_{H^{s}(\Omega_{h})}\leqslant Ch^{k+1-s}\|f\|_{H^ {k-1}(\Omega_{h})}\) (\(s=0,1\)) for \(\tilde{u}=\phi w\) and the Scott-Zhang interpolant \(I_{h}\). Thus, \(f\in H^{k-1}(\Omega_{h})\) is actually sufficient._ **Lemma 3**.: _For all \(v_{h}\in V_{h}^{(k)}\), there holds_ \[\|\phi_{h}v_{h}\|_{L^{2}(\Omega_{h})}\leqslant C_{P}|\phi_{h}v_{h}|_{H^{1}( \Omega_{h})}.\] Proof.: Let \(\tilde{\Omega}_{h}=\{\phi_{h}<0\}\). By the Poincare inequality, \[\|\phi_{h}v_{h}\|_{L^{2}(\tilde{\Omega}_{h})}\leqslant C\mathrm{diam}(\tilde{ \Omega}_{h})|\phi_{h}v_{h}|_{H^{1}(\tilde{\Omega}_{h})},\] and \(\mathrm{diam}(\tilde{\Omega}_{h})\leqslant\mathrm{diam}(\mathcal{O})\). Moreover, thanks to [6, Lemma 3.4], it holds \[\|\phi_{h}v_{h}\|_{L^{2}(\Omega_{h}\setminus\tilde{\Omega}_{h})}\leqslant\| \phi_{h}v_{h}\|_{L^{2}(\Omega_{h}^{\Gamma})}\leqslant Ch|\phi_{h}v_{h}|_{H^{1} (\Omega_{h}^{\Gamma})},\] where \(\Omega_{h}^{\Gamma}\) is the domain occupied by the mesh \(\mathcal{T}_{h}^{\Gamma}\). We conclude noting \(\Omega\subset\tilde{\Omega}_{h}\cup\Omega_{h}^{\Gamma}\). Proof of Theorem 1.: There exists a function \(\tilde{u}\in H^{2}(0,T;H^{k-1}(\Omega_{h}))\), an extension of \(u\) to \(\Omega_{h}\), such that \[\|\tilde{u}\|_{H^{2}(0,T;H^{k-1}(\Omega_{h}))}\leqslant C\|u\|_{H^{2}(0,T;H^{ k-1}(\Omega))}. \tag{5}\] Let \(w_{h}^{n}\) be the solution to our scheme, which we rewrite as \[\int_{\Omega_{h}}\phi_{h}\frac{w_{h}^{n+1}-w_{h}^{n}}{\Delta t} \phi_{h}v_{h}+a_{h}(\phi_{h}w_{h}^{n+1},\phi_{h}v_{h})-\sum_{T\in\mathcal{T}_{ h}^{\Gamma}}\sigma h^{2}\int_{T}\phi_{h}\frac{w_{h}^{n+1}-w_{h}^{n}}{\Delta t }\Delta(\phi_{h}v_{h})\\ =\int_{\Omega_{h}}f^{n+1}\phi_{h}v_{h}-\sum_{T\in\mathcal{T}_{h}^ {\Gamma}}\sigma h^{2}\int_{T}f^{n+1}\Delta(\phi_{h}v_{h}) \tag{6}\] for \(n\geqslant 1\) while \(\phi_{h}w_{h}^{0}\) should be replaced with \(u_{h}^{0}\) for \(n=0\). For any time \(t\in[0,T]\), introduce \(\tilde{w}_{h}(\cdot,t)=\tilde{w}_{h}\in V_{h}^{(k)}\), as in Lemma 2, with \(f\) replaced by \(f-\partial_{t}\tilde{u}\) evaluated at time \(t\): \[a_{h}(\phi_{h}\tilde{w}_{h},\phi_{h}v_{h})=\int_{\Omega_{h}}(f-\partial_{t} \tilde{u})\phi_{h}v_{h}-\sigma h^{2}\sum_{K\in\mathcal{T}_{h}^{\Gamma}}\int_{ K}(f-\partial_{t}\tilde{u})\Delta(\phi_{h}v_{h}). \tag{7}\] Let \(\tilde{w}_{h}^{n}=\tilde{w}_{h}(t_{n})\) and \(e_{h}^{n}:=\phi_{h}(w_{h}^{n}-\tilde{w}_{h}^{n})\) for \(n\geqslant 1\) and \(e_{h}^{0}:=u_{h}^{0}-\phi_{h}\tilde{w}_{h}^{0}\). 
Taking the difference between (6) and (7) at time \(t_{n+1}\), we get \[\int_{\Omega_{h}}\frac{e_{h}^{n+1}-e_{h}^{n}}{\Delta t}\phi_{h}v_{h}+a_{h}(e_{ h}^{n+1},\phi_{h}v_{h})-\sum_{T\in\mathcal{T}_{h}^{\Gamma}}\sigma h^{2}\int_{T} \frac{e_{h}^{n+1}-e_{h}^{n}}{\Delta t}\Delta(\phi_{h}v_{h})\\ =\int_{\Omega_{h}}\left(\partial_{t}\tilde{u}^{n+1}-\phi_{h} \frac{\tilde{w}_{h}^{n+1}-\tilde{w}_{h}^{n}}{\Delta t}\right)\phi_{h}v_{h}-\sum _{T\in\mathcal{T}_{h}^{\Gamma}}\sigma h^{2}\int_{T}\left(\partial_{t}\tilde{u} ^{n+1}-\phi_{h}\frac{\tilde{w}_{h}^{n+1}-\tilde{w}_{h}^{n}}{\Delta t}\right) \Delta(\phi_{h}v_{h}).\] Taking \(v_{h}=w_{h}^{n+1}-\tilde{w}_{h}^{n+1}\), i.e. \(\phi_{h}v_{h}=e_{h}^{n+1}\), applying the equality \[\|e_{h}^{n+1}\|_{L^{2}(\Omega_{h})}^{2}-(e_{h}^{n},e_{h}^{n+1})_{L^{2}(\Omega_{ h})}=\frac{\|e_{h}^{n+1}\|_{L^{2}(\Omega_{h})}^{2}-\|e_{h}^{n}\|_{L^{2}(\Omega_{h})}^{ 2}+\|e_{h}^{n+1}-e_{h}^{n}\|_{L^{2}(\Omega_{h})}^{2}}{2}\,,\] and estimating the terms in the RHS by Cauchy-Schwarz and inverse inequalities \[\|\Delta e_{h}^{n+1}\|_{L^{2}(T)}\leqslant Ch^{-2}\|e_{h}^{n+1}\|_{L^{2}(T)}\] we deduce that \[\frac{\|e_{h}^{n+1}\|_{L^{2}(\Omega_{h})}^{2}-\|e_{h}^{n}\|_{L^{2}( \Omega_{h})}^{2}+\|e_{h}^{n+1}-e_{h}^{n}\|_{L^{2}(\Omega_{h})}^{2}}{2\Delta t}+ \overbrace{a_{h}(e_{h}^{n+1},e_{h}^{n+1})}^{(I)}-\overbrace{\sigma h^{2}\int_ {\Omega_{h}^{\Gamma}}\frac{e_{h}^{n+1}-e_{h}^{n}}{\Delta t}\Delta e_{h}^{n+1}} ^{(II)}\\ \leqslant\underbrace{C\left\|\partial_{t}\tilde{u}^{n+1}-\phi_{h} \frac{\tilde{w}_{h}^{n+1}-\tilde{w}_{h}^{n}}{\Delta t}\right\|_{L^{2}(\Omega_{ h})}\|e_{h}^{n+1}\|_{L^{2}(\Omega_{h})}}_{(III)}. \tag{8}\] Thanks to the coercivity lemma 1, the term \((I)\) can be bounded from below by \(\alpha|e_{h}^{n+1}|_{H^{1}(\Omega_{h})}^{2}\). We now use the Young inequality (with some \(\varepsilon>0\)) and the inverse inequality \(\|\Delta e_{h}^{n+1}\|_{L^{2}(T)}\leqslant C_{I}h^{-1}|e_{h}^{n+1}|_{H^{1}(T)}\) to bound the term \((II)\): \[(I)-(II)\geqslant\alpha|e_{h}^{n+1}|_{H^{1}(\Omega_{h})}^{2}- \frac{\sigma h^{2}}{2\epsilon(\Delta t)^{2}}\|e_{h}^{n+1}-e_{h}^{n}\|_{L^{2}( \Omega_{h}^{\Gamma})}^{2}-\frac{\epsilon\sigma C_{I}^{2}}{2}|e_{h}^{n+1}|_{H^ {1}(\Omega_{h}^{\Gamma})}^{2}\\ \geqslant\frac{3}{4}\alpha|e_{h}^{n+1}|_{H^{1}(\Omega_{h})}^{2}- \frac{1}{2\Delta t}\|e_{h}^{n+1}-e_{h}^{n}\|_{L^{2}(\Omega_{h}^{\Gamma})}^{2}, \tag{9}\] where we have chosen \(\epsilon\) so that \(\epsilon\sigma C_{I}^{2}/2=\alpha/4\) and then assumed \(\sigma h^{2}/(\epsilon\Delta t)\leqslant 1\). This will allow us to control the negative term above by the similar positive term in (8), and leads to the restriction \(\Delta t\geqslant ch^{2}\) with \(c=\sigma/\epsilon\). We turn now to the RHS of (8), i.e. term \((III)\). By triangle inequality \[\left\|\partial_{t}\tilde{u}^{n+1}-\phi_{h}\frac{\tilde{w}_{h}^{ n+1}-\tilde{w}_{h}^{n}}{\Delta t}\right\|_{L^{2}(\Omega_{h})}\leqslant\left\| \partial_{t}\tilde{u}^{n+1}-\frac{\tilde{u}^{n+1}-\tilde{u}^{n}}{\Delta t} \right\|_{L^{2}(\Omega_{h})}\\ +\left\|\frac{\tilde{u}^{n+1}-\tilde{u}^{n}}{\Delta t}-\phi_{h} \frac{\tilde{w}_{h}^{n+1}-\tilde{w}_{h}^{n}}{\Delta t}\right\|_{L^{2}(\Omega_{ h})}. 
\tag{10}\] By Taylor's theorem with integral remainder, \[\tilde{u}^{n}(\cdot)=\tilde{u}^{n+1}(\cdot)-\Delta t\,\partial_{t}\tilde{u}^{n+1}(\cdot)-\int_{t_{n}}^{t_{n+1}}\partial_{tt}\tilde{u}(t,\cdot)(t_{n}-t)\,\mathrm{d}t\] so that \[\left\|\partial_{t}\tilde{u}^{n+1}-\frac{\tilde{u}^{n+1}-\tilde{u} ^{n}}{\Delta t}\right\|_{L^{2}(\Omega_{h})}=\frac{1}{\Delta t}\left\|\int_{t_ {n}}^{t_{n+1}}\partial_{tt}\tilde{u}(t,\cdot)(t_{n}-t)\,\mathrm{d}t\right\|_{L ^{2}(\Omega_{h})}\\ \leqslant\sqrt{\Delta t}\|\partial_{tt}\tilde{u}\|_{L^{2}(t_{n}, t_{n+1};L^{2}(\Omega_{h}))}.\] Differentiating \(-\Delta u=f-\partial_{t}u\) and (7) in time, we obtain, thanks to Lemma 2, \[\|\partial_{t}(\tilde{u}-\phi_{h}\tilde{w}_{h})(t)\|_{L^{2}(\Omega_{h})} \leqslant Ch^{k+\frac{1}{2}}\|(\partial_{t}f-\partial_{tt}\tilde{u})(t)\|_{ H^{k-1}(\Omega_{h})}.\] Thus, for the second term in (10), we get by the last interpolation estimate: \[\left\|\frac{\tilde{u}^{n+1}-\tilde{u}^{n}}{\Delta t}-\phi_{h} \frac{\tilde{w}_{h}^{n+1}-\tilde{w}_{h}^{n}}{\Delta t}\right\|_{L^{2}(\Omega_{ h})} =\frac{1}{\Delta t}\left\|\int_{t_{n}}^{t_{n+1}}\partial_{t}( \tilde{u}(t,\cdot)-\phi_{h}\tilde{w}_{h}(t,\cdot))\,\mathrm{d}t\right\|_{L^{2}( \Omega_{h})}\] \[\leqslant\frac{Ch^{k+\frac{1}{2}}}{\sqrt{\Delta t}}\|\partial_{t}f -\partial_{tt}\tilde{u}\|_{L^{2}(t_{n},t_{n+1};H^{k-1}(\Omega_{h}))}.\] Collecting these estimates and applying the Young inequality with some \(\delta>0\) and the Poincaré inequality from Lemma 3, we get \[(III)\leqslant\frac{C}{\delta}\left(\Delta t\|\partial_{tt}\tilde{u} \|_{L^{2}(t_{n},t_{n+1};L^{2}(\Omega_{h}))}^{2}+\frac{h^{2k+1}}{\Delta t}\| \partial_{t}f-\partial_{tt}\tilde{u}\|_{L^{2}(t_{n},t_{n+1};H^{k-1}(\Omega_{h}) )}^{2}\right)\\ +\frac{\delta C_{P}^{2}}{2}|e_{h}^{n+1}|_{H^{1}(\Omega_{h})}^{2}. 
\tag{11}\] Substituting (9) and (11) into (8) and taking \(\delta\) so that \(\delta C_{P}^{2}=\alpha/2\) yields \[\frac{\|e_{h}^{n+1}\|_{L^{2}(\Omega_{h})}^{2}-\|e_{h}^{n}\|_{L^{2 }(\Omega_{h})}^{2}}{2\Delta t}+\frac{\alpha}{2}|e_{h}^{n+1}|_{H^{1}(\Omega_{h })}^{2}\\ \leqslant C\left(\Delta t\|\partial_{tt}\tilde{u}\|_{L^{2}(t_{n},t_{n+1};L^{2}(\Omega_{h}))}^{2}+\frac{h^{2k+1}}{\Delta t}\|\partial_{t}f- \partial_{tt}\tilde{u}\|_{L^{2}(t_{n},t_{n+1};H^{k-1}(\Omega_{h}))}^{2}\right).\] Multiplying this by \(2\Delta t\) and summing over \(n=0,\ldots,N-1\), we get \[\|e_{h}^{N}\|_{L^{2}(\Omega_{h})}^{2} +\alpha\Delta t\sum_{n=1}^{N}|e_{h}^{n}|_{H^{1}(\Omega_{h})}^{2}\] \[\leqslant\|e_{h}^{0}\|_{L^{2}(\Omega_{h})}^{2}+C(\Delta t^{2}\| \partial_{tt}\tilde{u}\|_{L^{2}(0,T;L^{2}(\Omega_{h}))}^{2}+h^{2k+1}\|\partial _{t}f-\partial_{tt}\tilde{u}\|_{L^{2}(0,T;H^{k-1}(\Omega_{h}))}^{2}).\] Thus, observing that the sum above can be stopped at any number \(n\leqslant N\), we get \[\max_{n=1,\ldots,N}\|e_{h}^{n}\|_{L^{2}(\Omega_{h})}+\left(\Delta t \sum_{n=1}^{N}|e_{h}^{n}|_{H^{1}(\Omega_{h})}^{2}\right)^{\frac{1}{2}}\\ \leqslant C\|e_{h}^{0}\|_{L^{2}(\Omega_{h})}+C\left(\Delta t\| \partial_{tt}\tilde{u}\|_{L^{2}(0,T;L^{2}(\Omega_{h}))}+h^{k+\frac{1}{2}}\| \partial_{t}f-\partial_{tt}\tilde{u}\|_{L^{2}(0,T;H^{k-1}(\Omega_{h}))}\right).\] Lemma 2 applied to \(-\Delta u=f-\partial_{t}u\) in \(\Omega\) at times \(t_{n}\) gives \[\max_{n=0,\ldots,N}\|\tilde{u}^{n}-\phi_{h}\tilde{w}_{h}^{n}\|_{L ^{2}(\Omega_{h})}\leqslant Ch^{k+1/2}\|f-\partial_{t}\tilde{u}\|_{C([0,T],H^{k -1}(\Omega_{h}))},\] \[\left(\Delta t\sum_{n=1}^{N}|\tilde{u}^{n}-\phi_{h}\tilde{w}_{h}^ {n}|_{H^{1}(\Omega_{h})}^{2}\right)^{\frac{1}{2}}\leqslant Ch^{k}\|f- \partial_{t}\tilde{u}\|_{C([0,T],H^{k-1}(\Omega_{h}))}.\] In particular, \[\|e_{h}^{0}\|_{L^{2}(\Omega_{h})}\leqslant\|u^{0}-u_{h}^{0}\|_{L ^{2}(\Omega_{h})}+\|u^{0}-\phi_{h}\tilde{w}_{h}^{0}\|_{L^{2}(\Omega_{h})}\\ \leqslant\|u^{0}-u_{h}^{0}\|_{L^{2}(\Omega_{h})}+Ch^{k+1/2}\|f- \partial_{t}\tilde{u}\|_{C([0,T],H^{k-1}(\Omega_{h}))}.\] Combining this with the regularity of \(f\) and \(\tilde{u}\), cf. (5), together with the bound \(\|\cdot\|_{C([0,T],\cdot)}\leqslant C\|\cdot\|_{H^{1}(0,T;\cdot)}\) (with \(C\) depending on \(T\)) gives the announced result. ## 3 Numerical experiments In this section, we illustrate the performance of our approach on two test cases. We have implemented \(\phi\)-FEM in _FEniCS_[14]; the codes of the simulations are available in the GitHub repository [https://github.com/KVuillemot/PhiFEM_Heat_Equation](https://github.com/KVuillemot/PhiFEM_Heat_Equation). In our numerical simulations, if the expected convergence is of order \(C_{1}h^{p}+C_{2}\Delta t^{m}\), we fix \(\Delta t=h^{p/m}\), in such a way that we only need to observe whether the error is of order \(h^{p}\) numerically. 
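To make the scheme concrete, the following is a minimal FEniCS-style sketch of a single implicit Euler step of (4), in the spirit of the repository above but not taken from it. It assumes legacy FEniCS (dolfin), \(k=1\), \(l=2\), the circle level-set of the first test case below, and pre-built markers `cell_markers` and `facet_markers` tagging \(\mathcal{T}_{h}^{\Gamma}\) and \(\mathcal{F}_{h}^{\Gamma}\) with the value 1; `dt`, `sigma` and `f` are also assumed given.

```python
from dolfin import *  # legacy FEniCS (dolfin); illustrative sketch only

# Assumed given: the active mesh `mesh` (cells meeting {phi_h < 0}), markers
# `cell_markers` / `facet_markers` tagging T_h^Gamma and F_h^Gamma with 1,
# the time step `dt`, the stabilization parameter `sigma`, and the data `f`.
V = FunctionSpace(mesh, "CG", 1)                       # V_h^(k), k = 1
W = FunctionSpace(mesh, "CG", 2)                       # degree l = 2 for phi_h
phi_h = interpolate(Expression("x[0]*x[0] + x[1]*x[1] - 1.0", degree=4), W)

w, v = TrialFunction(V), TestFunction(V)
w_old = Function(V)                                    # w_h^n, so u_h^n = phi_h*w_old
n, h = FacetNormal(mesh), CellDiameter(mesh)
dxg = Measure("dx", domain=mesh, subdomain_data=cell_markers)   # dxg(1): T_h^Gamma
dSg = Measure("dS", domain=mesh, subdomain_data=facet_markers)  # dSg(1): F_h^Gamma

u, q = phi_h * w, phi_h * v                            # phi_h*w_h^{n+1}, phi_h*v_h
a = (u / dt) * q * dx + inner(grad(u), grad(q)) * dx \
    - dot(grad(u), n) * q * ds \
    + sigma * avg(h) * jump(grad(u), n) * jump(grad(q), n) * dSg(1) \
    - sigma * h**2 * (u / dt - div(grad(u))) * div(grad(q)) * dxg(1)
rhs = phi_h * w_old / dt + f                           # stands for u_h^n/dt + f^{n+1}
L = rhs * q * dx - sigma * h**2 * rhs * div(grad(q)) * dxg(1)

w_new = Function(V)
solve(a == L, w_new)                                   # one step of scheme (4)
```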
**Remark 4** (Norms for the simulations).: _To illustrate the convergence of the methods with the simulations, since it is numerically complex to compute the error on the exact domain \(\Omega\), we will use the following formula_ \[\frac{\|u_{h}-u_{\text{ref}}\|_{l^{2}(0,T;H_{0}^{1}(\Omega_{\text{ref}}))}^{2} }{\|u_{\text{ref}}\|_{l^{2}(0,T;H_{0}^{1}(\Omega_{\text{ref}}))}^{2}}\approx \frac{\sum_{n=0}^{N}\Delta t\int_{\Omega_{\text{ref}}}|\nabla u_{h}(.,t_{n})- \nabla u_{\text{ref}}(.,t_{n})|^{2}\mathrm{d}x}{\sum_{n=0}^{N}\Delta t\int_{ \Omega_{\text{ref}}}|\nabla u_{\text{ref}}(.,t_{n})|^{2}\mathrm{d}x}\,,\] _and_ \[\frac{\|u_{h}-u_{\text{ref}}\|_{l^{\infty}(0,T;L^{2}(\Omega_{\text{ref}}))}^{ 2}}{\|u_{\text{ref}}\|_{l^{\infty}(0,T;L^{2}(\Omega_{\text{ref}}))}^{2}}\approx \frac{\max_{n=0,\ldots,N}\int_{\Omega_{\text{ref}}}(u_{h}(.,t_{n})-u_{\text{ ref}}(.,t_{n}))^{2}\mathrm{d}x}{\max_{n=0,\ldots,N}\int_{\Omega_{\text{ ref}}}(u_{\text{ref}}(.,t_{n}))^{2}\mathrm{d}x}\,,\] _where \(u_{h}\) denotes an approximation of the \(L^{2}\)-orthogonal projection of the solution on the reference mesh \(\Omega_{\text{ref}}\) and \(u_{\text{ref}}\) the reference solution._ First test case: the source term is deduced from a manufactured solution and the FEM solution is compared to this manufactured solution. For this case, we will consider a simple smooth domain: the circle centered at \((0,0)\), with radius \(1\), as represented in Fig. 1. The level-set function is given using the equation of the circle, i.e. \(\phi(x,y)=-1+x^{2}+y^{2}\). Its approximation \(\phi_{h}\) will be the interpolation of \(\phi\) with \(\mathbb{P}_{k+1}\) finite elements, except for Fig. 6 (right). Moreover, we consider the manufactured solution given by \(u_{\text{ref}}=\cos\left(\frac{1}{2}\pi(x^{2}+y^{2})\right)\exp(x)\sin(t)\) so that \(u_{\text{ref}}\) satisfies \(u_{\text{ref}}(t=0)=u_{\text{ref}}^{0}=0\) and \(u_{\text{ref}}=0\) on \(\Gamma\times(0,T)\). Here, \(\Omega_{\text{ref}}=\Omega_{h}\). We represent the errors in the \(l^{2}(H^{1})\) norm in Fig. 2 and in the \(l^{\infty}(L^{2})\) norm in Fig. 3, both with \(\mathbb{P}_{1}\) and \(\mathbb{P}_{2}\) finite elements (\(k=1\) and \(k=2\)). Here, the numerical results fit the theoretical convergence orders of Theorem 1 well, and even behave better, since we observe convergence of orders two and three in the \(l^{\infty}(L^{2})\) norm instead of 1.5 and 2.5, respectively. We remark that the theoretical constraint \(\Delta t\geqslant ch^{2}\) is not satisfied for the \(\mathbb{P}_{2}\) finite elements, but this does not affect the practical convergence. We also represent the \(l^{2}(H^{1})\) and \(l^{\infty}(L^{2})\) errors with respect to the computation time (here, the computation time is the sum of the time needed to assemble the finite element matrix and to solve the finite element systems at each time step, without the time used to construct the meshes) in Fig. 4. Figure 1: Left: considered domain for the first test case. Center: a conforming mesh for the standard FEM. Right: a uniform Cartesian mesh for \(\phi\)-FEM. We observe that in this case, \(\phi\)-FEM is significantly faster than a standard FEM to obtain a solution with the same precision. In Fig. 5 (left), we represent the \(l^{2}(H^{1})\) error and in Fig. 5 (right) the \(l^{\infty}(L^{2})\) error, both with respect to \(\sigma\). This allows us to emphasize the influence of \(\sigma\) on the stability of the errors and validates our choice of \(\sigma=1\) in the other simulations. 
Finally, in Fig. 6, we justify our choice for the degree of interpolation of \(\phi\): in our theoretical result, \(\mathbb{P}_{k}\) is sufficient, but we observe here that the error decreases for \(l=2\). Furthermore, in our previous paper [7], our theoretical results in the Neumann case hold true only for \(l\geqslant k+1\). Here, since the interpolation is exact from \(l=2\), we do not need to compute higher degrees of interpolation for the level-set function to compare the results. Figure 2: First test case. \(l^{2}(0,T;H^{1}(\Omega))\) relative errors with respect to \(h\) with \(\mathbb{P}_{1}\) elements and \(\Delta t=h\) (left) and with \(\mathbb{P}_{2}\) elements and \(\Delta t=h^{2}\) (right). Standard FEM (red squares) and \(\phi\)-FEM (blue dots), \(\sigma=1\). Figure 3: First test case. \(l^{\infty}(0,T;L^{2}(\Omega))\) relative errors with respect to \(h\) with \(\mathbb{P}_{1}\) elements and \(\Delta t=h^{2}\) (left) and with \(\mathbb{P}_{2}\) elements and \(\Delta t=h^{3}\) (right). Standard FEM (red squares) and \(\phi\)-FEM (blue dots), \(\sigma=1\). Figure 4: First test case. \(l^{2}(0,T;H^{1}(\Omega))\) with \(\Delta t=h\) (left) and \(l^{\infty}(0,T;L^{2}(\Omega))\) with \(\Delta t=h^{2}\) (right) relative errors with respect to the computation time. Standard FEM (red squares) and \(\phi\)-FEM (blue dots), \(\mathbb{P}_{1}\) elements, \(\sigma=1\). Figure 5: First test case. \(l^{2}(0,T;H^{1}(\Omega))\) relative errors with respect to \(\sigma\) for different mesh sizes, with \(\Delta t=h\) (left) and \(l^{\infty}(0,T;L^{2}(\Omega))\) relative errors with respect to \(\sigma\), \(\Delta t=h^{2}\) (right), both with \(\mathbb{P}_{1}\) elements. Second test case: the source term is given and the FEM solution is compared to a standard FEM solution on a very fine mesh. We now consider a more realistic test case, since we will apply some forces and consider the resulting distribution of heat in the considered domain. More precisely, this time, we impose \(u=0\) on \(\Gamma\times(0,T)\), the initial condition is \(u^{0}=0\) in \(\Omega\), and we define a source term given by \(f(x,y,z,t)=\exp\left(-\frac{(x-\mu_{1})^{2}+(y-\mu_{2})^{2}+(z-\mu_{3})^{2}}{2 \sigma_{0}^{2}}\right)\) for each \((x,y,z,t)\in\Omega\times(0,T)\), with \((\mu_{1},\mu_{2},\mu_{3},\sigma_{0})=(0.2,0.3,-0.1,0.3)\). The final time is fixed to \(T=1\). Moreover, for this test case, we consider a more complex, three-dimensional domain from [15], given by \[\phi(x,y,z)=x^{2}+y^{2}+z^{2}-r_{0}^{2}-A\sum_{k=0}^{11}\exp\left(-\frac{(x-x_{ k})^{2}+(y-y_{k})^{2}+(z-z_{k})^{2}}{\sigma_{0}^{2}}\right)\,,\] with \[(x_{k},y_{k},z_{k}) =\frac{r_{0}}{\sqrt{5}}\left(2\cos\left(\frac{2k\pi}{5}\right),2 \sin\left(\frac{2k\pi}{5}\right),1\right)\,,\quad 0\leqslant k\leqslant 4\,,\] \[(x_{k},y_{k},z_{k}) =\frac{r_{0}}{\sqrt{5}}\left(2\cos\left(\frac{(2(k-5)-1)\pi}{5} \right),2\sin\left(\frac{(2(k-5)-1)\pi}{5}\right),-1\right)\,,\quad 5\leqslant k \leqslant 9\,,\] \[(x_{k},y_{k},z_{k}) =(0,0,r_{0})\,,\quad k=10\,,\] \[(x_{k},y_{k},z_{k}) =(0,0,-r_{0})\,,\quad k=11\,,\] with \(r_{0}=0.6\), \(\sigma_{0}=0.3\) and \(A=1.5\). The resulting domain and meshes are given in Fig. 7. Here, \(u_{\rm ref}\) denotes the solution of a classical finite element method on \(\Omega_{\rm ref}\), a very fine conforming mesh. 
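For reference, here is a small numpy transcription of this level-set (our reading of the formula above, with \(\sigma_{0}=0.3\)); on a Cartesian background mesh, selecting the active cells only requires such pointwise evaluations of \(\phi\).

```python
import numpy as np

# Transcription of the level-set above (r0 = 0.6, sigma0 = 0.3, A = 1.5);
# a Cartesian background mesh only needs pointwise evaluations of phi.
r0, sigma0, A = 0.6, 0.3, 1.5

def bump_centers():
    pts = []
    for k in range(5):
        t = 2 * k * np.pi / 5
        pts.append((2 * np.cos(t), 2 * np.sin(t), 1.0))
    for k in range(5, 10):
        t = (2 * (k - 5) - 1) * np.pi / 5
        pts.append((2 * np.cos(t), 2 * np.sin(t), -1.0))
    pts = [(r0 / np.sqrt(5) * x, r0 / np.sqrt(5) * y, r0 / np.sqrt(5) * z)
           for x, y, z in pts]
    return pts + [(0.0, 0.0, r0), (0.0, 0.0, -r0)]  # k = 10, 11

def phi(x, y, z):
    val = x**2 + y**2 + z**2 - r0**2
    for xk, yk, zk in bump_centers():
        val -= A * np.exp(-((x - xk)**2 + (y - yk)**2 + (z - zk)**2) / sigma0**2)
    return val

# e.g. flag the active region of a Cartesian grid: phi(X, Y, Z) < 0
```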
In this case, to be more precise, we introduce a partition of the interval \([0,T]\) into time steps \(0=t_{0}^{\rm ref}<t_{1}^{\rm ref}<\cdots<t_{M}^{\rm ref}=T\) with \(t_{n}^{\rm ref}=n\Delta t^{\rm ref}\) and \(\Delta t^{\rm ref}=h_{\rm ref}^{p/m}\), where \(h_{\rm ref}\) denotes the size of the cells of \(\Omega_{\rm ref}\). Then, in the numerical simulations, each discretization is built so that \(\left\{t_{n}\right\}_{n=0,\ldots,N}\) is a subset of \(\left\{t_{n}^{\rm ref}\right\}_{n=0,\ldots,M}\). In Fig. 8, we consider \(\mathbb{P}_{1}\) finite elements (\(k=1\)), and \(\mathbb{P}_{2}\) finite elements for the interpolation \(\phi_{h}\) of \(\phi\) (\(l=2\)). We compare here the \(l^{2}(H^{1})\) and \(l^{\infty}(L^{2})\) relative errors between the solution of the \(\phi\)-FEM scheme (4) and a standard FEM. The numerical results fit the expected convergence behavior well, namely, order one for the \(l^{2}(H^{1})\) norm and order two for the \(l^{\infty}(L^{2})\) error (as in the first test case, slightly better than the \(h^{k+\frac{1}{2}}\) rate of Theorem 1). Figure 7: Left: considered domain for the second test case. Center: a conforming mesh for the standard FEM. Right: a uniform Cartesian mesh for \(\phi\)-FEM. Figure 8: Second test case. \(l^{2}(0,T;H^{1}(\Omega))\) relative errors with respect to \(h\) with \(\Delta t=h\) (left) and \(l^{\infty}(0,T;L^{2}(\Omega))\) relative errors with respect to \(h\) with \(\Delta t=h^{2}\) (right), both with \(\mathbb{P}_{1}\) elements. Standard FEM (red squares) and \(\phi\)-FEM (blue dots), \(\sigma=1\). ## 4 Conclusion In the present work, we proposed a FEM scheme following the \(\phi\)-FEM paradigm to approximate the solution of the heat equation and proved its convergence, which is optimal in the \(l^{2}(0,T;H^{1}(\Omega))\) norm and quasi-optimal in the \(l^{\infty}(0,T;L^{2}(\Omega))\) norm. We remark that, in comparison with [6], we need less regularity on the exact solution in the a priori error estimates. A first advantage of the \(\phi\)-FEM paradigm is its ease of implementation. Indeed, it uses standard shape functions, contrary to the XFEM approach. Moreover, it uses standard integration tools, contrary to CutFEM, which requires an integration on the true boundary and integrations on the cut cells. A second interesting aspect of our approach is the computational time of the simulation. The low computational cost of \(\phi\)-FEM can be explained by the fact that the boundary of the geometry in the classical finite element method is approximated by piecewise linear functions while, in the \(\phi\)-FEM paradigm, the boundary is taken into account through the level-set function \(\phi\), which can be of high degree without increasing the size of the finite element matrix. In the mathematical analysis, we supposed that the boundary of the considered domain is regular enough. The case of less regular domains will be the aim of future work. ## Funding This work was supported by the Agence Nationale de la Recherche, Project PhiFEM, under grant ANR-22-CE46-0003-01.
2310.15644
Linear-in-Complexity Computational Strategies for Modeling and Dosimetry at TeraHertz
This work presents a fast direct solver strategy allowing full-wave modeling and dosimetry at terahertz (THz) frequencies. The novel scheme leverages a preconditioned combined field integral equation together with a regularizer for its elliptic spectrum to enable its compression into a non-hierarchical skeleton, invertible in quasi-linear complexity. Numerical results will show the effectiveness of the new scheme in a realistic skin modeling scenario.
Viviana Giunzioni, Giuseppe Ciacco, Clément Henry, Adrien Merlini, Francesco P. Andriulli
2023-10-24T09:04:36Z
http://arxiv.org/abs/2310.15644v1
# Linear-in-Complexity Computational Strategies for Modeling and Dosimetry at TeraHertz ###### Abstract This work presents a fast direct solver strategy allowing full-wave modeling and dosimetry at terahertz (THz) frequencies. The novel scheme leverages a preconditioned combined field integral equation together with a regularizer for its elliptic spectrum to enable its compression into a non-hierarchical skeleton, invertible in quasi-linear complexity. Numerical results will show the effectiveness of the new scheme in a realistic skin modeling scenario. integral equations, dosimetry, terahertz, fast solver ## I Introduction With the advances in THz technology, a growing number of interdisciplinary applications in the THz range have emerged and gained popularity in the last two decades, in areas ranging from security to communications and biomedicine [1]. As the impact of THz devices in our societies grows, accurately assessing the effects of THz waves on the human body gains crucial importance [2]. Hence the need for exposure analyses that aim at quantifying the amount of energy absorbed by biological tissues subject to electromagnetic radiation [3]. Preliminary dosimetry assessments are a fundamental phase during the design of THz equipment, to guarantee its compliance with the limits on the power absorbed by human tissues set by international agencies [4]. Exposure measurements are often challenging to perform, especially in the near field, but this challenge can be, in part, sidestepped by reliable and accurate numerical dosimetric assessments, when they are within reach. However, numerical modeling at THz also comes with its own set of complications. On the one hand, many of the solvers proposed in the literature employ approximations of Maxwell's system, suitable to the high-frequency regime considered, or apply geometrical simplifications to make use of proper analytic solutions. However, applying these approximations can degrade the solution accuracy and potentially compromise the reliability of the dosimetric analyses. On the other hand, full-wave models leverage the original Maxwell system and can be applied to arbitrarily complex geometries, but the higher computational costs incurred can become prohibitive. In addition, they suffer from numerical issues, such as ill-conditioning or spurious resonances at high frequencies [5], that need to be handled to obtain reliable results. We propose here a novel full-wave approach, well suited to modeling reflection and absorption of THz waves by biological samples. Being a fast direct solution strategy, this approach allows for the efficient solution of THz problems for multiple exposures at once, with a complexity which grows only quasi-linearly with the number of unknowns, that is, with increasing frequency. This is obtained by first defining a proper set of boundary integral equations and leveraging a tailored preconditioning scheme, resulting in a well-conditioned system of linear equations freed from spurious resonances. This formulation is then coupled with a recently proposed fast inversion strategy [6], which relies on the compression of the elliptic spectrum of the boundary operator into a rank-deficient skeleton form and on the use of the Woodbury matrix identity [7]. ## II Background and Notation Dosimetry analyses aim at assessing the amount of energy absorbed by the human body when exposed to electromagnetic radiation. 
This estimation can be performed by numerically simulating the response of the biological tissue to the impinging field through a full-wave electromagnetic solver. In this work we employ the two-dimensional approximation, which assumes the invariance of the geometry and fields along an axis \(\hat{\mathbf{z}}\). This lends itself well to the case under study given the large dimensions of some body parts compared to THz wavelengths. However, this approximation is not suited to modeling all body parts. Based on the representation theorem [8], different boundary integral equations (BIEs) can be set up to numerically model the time-harmonic electromagnetic scattering and absorption of a penetrable body. Given a two-dimensional domain \(\mathcal{D}\) with boundary \(\Gamma\coloneqq\partial\mathcal{D}\) characterized by the outgoing normal field \(\hat{\mathbf{n}}\), the boundary integral operators [8] \[\left(\mathcal{S}_{k}^{\Gamma}\psi\right)(\mathbf{r})\coloneqq\int_{\Gamma}G_{k}(\mathbf{r}-\mathbf{r}^{\prime})\psi(\mathbf{r}^{\prime})\mathrm{d}S(\mathbf{r}^{\prime})\,, \tag{1}\] \[\left(\mathcal{D}_{k}^{\Gamma}\psi\right)(\mathbf{r})\coloneqq\mathrm{p.v.}\int_{\Gamma}\partial_{\mathbf{n}^{\prime}}G_{k}(\mathbf{r}-\mathbf{r}^{\prime})\psi(\mathbf{r}^{\prime})\mathrm{d}S(\mathbf{r}^{\prime})\,, \tag{2}\] \[\left(\mathcal{D}_{k}^{\ast\Gamma}\psi\right)(\mathbf{r})\coloneqq\mathrm{p.v.}\int_{\Gamma}\partial_{\mathbf{n}}G_{k}(\mathbf{r}-\mathbf{r}^{\prime})\psi(\mathbf{r}^{\prime})\mathrm{d}S(\mathbf{r}^{\prime})\,, \tag{3}\] \[\left(\mathcal{N}_{k}^{\Gamma}\psi\right)(\mathbf{r})\coloneqq-\mathrm{f.p.}\int_{\Gamma}\partial_{\mathbf{n}}\partial_{\mathbf{n}^{\prime}}G_{k}(\mathbf{r}-\mathbf{r}^{\prime})\psi(\mathbf{r}^{\prime})\mathrm{d}S(\mathbf{r}^{\prime})\,, \tag{4}\] which are respectively the single layer, double layer, adjoint double layer, and hypersingular operator, constitute the building blocks of any of these formulations. The notations p.v. and f.p. indicate the Cauchy principal value and the Hadamard finite part. We denote by \(G_{k}\) the two-dimensional Green's function in free space, \[G_{k}(\mathbf{r}-\mathbf{r}^{\prime})=-\frac{j}{4}H_{0}^{(2)}\left(k||\mathbf{r}-\mathbf{r}^{\prime}||\right)\,, \tag{5}\] where \(H_{0}^{(2)}\) is the Hankel function of the second kind with order zero as defined in [9]. Moreover, numerical exposure assessments also require the _a priori_ definition of a realistic model of the tissue under study, both in terms of geometry and dielectric permittivity. Research on THz external dosimetry often focuses on the skin [3, 10], as the impinging THz field is absorbed by this organ. Different geometrical models of the skin have been proposed [11] to accurately reproduce the human anatomy. They usually aim at modeling the stratification of compartments with different physical properties, such as the stratum corneum, the epidermis, and the dermis layers, sometimes even modeling anisotropies and depth-varying water percentage [11], at the cost of increasing model complexity. For the sake of simplicity, in this work we employ a single-dielectric model. Following the double Debye model [12], the permittivity of the skin as a function of the frequency is modeled as \[\epsilon_{r}(\omega)=\epsilon_{\infty}+\frac{\epsilon_{s}-\epsilon_{2}}{1+j\omega\tau_{1}}+\frac{\epsilon_{2}-\epsilon_{\infty}}{1+j\omega\tau_{2}}\,, \tag{6}\] with parameters \(\epsilon_{\infty}\) = 3, \(\epsilon_{s}\) = 60, \(\epsilon_{2}\) = 3.6, \(\tau_{1}\) = 10 ps, and \(\tau_{2}\) = 0.2 ps [13]. 
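As an illustration, here is a minimal Python sketch of the double Debye model (6) with the parameters of [13]; the function name and the sample frequency are our own choices, not from the paper.

```python
import numpy as np

# Minimal sketch of the double Debye skin permittivity (6), parameters from [13].
EPS_INF, EPS_S, EPS_2 = 3.0, 60.0, 3.6
TAU_1, TAU_2 = 10e-12, 0.2e-12  # relaxation times, seconds

def eps_r(f_hz):
    """Complex relative permittivity of skin at frequency f_hz, per (6)."""
    w = 2 * np.pi * f_hz
    return (EPS_INF
            + (EPS_S - EPS_2) / (1 + 1j * w * TAU_1)
            + (EPS_2 - EPS_INF) / (1 + 1j * w * TAU_2))

# e.g. at 1 THz; with the e^{+j w t} convention of (6), Im(eps_r) < 0 is lossy
print(eps_r(1e12))  # roughly 3.25 - 1.19j
```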
The validity of this approximation has been demonstrated in previous works [14], which however have also highlighted a limitation of the model when applied to dry skin and, in general, to tissues characterized by low water content. ## III Fast Direct Solver Strategy for THz Dosimetry We propose here a fast direct solver strategy for modeling the electromagnetic response of a biological tissue of boundary \(\Gamma_{s}\) to an excitation realized by means of a metallic body of boundary \(\Gamma_{m}\). It is based on a composite formulation made up of the combined field integral equation (CFIE) for perfect electric conductor (PEC) materials [15] and of the Poggio-Miller-Chang-Harrington-Wu-Tsai (PMCHWT) equation for penetrable media [16]. As is sometimes done in the literature, we assume that the coupling terms between the metallic and the dielectric objects can be neglected [17, 18]. In the case where both objects (i.e., the metallic and the dielectric ones) are subject to a TM-polarized field, the resulting system of integral equations is given in eq. (7) at the bottom of the page. Similar results can be found for different polarizations. In these equations, we denote by the subscript \({}_{0}\) the quantities related to the exterior medium, which can be assumed to be the air, and by the subscript \({}_{1}\) the ones related to the interior, penetrable, medium. The exterior and interior wavenumbers are denoted as \(k_{0}=\omega\sqrt{\epsilon_{0}\mu_{0}}\) and \(k_{1}=\omega\sqrt{\epsilon_{1}\mu_{1}}\), while \(\eta_{0/1}=\sqrt{\mu_{0/1}/\epsilon_{0/1}}\) are the characteristic impedances of the exterior or interior medium. \((E^{\text{inc},m},H^{\text{inc},m})\) and \((E^{\text{inc},s},H^{\text{inc},s})\) are the electromagnetic fields incident over \(\Gamma_{m}\) and \(\Gamma_{s}\) respectively, further separated into transversal and longitudinal components, denoted by \({}_{t}\) and \({}_{z}\). In the following, we will denote by the subscripts \({}_{m}\) and \({}_{s}\) quantities related to \(\Gamma_{m}\) and \(\Gamma_{s}\) respectively. The unknowns in eq. (7) are the surface equivalent currents defined on the metallic and dielectric boundaries. They are of electric type only in the former case, \(j_{z,m}\), and of both electric and magnetic type in the latter, \(j_{z,s}\) and \(m_{t,s}\). In particular, by superimposing the radiation provided by these currents, it is possible to retrieve the scattered electromagnetic field, to be summed to the incident field in order to determine the resulting electric and magnetic fields everywhere and, in particular, inside the biological sample. The application of a Galerkin discretization scheme, based on the approximation of the unknown currents as linear combinations of \(N_{m/s}\) piecewise linear basis functions \(\lambda_{i}(\mathbf{r})\) defined on a mesh of the boundary \(\Gamma_{m/s}\), as \(j_{z,m/s}\simeq\sum_{i=1}^{N_{m/s}}(\mathbf{j}_{z,m/s})_{i}\lambda_{i}\) and \(m_{t,s}\simeq\sum_{i=1}^{N_{s}}(\mathbf{m}_{t,s})_{i}\lambda_{i}\), results in the linear system of equations \[\begin{pmatrix}\mathbf{C}&\mathbf{0}&\mathbf{0}\\ \mathbf{0}&\mathbf{P}_{11}&\mathbf{P}_{12}\\ \mathbf{0}&\mathbf{P}_{21}&\mathbf{P}_{22}\\ \end{pmatrix}\begin{pmatrix}\mathbf{j}_{z,m}\\ \mathbf{j}_{z,s}\\ \mathbf{m}_{t,s}\\ \end{pmatrix}=\begin{pmatrix}\mathbf{e}_{z,m}/(jk_{0}\eta_{0})+\mathbf{h}_{t,m}\\ \mathbf{e}_{z,s}\\ \mathbf{h}_{t,s}\\ \end{pmatrix}\,. 
\tag{8}\] In the above system, \[(\mathbf{e}_{z,m/s})_{i}=\left(\lambda_{i},E_{z}^{\text{inc},m/s}\right)_{L^{2}(\Gamma_{m/s})}\,, \tag{9}\] \[(\mathbf{h}_{t,m/s})_{i}=\left(\lambda_{i},H_{t}^{\text{inc},m/s}\right)_{L^{2}(\Gamma_{m/s})}\,, \tag{10}\] and the matrices \(\mathbf{C}\), \(\mathbf{P}_{11}\), \(\mathbf{P}_{12}\), \(\mathbf{P}_{21}\), and \(\mathbf{P}_{22}\) are defined as \[\mathbf{C}=\mathbf{S}_{k_{0}}^{\Gamma_{m}}+\frac{1}{2}\mathbf{G}^{\Gamma_{m}}+\mathbf{D}_{k_{0}}^{\ast\Gamma_{m}} \tag{11}\] \[\mathbf{P}_{11}=-jk_{0}\eta_{0}\mathbf{S}_{k_{0}}^{\Gamma_{s}}-jk_{1}\eta_{1}\mathbf{S}_{k_{1}}^{\Gamma_{s}} \tag{12}\] \[\mathbf{P}_{12}=\mathbf{D}_{k_{0}}^{\Gamma_{s}}+\mathbf{D}_{k_{1}}^{\Gamma_{s}} \tag{13}\] \[\mathbf{P}_{21}=-\left(\mathbf{D}_{k_{0}}^{\ast\Gamma_{s}}+\mathbf{D}_{k_{1}}^{\ast\Gamma_{s}}\right) \tag{14}\] \[\mathbf{P}_{22}=-1/(jk_{0}\eta_{0})\,\mathbf{N}_{k_{0}}^{\Gamma_{s}}-1/(jk_{1}\eta_{1})\,\mathbf{N}_{k_{1}}^{\Gamma_{s}}\,, \tag{15}\] where we have used the generic notation \((\mathbf{O}_{k}^{\Gamma})_{ij}=\left(\lambda_{i},\mathcal{O}_{k}^{\Gamma}\lambda_{j}\right)_{L^{2}(\Gamma)}\), where \(\mathcal{O}\) stands for one of \(\{\mathcal{S},\mathcal{D},\mathcal{D}^{*},\mathcal{N}\}\). The Gram matrix \(\mathbf{G}^{\Gamma}\) is obtained as \((\mathbf{G}^{\Gamma})_{ij}=(\lambda_{i},\lambda_{j})_{L^{2}(\Gamma)}\). \[\begin{cases}\mathcal{S}_{k_{0}}^{\Gamma_{m}}j_{z,m}(\mathbf{r})+\left(\frac{1}{2}\mathcal{I}+\mathcal{D}_{k_{0}}^{\ast\Gamma_{m}}\right)j_{z,m}(\mathbf{r})=\frac{1}{jk_{0}\eta_{0}}E_{z}^{\text{inc},m}(\mathbf{r})+H_{t}^{\text{inc},m}(\mathbf{r}),&\mathbf{r}\in\Gamma_{m}\\ \left(-jk_{0}\eta_{0}\mathcal{S}_{k_{0}}^{\Gamma_{s}}-jk_{1}\eta_{1}\mathcal{S}_{k_{1}}^{\Gamma_{s}}\right)j_{z,s}(\mathbf{r})+\left(\mathcal{D}_{k_{0}}^{\Gamma_{s}}+\mathcal{D}_{k_{1}}^{\Gamma_{s}}\right)m_{t,s}(\mathbf{r})=E_{z}^{\text{inc},s}(\mathbf{r}),&\mathbf{r}\in\Gamma_{s}\\ -\left(\mathcal{D}_{k_{0}}^{\ast\Gamma_{s}}+\mathcal{D}_{k_{1}}^{\ast\Gamma_{s}}\right)j_{z,s}(\mathbf{r})+\left(-\frac{1}{jk_{0}\eta_{0}}\mathcal{N}_{k_{0}}^{\Gamma_{s}}-\frac{1}{jk_{1}\eta_{1}}\mathcal{N}_{k_{1}}^{\Gamma_{s}}\right)m_{t,s}(\mathbf{r})=H_{t}^{\text{inc},s}(\mathbf{r}),&\mathbf{r}\in\Gamma_{s}\\ \end{cases} \tag{7}\] As a consequence of the fact that the metallic radiator is electrically much larger than the biological sample under study, a significantly higher number of basis functions is required for the discretization of the unknown currents on its boundary \(\Gamma_{m}\) (following the Nyquist sampling principle). Hence, we infer that the numerical solution of the linear system resulting from the discretization of the CFIE is the bottleneck, in terms of time and memory required, towards the solution of the entire system (8), whether directly or iteratively. To alleviate this computational burden, we propose here to extend the Calderón preconditioned scheme presented in [6, 19] for \(\mathbf{C}\) and the fast direct solver, tailored for the resulting well-conditioned operator, recently proposed in [6]. 
In particular, we define the Calderón stabilized operator matrix as \[\mathbf{C}_{p}\coloneqq\mathbf{N}_{\tilde{k}_{0}}^{\Gamma_{m}}\left(\mathbf{G}^{\Gamma_{m}}\right)^{-1}\mathbf{S}_{k_{0}}^{\Gamma_{m}}+\left(\frac{1}{2}\mathbf{G}^{\Gamma_{m}}-\mathbf{D}_{\tilde{k}_{0}}^{\Gamma_{m}}\right)\left(\mathbf{G}^{\Gamma_{m}}\right)^{-1}\left(\frac{1}{2}\mathbf{G}^{\Gamma_{m}}+\mathbf{D}_{k_{0}}^{\ast\Gamma_{m}}\right) \tag{16}\] where, following the approach introduced in [20], \(\tilde{k}_{0}\coloneqq k_{0}-j0.4k_{0}^{1/3}a^{-2/3}\), with \(a\) evaluated as a suitable average of the radius of curvature along \(\Gamma_{m}\). Then, following the procedure described in [6], we express \(\mathbf{C}_{p}\) as the sum \(\mathbf{C}_{p}=\mathbf{C}_{p,\mathrm{c}}+\mathbf{C}_{p,\mathrm{ext}}\), where \(\mathbf{C}_{p,\mathrm{c}}\) is the circular counterpart of \(\mathbf{C}_{p}\), discretized over an equi-perimeter circular boundary. We employ at this point an adaptive randomized algorithm, such as the one presented in [21], to compute a skeleton form of \(\mathbf{C}_{p,\mathrm{ext}}\) as \[\mathbf{C}_{p,\mathrm{ext}}=\mathbf{C}_{p}-\mathbf{C}_{p,\mathrm{c}}\simeq\mathbf{U}\mathbf{V}^{\mathrm{T}}\,. \tag{17}\] Given the spectral properties of the matrix \(\mathbf{C}_{p,\mathrm{ext}}\), the rank of the skeleton \(\mathbf{U}\mathbf{V}^{\mathrm{T}}\) grows only approximately as \(k_{0}^{1/3}\) toward the high-frequency limit. As a consequence, by applying a proper acceleration technique such as the fast multipole method (FMM) [22], the solution of the system, for any number of right-hand sides, can be obtained efficiently, in quasi-linear complexity, by directly evaluating the inverse [7] \[\mathbf{C}_{p}^{-1}=\mathbf{C}_{p,\mathrm{c}}^{-1}-\mathbf{C}_{p,\mathrm{c}}^{-1}\mathbf{U}\left(\mathbf{I}+\mathbf{V}^{\mathrm{T}}\mathbf{C}_{p,\mathrm{c}}^{-1}\mathbf{U}\right)^{-1}\mathbf{V}^{\mathrm{T}}\mathbf{C}_{p,\mathrm{c}}^{-1}\,. \tag{18}\] In particular, after noticing that all operations involving circulant matrices can be computed rapidly via the fast Fourier transform (FFT) algorithm, we recognize that the complexity of evaluating (18) scales in frequency approximately as \(k_{0}^{4/3}\), with an overhead with respect to linear complexity determined by the growth of the skeleton rank. ## IV Numerical results In this section, we first aim at assessing the efficiency of the fast direct solver. The rank of the skeleton form \(\mathbf{U}\mathbf{V}^{\mathrm{T}}\) is the key parameter to observe, as it directly determines the computational complexity of the method, affecting both the time and the memory required. The first geometry analyzed is the ellipse. Figure 1 shows the rank of the skeleton of the operator, for both TE and TM formulations, evaluated over an ellipse with aspect ratio 1.5 and perimeter \(2\pi\) m. Second, we considered an airfoil geometry, resulting from the application of the Joukowsky conformal mapping of the circle, with perimeter \(2\pi\) m (Fig. 2). In both cases, we observe that the rank grows less than linearly with frequency and tends to stabilize to the expected behaviour of \(k_{0}^{1/3}\) in the high-frequency limit. Consistently, the compression time (i.e., the time required for the skeleton evaluation), which dominates the overall inversion time, scales quasi-linearly, as shown in Table I. Then, we applied the solver to the evaluation of the electromagnetic scattering from a skin sample (Fig. 3). 
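Before turning to the dosimetric example, the following toy numpy sketch illustrates the structure of the inverse (18): a circulant part solved via the FFT plus a low-rank Woodbury correction. The random factors stand in for the skeleton (17), and the dense products would be FMM-accelerated in the actual solver; everything below is illustrative only.

```python
import numpy as np

# Toy illustration of (18): C_p = C_circ + U V^T, with C_circ circulant
# (diagonalized by the FFT) and U V^T a low-rank skeleton. Shapes, names,
# and the random factors are assumptions for this demo, not the paper's code.

def circ_solve(c, B):
    """Solve C_circ X = B, where c is the first column of the circulant."""
    return np.fft.ifft(np.fft.fft(B, axis=0) / np.fft.fft(c)[:, None], axis=0)

def woodbury_solve(c, U, V, B):
    """Apply (C_circ + U V^T)^{-1} to the right-hand sides B, as in (18)."""
    Y, Z = circ_solve(c, B), circ_solve(c, U)
    S = np.eye(U.shape[1]) + V.T @ Z            # small r-by-r system
    return Y - Z @ np.linalg.solve(S, V.T @ Y)  # Woodbury correction

# self-check: random circulant plus rank-5 perturbation, 3 right-hand sides
n, r = 256, 5
lam = 2.0 + np.random.rand(n) + 1j * np.random.rand(n)  # safe circulant spectrum
c = np.fft.ifft(lam)                                    # first column of C_circ
U, V = np.random.randn(n, r), np.random.randn(n, r)
C = np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)]) + U @ V.T
B = np.random.randn(n, 3)                               # several excitations
X = woodbury_solve(c, U, V, B)
print(np.linalg.norm(C @ X - B) / np.linalg.norm(B))    # small residual
```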
For this dosimetric test, we considered an ellipse with a perimeter of approximately \(5.85\,\mathrm{mm}\) excited by a time-harmonic field at the frequency of \(1\,\mathrm{THz}\). We employed the double Debye model (eq. (6)) to approximate the permittivity of the skin, corresponding to a penetration length of approximately \(62\,\mu\mathrm{m}\). Fig. 1: Rank of the skeleton form \(\mathbf{U}\mathbf{V}^{\mathrm{T}}\) evaluated over an ellipse with aspect ratio 1.5 and perimeter \(2\pi\) m as a function of the free-space wavenumber \(k_{0}\). Fig. 2: Rank of the skeleton form \(\mathbf{U}\mathbf{V}^{\mathrm{T}}\) evaluated over an airfoil with perimeter \(2\pi\) m as a function of the free-space wavenumber \(k_{0}\). ## V Conclusion This paper presented a fast direct solver strategy for full-wave modeling and dosimetry at terahertz frequencies. This has been obtained by leveraging a preconditioned version of the combined field integral equation, free of spurious high-frequency resonances, and a suitable compression technique for its elliptic spectrum, resulting in an operator matrix invertible in quasi-linear complexity. The direct nature of the solver makes its use convenient for multiple-source problems, where the scatterer response to many different exposures should be analyzed, as can be the case in dosimetry studies. ## Acknowledgment The work of this paper has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 724846, project 321), from the Horizon Europe Research and innovation programme under the EIC Pathfinder grant agreement n\({}^{\circ}\) 101046748 (project CEREBRO), and from the ANR Labex CominLabs under the project "CYCLE".
2305.11694
QUEST: A Retrieval Dataset of Entity-Seeking Queries with Implicit Set Operations
Formulating selective information needs results in queries that implicitly specify set operations, such as intersection, union, and difference. For instance, one might search for "shorebirds that are not sandpipers" or "science-fiction films shot in England". To study the ability of retrieval systems to meet such information needs, we construct QUEST, a dataset of 3357 natural language queries with implicit set operations, that map to a set of entities corresponding to Wikipedia documents. The dataset challenges models to match multiple constraints mentioned in queries with corresponding evidence in documents and correctly perform various set operations. The dataset is constructed semi-automatically using Wikipedia category names. Queries are automatically composed from individual categories, then paraphrased and further validated for naturalness and fluency by crowdworkers. Crowdworkers also assess the relevance of entities based on their documents and highlight attribution of query constraints to spans of document text. We analyze several modern retrieval systems, finding that they often struggle on such queries. Queries involving negation and conjunction are particularly challenging and systems are further challenged with combinations of these operations.
Chaitanya Malaviya, Peter Shaw, Ming-Wei Chang, Kenton Lee, Kristina Toutanova
2023-05-19T14:19:32Z
http://arxiv.org/abs/2305.11694v2
# Quest: A Retrieval Dataset of Entity-Seeking Queries with Implicit Set Operations ###### Abstract Formulating selective information needs results in queries that implicitly specify set operations, such as intersection, union, and difference. For instance, one might search for "shorebirds that are not sandpipers" or "science-fiction films shot in England". To study the ability of retrieval systems to meet such information needs, we construct Quest, a dataset of 3357 natural language queries with implicit set operations, that map to a set of entities corresponding to Wikipedia documents. The dataset challenges models to match multiple constraints mentioned in queries with corresponding evidence in documents and correctly perform various set operations. The dataset is constructed semi-automatically using Wikipedia category names. Queries are automatically composed from individual categories, then paraphrased and further validated for naturalness and fluency by crowdworkers. Crowdworkers also assess the relevance of entities based on their documents and highlight attribution of query constraints to spans of document text. We analyze several modern retrieval systems, finding that they often struggle on such queries. Queries involving negation and conjunction are particularly challenging and systems are further challenged with combinations of these operations.1 Footnote 1: The dataset is available at [https://github.com/google-research/language/tree/master/language/quest](https://github.com/google-research/language/tree/master/language/quest). ## 1 Introduction People often express their information needs with multiple preferences or constraints. Queries corresponding to such needs typically implicitly express set operations such as intersection, difference, and union. For example, a movie-goer might be looking for a _science-fiction film from the 90s which does not feature aliens_ and a reader might be interested in a _historical fiction novel set in France_. Similarly, a botanist attempting to identify a species based on their recollection might search for _shrubs that are evergreen and found in Panama_. Further, if the set of entities that satisfy the constraints is relatively small, a reader may like to see and explore an exhaustive list of these entities. In addition, to verify and trust a system's recommendations, users benefit from being shown evidence from trusted sources (Lamm et al., 2021). Addressing such queries has been primarily studied in the context of question answering with structured knowledge bases (KBs), where query constraints are grounded to predefined predicates and symbolically executed. However, KBs can be incomplete and expensive to curate and maintain. Meanwhile, advances in information retrieval may enable developing systems that can address such queries without relying on structured KBs, by matching query constraints directly to supporting evidence in text documents. Figure 1: The dataset construction process for Quest. First, (1) we sample Wikipedia category names and find their corresponding set of relevant entities. (2) Then, we compose a query with set operations and have this query paraphrased by crowdworkers. (3) These queries are then validated for fluency and naturalness. (4) Finally, crowdworkers mark the entities’ relevance by highlighting attributable spans in their documents. However, queries that combine multiple constraints with implicit set operations are not well represented in existing retrieval benchmarks such as MSMarco Nguyen et al. 
(2016) and Natural Questions Kwiatkowski et al. (2019). Also, such datasets do not focus on retrieving an exhaustive document set, instead limiting annotation to the top few results of a baseline information retrieval system. To analyze retrieval system performance on such queries, we present Quest, a dataset with natural language queries from four domains, that are mapped to relatively comprehensive sets of entities corresponding to Wikipedia pages. We use categories and their mapping to entities in Wikipedia as a building block for our dataset construction approach, but do not allow access to this semi-structured data source at inference time, to simulate text-based retrieval. Wikipedia categories represent a broad set of natural language descriptions of entity properties and often correspond to selective information need queries that could be plausibly issued by a search engine user. The relationship between property names and document text is often subtle and requires sophisticated reasoning to determine, representing the natural language inference challenge inherent in the task. Our dataset construction process is outlined in Figure 1. The base queries are semi-automatically generated using Wikipedia category names. To construct complex queries, we sample category names and compose them by using pre-defined templates (for example, \(A\cap B\setminus C\)). Next, we ask crowdworkers to paraphrase these automatically generated queries, while ensuring that the paraphrased queries are fluent and clearly describe what a user could be looking for. These are then validated for naturalness and fluency by a different set of crowdworkers, and filtered according to those criteria. Finally, for a large subset of the data, we collect scalar relevance labels based on the entity documents and fine-grained textual attributions mapping query constraints to spans of document text. Such annotation could aid the development of systems that can make precise inferences from trusted sources. Performing well on this dataset requires systems that can match query constraints with corresponding evidence in documents and handle set operations implicitly specified by the query (see Figure 2), while also efficiently scaling to large collections of entities. We evaluate several retrieval systems by finetuning pretrained models on our dataset. Systems are trained to retrieve multi-document sets given a query. We find that current dual encoder and cross-attention models up to the size of T5-Large Raffel et al. (2020) are largely not effective at performing retrieval for queries with set operations. Queries with conjunctions and negations prove to be especially challenging for models, and systems are further challenged with combinations of these operations. Our error analysis reveals that non-relevant false positive entities are often caused by the model ignoring negated constraints, or ignoring the conjunctive constraints in a query. ## 2 Related Work Previous work in question answering and information retrieval has focused on QA over knowledge bases as well as open-domain QA and retrieval over a set of entities or documents. We highlight how these relate to our work below. Knowledge Base QA. Several datasets have been proposed for question answering over knowledge bases (Berant et al. (2013); Yih et al. (2016); Talmor and Berant (2018); Keysers et al. (2020); Gu et al. (2021), _inter alia_). 
These benchmarks require retrieval of a set of entities that exist as nodes or relations in an accompanying knowledge base. Questions are optionally supplemented with logical forms. Figure 2: An example of a query and relevant entity from Quest. The attribution for different query constraints can come from different parts of the document. Lan et al. (2021) provide a comprehensive survey of complex KBQA datasets. Previous work has simultaneously noted that large curated KBs are incomplete Watanabe et al. (2017). Notably, KBQA systems operate over a constrained answer schema, which limits the types of queries they can handle. Further, these schemas are expensive to construct and maintain. For this reason, our work focuses on a setting where we do not assume access to a KB. We note that KBQA datasets have also been adapted to settings where a KB is incomplete or unavailable Watanabe et al. (2017); Sun et al. (2019). This was done by either removing some subset of the data from the KB or ignoring the KB entirely. A key difference from these datasets is also that we do not focus on multi-hop reasoning over multiple documents. Instead, the relevance of an entity can be determined solely based on its document. Open-Domain QA and Retrieval. Many open-domain QA benchmarks, which consider QA over unstructured text corpora, have been proposed in prior work. Some of these, such as TREC Craswell et al. (2020), MSMarco Nguyen et al. (2016) and Natural Questions Kwiatkowski et al. (2019), are constructed from "found data", i.e., real user queries on search engines. Thakur et al. (2021) present a benchmark where they consider many such existing datasets. Datasets such as HotpotQA Yang et al. (2018) and MultiRC Khashabi et al. (2018) have focused on multi-hop question answering. Other work has explored e-commerce datasets (for example, Kong et al. (2022)), but these have not been released publicly. Notably, the focus of these datasets differs from ours, as we focus on queries that contain implicit set operations over exhaustive answer sets. Such queries are not well represented in existing datasets because they occur in the tail of the query distributions considered. Multi-Answer Retrieval. Related work Min et al. (2021); Amouyal et al. (2022) also studies the problem of _multi-answer retrieval_, where systems are required to predict multiple distinct answers for a query. Min et al. (2021) adapt existing datasets (for example, WebQuestionsSP Yih et al. (2016)) to study this setting and propose a new metric, MRecall@K, to evaluate exhaustive recall of multiple answers. We also consider the problem of multi-answer set retrieval, but consider queries that implicitly contain set constraints. In concurrent work, RomQA Zhong et al. (2022) proposes an open-domain QA dataset, focusing on combinations of constraints extracted from Wikidata. RomQA shares our motivation to enable answering queries with multiple constraints, which have possibly large answer sets. To make attribution to evidence feasible without human annotation, RomQA focuses on questions whose component constraints can be verified from single entity-linked sentences from Wikipedia abstracts, annotated with relations automatically through distant supervision, with high precision but possibly low recall (T-Rex corpus). In Quest, we broaden the scope of query-evidence matching operations by allowing for attribution through more global, document-level inference. 
To make human annotation for attribution feasible, we limit the answer set size and the evidence for an answer to a single document. ## 3 Dataset Generation Quest consists of 3357 queries paired with up to 20 corresponding entities. Each entity has an associated document derived from its Wikipedia page. The dataset is divided into 1307 queries for training, 323 for validation, and 1727 for testing. The task for a system is to return the correct set of entities for a given query. Additionally, as the collection contains 325,505 entities, the task requires retrieval systems that can scale efficiently. We do not allow systems to access additional information outside of the text descriptions of entities at inference time. Category labels are omitted from all entity documents. ### 3.1 Atomic Queries The base atomic queries (i.e., queries without any introduced set operations) in our dataset are derived from Wikipedia category names2. These are hand-curated natural language labels assigned to groups of related documents in Wikipedia3. Category assignments to documents allow us to automatically determine the set of answer entities for queries with high precision and relatively high recall. We compute transitive closures of all relevant categories to determine their answer sets. Footnote 2: We use the Wikipedia version from 06/01/2022. Footnote 3: Note that these category labels can sometimes be conjunctive themselves, potentially increasing complexity. However, repurposing these categories for constructing queries poses two challenges: 1) lack of evidence in documents: documents may not contain sufficient evidence for judging their relevance to a category, potentially providing a noisy signal for relevance attributable to the document text; 2) low recall: entities may be missing from categories to which they belong. For about half of the dataset, we crowdsource relevance labels and attribution based on document text, and investigate recall through manual error analysis (§5). We select four domains to represent some diversity in queries: films, books, animals and plants. Focusing on four rather than all possible domains enables higher quality control. The former two model a general search scenario, while the latter two model a scientific search scenario. ### 3.2 Introducing set operations To construct queries with set operations, we define templates that represent plausible combinations of atomic queries. Denoting atomic queries as A, B and C, our templates and corresponding examples from different domains are listed in Table 1. Templates were constructed by composing three basic set operations (intersection, union and difference). They were chosen to ensure unambiguous interpretations of the resulting queries by omitting those combinations of set operations that are non-associative. Below, we describe the logic behind sampling atomic queries (i.e., \(A\), \(B\), \(C\)) for composing complex queries with different set operations. In all cases, we ensure that answer sets contain between 2-20 entities so that crowdsourcing relevance judgements is feasible. We sample 200 queries per template and domain, for a total of 4200 initial queries. The dataset is split into train + validation (80-20 split) and testing equally. In each of these sets, we sampled an equal number of queries per template. Intersection. The intersection operation for a template \(A\cap B\) is particularly interesting and potentially challenging when both \(A\) and \(B\) have large answer sets but their intersection is small. 
We require the minimum answer set sizes of each \(A\) and \(B\) to be fairly large (>50 entities), while requiring their intersection to be small (2-20 entities).

**Difference.** Similar to intersection, we require the answer sets for both \(A\) and \(B\) to be substantial (>50 entities), but also place maximum size constraints on both \(A\) (<200 entities) and \(B\) (<10000 entities), as very large categories tend to suffer from recall issues in Wikipedia. We also limit the intersection of \(A\) and \(B\) (see reasoning in Appendix B).

**Union.** For the union operation, we require both \(A\) and \(B\) to be well-represented through the entities in the answer set for their union \(A\cup B\). Hence, we require both \(A\) and \(B\) to have at least 3 entities. Further, we require their intersection to be non-zero but less than 1/3rd of their union, so that \(A\) and \(B\) are somewhat related queries.

| Domain | Template | Example | Num. Queries |
| --- | --- | --- | --- |
| Films | \(A\) | Biographical Italian bandits films | 125 |
| Films | \(A\cup B\) | Dutch crime comedy or romantic comedy films | 135 |
| Films | \(A\cap B\) | Italian crime films set in the 1970's | 143 |
| Films | \(A\setminus B\) | Indian sport films that are not about cricket | 126 |
| Films | \(A\cup B\cup C\) | Dutch or Swiss war films, or war films from 1945 | 122 |
| Films | \(A\cap B\cap C\) | 2020's drama films shot in Cleveland | 124 |
| Films | \(A\cap B\setminus C\) | Epic films about Christianity not set in Israel | 121 |
| Books | \(A\) | 2004 German novels | 125 |
| Books | \(A\cup B\) | 1925 Russian novels or Novels by Ivan Bunin | 125 |
| Books | \(A\cap B\) | 1991 Novels set in Iceland | 133 |
| Books | \(A\setminus B\) | Novels set in the 1900s not based on real events | 123 |
| Books | \(A\cup B\cup C\) | Novels set in Nanjing, Hebei, or Jiangsu | 125 |
| Books | \(A\cap B\cap C\) | English language Harper & Brothers Children's fiction books | 124 |
| Books | \(A\cap B\setminus C\) | Novels that take place in Vietnam that aren't about war | 115 |
| Plants | \(A\) | plants only from Gabon | 115 |
| Plants | \(A\cup B\) | Trees of Manitoba or Subarctic America | 125 |
| Plants | \(A\cap B\) | Shrubs used in traditional Native American medicine | 135 |
| Plants | \(A\setminus B\) | Trees from the Northwestern US that can't be found in Canada | 61 |
| Plants | \(A\cup B\cup C\) | Moths, Insects, or Arthropods of Guadeloupe | 121 |
| Plants | \(A\cap B\cap C\) | Plants the Arctic, the United Kingdom, and the Caucasus have in common | 123 |
| Plants | \(A\cap B\setminus C\) | Orchids of Indonesia and Malaysia but not Thailand | 122 |
| Animals | \(A\) | what are the Rodents of Cambodia | 115 |
| Animals | \(A\cup B\) | Animals from Cuba or Jamaica that are extinct | 121 |
| Animals | \(A\cap B\) | Neogene mammals of Africa that are Odd-toed ungulates | 111 |
| Animals | \(A\setminus B\) | Non-Palearctic birds of Mongolia | 110 |
| Animals | \(A\cup B\cup C\) | Cenozoic birds of Asia or Africa or Paleogene birds of Asia | 114 |
| Animals | \(A\cap B\cap C\) | Birds of Chile that are also Birds of Peru and Fauna of the Guianas | 104 |
| Animals | \(A\cap B\setminus C\) | mammals found in the Atlantic Ocean and Colombia, but not in Brazil | 114 |

Table 1: Templates used for construction of queries with set operations and examples from the four domains considered, along with the count of examples per each domain and template.

For all other templates that contain compositions of the above set operations, we apply the same constraints recursively (a sketch of these size checks follows below).
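The constraints above are simple set-cardinality tests. A minimal sketch in Python is below; the function names are ours and the category-to-entity-set mapping is assumed, with the thresholds taken verbatim from this section. This illustrates the checks rather than reproducing the authors' generation pipeline.

```python
# Sketch of the answer-set size constraints described above. Function names
# and the set-valued inputs are hypothetical; thresholds follow the text.
def valid_intersection(a: set, b: set) -> bool:
    # A and B fairly large (>50 entities), their intersection small (2-20).
    return len(a) > 50 and len(b) > 50 and 2 <= len(a & b) <= 20

def valid_difference(a: set, b: set) -> bool:
    # Both substantial (>50), with caps on A (<200) and B (<10000);
    # an additional limit on |A & B| applies (see Appendix B).
    return 50 < len(a) < 200 and 50 < len(b) < 10000 and 2 <= len(a - b) <= 20

def valid_union(a: set, b: set) -> bool:
    # Both operands represented (>=3 entities); overlap non-zero but below
    # one third of the union, so that A and B are related queries.
    union, inter = a | b, a & b
    return (len(a) >= 3 and len(b) >= 3
            and 0 < len(inter) < len(union) / 3
            and 2 <= len(union) <= 20)
```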
For example, for \(A\cap B\setminus C\), we sample atomic queries \(A\) and \(B\) for the intersection operation, then sample \(C\) based on the relationship between \(A\cap B\) and \(C\).

### Annotation Tasks

Automatically generating queries based on templates results in queries that are not always fluent and coherent. Further, entities mapped to a query may not actually be relevant and do not always have attributable evidence for judging their relevance. We conduct crowdsourcing to tackle these issues. The annotation tasks aim at ensuring that 1) queries are fluent, unambiguous and contain diverse natural language logical connectives, 2) entities are verified as being relevant or non-relevant, and 3) relevance judgements are attributed to document text for each relevant entity. Crowdsourcing is performed in three stages, described below. More annotation details and the annotation interfaces can be found in Appendix C.

#### 3.3.1 Paraphrasing

Crowdworkers were asked to paraphrase a templatically generated query so that the paraphrased query is fluent, expresses all constraints in the original query, and clearly describes what a user could be looking for. This annotation was done by one worker per query.

#### 3.3.2 Validation

This stage is aimed at validating the queries we obtain from the paraphrasing stage. Crowdworkers were given queries from the first stage and asked to 1) label whether the query is fluent, 2) label whether it is equivalent to the original templatic query in meaning, and 3) rate its naturalness (how likely it is to be issued by a real user). This annotation was done by 3 workers per query. We excluded those queries which were rated as not fluent, unnatural or having a different meaning than the original query, based on a majority vote. Based on the validation, we removed around 11% of the queries from stage 1.

#### 3.3.3 Relevance Labeling

Next, crowdworkers were asked to provide relevance judgements for the automatically determined answer sets of queries. Specifically, they were given a query and associated entities/documents, and asked to label their relevance on a scale of 0-3 (definitely not relevant, likely not relevant, likely relevant, definitely relevant). They were asked to ensure that relevance is mostly inferred from the document, but they could use some background knowledge and do minimal research. We also asked them to provide attributions for document relevance. Specifically, we ask them to first label whether the document provides sufficient evidence for the relevance of the entity (complete/partial/no). Then, for different phrases in the query (determined by the annotator), we ask them to mark sentence(s) in the document that indicate its relevance. The attribution annotation is broadly inspired by Rashkin et al. (2021). For negated constraints, we ask annotators to mark attributable sentences if they provide counter-evidence. Since this annotation was time-intensive, we collected these annotations for two domains (films and books). We found that relevance labeling was especially difficult for the plants and animals domains, as they required more specialized scientific knowledge. In our pilot study prior to larger scale data collection, we collected 3 relevance ratings from different annotators for 905 query and document pairs from the films domain. In 61.4% of cases, all 3 raters judged the document to be "Definitely relevant" or "Likely relevant", or all 3 raters judged the document to be "Definitely not relevant" or "Likely not relevant".
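The chance-corrected agreement reported next is Fleiss' kappa. For reference, it can be computed with a short routine like the one below, assuming the three ratings per pair are first collapsed into an items-by-categories count matrix; this is a generic sketch, not code from the paper.

```python
import numpy as np

def fleiss_kappa(counts: np.ndarray) -> float:
    """counts[i, j] = number of raters assigning item i to category j;
    every item is assumed to have the same number of raters (3 here)."""
    n = counts.sum(axis=1)[0]                  # raters per item
    N = counts.shape[0]                        # number of items
    p_j = counts.sum(axis=0) / (N * n)         # overall category proportions
    P_i = (np.square(counts).sum(axis=1) - n) / (n * (n - 1))  # per-item agreement
    P_bar, P_e = P_i.mean(), np.square(p_j).sum()
    return (P_bar - P_e) / (1 - P_e)
```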
The Fleiss' kappa metric on this data was found to be K=0.43. We excluded from a query's answer set all entities which were marked as likely or definitely not relevant based on the document text. Around 23.7% of query-document pairs from stage 2 were excluded.

### Dataset Statistics

Basic dataset statistics are reported in Table 2. The dataset contains more entities from the films domain, because this domain is more populated in Wikipedia. The average length of queries is 8.6 words and the average document length is 452 words. Documents from the films and books domains are longer on average, as they often contain plots and storylines. Around ~69% of entities have complete evidence and ~30% have partial evidence. Evidence was labeled as partial when not all phrases in the query had explicit evidence in the document (i.e., they may require background knowledge or reasoning). There are on average 33.2 words attributed for each entity, with the maximum attributed text span ranging up to 1837 words. Finally, the average answer set size is 10.5 entities.

| | Films | Books | Plants | Animals | **All** |
| --- | --- | --- | --- | --- | --- |
| Num. Queries | 896 | 870 | 802 | 789 | 3357 |
| Num. Entities | 146368 | 50784 | 83672 | 44681 | 325505 |
| Avg. Query Len. | 8.68 | 7.93 | 8.94 | 9.09 | 8.64 |
| Avg. Doc. Len. | 532.2 | 655.3 | 258.1 | 293.1 | 452.2 |
| Avg. Ans. Set Size | 8.8 | 8.6 | 12.2 | 12.6 | 10.5 |

Table 2: Statistics of examples in Quest across different domains.

### Additional Training Examples

Beyond the annotated data, we generated additional synthetic examples for training. We found that including such examples improved model performance, and we include them for the experiments in §4. To generate these examples, we sample 5000 atomic queries from all domains, ensuring that they do not already appear as sub-queries in any of the queries in Quest, and use their corresponding entities in Wikipedia as their relevant entity sets.

## 4 Experimental Setup

We evaluate modern retrieval systems to establish baseline performances. We also perform extensive error analysis to understand patterns of model errors and the quality of the labels in Quest.

### Task Definition

We consider a corpus, \(\mathcal{E}\), that contains entities across all domains in the dataset. Each entity is accompanied with a document based on its Wikipedia page. An example in our dataset consists of a query, \(x\), and an annotated set of relevant entities, \(y\subset\mathcal{E}\). As described in §3, for all examples \(|y|\leq 20\). Our task is to develop a system that, given \(\mathcal{E}\) and a query \(x\), predicts a set of relevant entities, \(\hat{y}\subset\mathcal{E}\).

### Evaluation

Our primary evaluation metric is average \(F_{1}\), which averages per-example \(F_{1}\) scores. We compute \(F_{1}\) for each example by comparing the predicted set of entities, \(\hat{y}\), with the annotated set, \(y\).

### Baseline Systems

We evaluated several combinations of retrievers and classifiers, as shown in Figure 3. For the retriever component, we consider a sparse BM25 retriever (Robertson et al., 2009) and a dense dual encoder retriever (denoted DE). Following Ni et al. (2022), we initialize our dual encoder from a T5 (Raffel et al., 2020) encoder and train with an in-batch sampled softmax loss (Henderson et al., 2017). Once we have a candidate set, we need to determine a set of relevant entities.
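Before turning to the classifier, the two metrics used to score systems, average \(F_{1}\) (§4.2) and MRecall@K (reported for retrievers in §5), reduce to a few lines of set arithmetic. This is a minimal sketch with our own function names, not the paper's evaluation code.

```python
def example_f1(pred: set, gold: set) -> float:
    # Per-example F1 between the predicted and annotated entity sets.
    tp = len(pred & gold)
    if tp == 0:
        return 0.0
    precision, recall = tp / len(pred), tp / len(gold)
    return 2 * precision * recall / (precision + recall)

def average_f1(preds, golds) -> float:
    # Average of per-example F1 scores (the primary metric).
    return sum(example_f1(p, g) for p, g in zip(preds, golds)) / len(golds)

def mrecall_at_k(candidates, golds, k: int) -> float:
    # Fraction of examples whose top-k candidates contain *all* relevant docs.
    return sum(g <= set(c[:k]) for c, g in zip(candidates, golds)) / len(golds)
```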
To classify relevance of each candidate document for the given query, we consider a cross-attention model which consists of a T5 encoder and decoder. We train the cross-attention classifier using a binary cross-entropy loss with negative examples based on non-relevant documents among the top 1,000 documents retrieved by BM25 and random non-relevant documents (similarly to Nogueira and Cho (2019)). As cross-attention classification for a large number of candidates is computationally expensive, we restrict BM25 and the dual encoder to retrieve 100 candidates, which are then considered by the cross-attention classifier. As our T5-based dual encoder can only efficiently accommodate up to 512 tokens, we truncate document text. We discuss the impact of this and alternatives in §5. Further, since T5 was pre-trained on Wikipedia, we investigate the impact of memorization in Appendix D. Additional details and hyperparameter settings are in Appendix A.

Figure 3: We compare several systems consisting of a retriever for efficiently selecting a set of candidates from the document corpus and a document relevance classifier for determining the final predicted document set.

### Manual Error Annotation

For the best overall system, we sampled errors and manually annotated 1145 query-document pairs from the validation set. For the retriever, we sampled relevant documents not included in the top-100 candidate set and non-relevant documents ranked higher than relevant ones. For the classifier, we sampled false positive and false negative errors made in the top-100 candidate set. This annotation process included judgements of document relevance (to assess agreement with the annotations in the dataset) and whether the document (and the truncated version considered by the dual encoder or classifier) contained sufficient evidence to reasonably determine relevance. We also annotated relevance for each constraint within a query. We discuss these results in §5.

## 5 Results and Analysis

We report the performance of our baseline systems on the test set in Table 3. In this section, we summarize the key findings from our analysis of these results and the error annotation described in §4.4.

**Dual encoders outperform BM25.** As shown in Table 3, the best overall system uses a T5-Large Dual Encoder instead of BM25 for retrieval. The performance difference is even more significant when comparing recall of Dual Encoders and BM25 directly. We report average recall (average per-example recall of the full set of relevant documents) and MRecall (Min et al., 2021) (the percentage of examples where the candidate set contains all relevant documents), over various candidate set sizes in Table 4.

**Retrieval and classification are both challenging.** As we consider only the top-100 candidates from the retriever, the retriever's recall@100 sets an upper bound on the recall of the overall system. Recall@100 is only 0.476 for the T5-Large Dual Encoder, and the overall recall is further reduced by the T5-Large classifier to 0.368, despite achieving only 0.165 precision. This suggests that there is room for improvement in both stages to improve overall scores. As performance improves with larger T5 sizes for both retrieval and classification, further model scaling could be beneficial.

**Models struggle with intersection and difference.** We also analyzed results across different templates and domains, as shown in Table 5. Different constraints lead to varying distributions over answer set sizes and the atomic categories used.
Therefore, it can be difficult to interpret differences in F1 scores across templates. Nevertheless, we found that queries with set union have the highest average F1 scores. Queries with set intersection have the lowest average F1 scores, and queries with set difference also appear to be challenging.

| Retriever (K=100) | Classifier | Avg. Precision | Avg. Recall | Avg. F1 |
| --- | --- | --- | --- | --- |
| BM25 | T5-Base | 0.168 | 0.160 | 0.141 |
| BM25 | T5-Large | 0.178 | 0.168 | 0.150 |
| T5-Large DE | T5-Base | 0.153 | 0.354 | 0.176 |
| T5-Large DE | T5-Large | 0.165 | 0.368 | **0.192** |

Table 3: Average Precision, Recall, and F1 of baseline systems evaluated on the test set.

| Retriever | Avg. Recall@20 | @50 | @100 | @1000 | MRecall@20 | @50 | @100 | @1000 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| BM25 | 0.104 | 0.153 | 0.197 | 0.395 | 0.020 | 0.030 | 0.037 | 0.087 |
| T5-Base DE | 0.255 | 0.372 | 0.455 | 0.726 | 0.045 | 0.088 | 0.127 | 0.360 |
| T5-Large DE | **0.265** | **0.386** | **0.476** | **0.757** | **0.047** | **0.100** | **0.142** | **0.408** |

Table 4: Average Recall and MRecall of various retrievers.

To analyze why queries with conjunction and negation are challenging, we labeled the relevance of individual query constraints (§4.4) in cases where a system incorrectly judges a non-relevant document to be relevant. The results are summarized in Table 6. For a majority of false positive errors involving intersection, at least one constraint is satisfied. This could be interpreted as models incorrectly treating intersection as union when determining relevance. Similarly, for a majority of examples with set difference, the negated constraint is not satisfied. This suggests that the systems are not sufficiently sensitive to negations.

**There is significant headroom to improve both precision and recall.** As part of our manual error analysis (§4.4), we made our own judgements of relevance and measured agreement with the relevance annotations in Quest. As this analysis focused on cases where our best system disagreed with the relevance labels in the dataset, we would expect agreement on these cases to be significantly lower than on randomly selected query-document pairs in the dataset. Therefore, it provides a focused way to judge the headroom and annotation quality of the dataset. For false negative errors, we judged 91.1% of the entities to be relevant for the films and books domains, and 81.4% for plants and animals. Notably, we collected relevance labels for the films and books domains and removed some entities based on these labels, as described in §3, which likely explains the higher agreement for false negatives from these domains. This indicates significant headroom for improving recall as defined by Quest, especially for the domains where we collected relevance labels. For false positive errors, we judged 28.8% of the entities to be relevant, showing a larger disagreement with the relevance labels in the dataset. This is primarily due to entities not included in the entity sets derived from the Wikipedia category taxonomy (97.7%), rather than entities removed due to relevance labeling. This is a difficult issue to fully resolve, as it is not feasible to exhaustively label relevance for all entities to correct for recall issues in the Wikipedia category taxonomy.
Future work can use pooling to continually grow the set of relevant documents (Sparck Jones and Van Rijsbergen, 1975). Despite this, our analysis suggests there is significant headroom for improving precision, as we judged a large majority of the false positive predictions to be non-relevant.

**Truncating document text usually provides sufficient context.** In our experiments, we truncate document text to 512 tokens for the dual encoder, and 384 tokens for the classifier to allow for the document and query to be concatenated. Based on our error analysis (§4.4), out of the documents with sufficient evidence to judge relevance, evidence occurred in this truncated context 93.2% of the time for the dual encoder, and 96.1% of the time for the classifier. This may explain the relative success of this simple baseline for handling long documents. We also evaluated alternative strategies, but these performed worse in preliminary experiments5. Future work can evaluate efficient transformer variants (Guo et al., 2022; Beltagy et al., 2020).

Footnote 5: For the dual encoder, we split documents into overlapping chunks of 512 tokens, and aggregated scores at inference (Dai and Callan, 2019). For the cross-attention model, we evaluated using BM25 to select the top-3 passages of length 128.

| Template | Films | Books | Plants | Animals | **All** |
| --- | --- | --- | --- | --- | --- |
| \(A\) | 0.231 | 0.436 | 0.209 | 0.214 | 0.274 |
| \(A\cup B\) | 0.264 | 0.366 | 0.229 | 0.271 | 0.282 |
| \(A\cap B\) | 0.115 | 0.138 | 0.049 | 0.063 | 0.092 |
| \(A\setminus B\) | 0.177 | 0.188 | 0.216 | 0.204 | 0.193 |
| \(A\cup B\cup C\) | 0.200 | 0.348 | 0.306 | 0.294 | 0.287 |
| \(A\cap B\cap C\) | 0.086 | 0.121 | 0.07 | 0.065 | 0.086 |
| \(A\cap B\setminus C\) | 0.119 | 0.112 | 0.121 | 0.136 | 0.122 |
| **All** | 0.171 | 0.248 | 0.165 | 0.182 | 0.192 |

Table 5: F1 of our strongest baseline (T5-Large DE + T5-Large Classifier) across templates and domains.

| System | Template | 1 | 2 | 3 | Neg. |
| --- | --- | --- | --- | --- | --- |
| Retriever | \(A\cap B\) | 63.5 | 36.5 | — | — |
| Retriever | \(A\cap B\cap C\) | 56.5 | 37.0 | 6.5 | — |
| Retriever | \(A\setminus B\) | 80.3 | 19.7 | — | 59.1 |
| Retriever | \(A\cap B\setminus C\) | 47.6 | 40.5 | 11.9 | 26.2 |
| Classifier | \(A\cap B\) | 83.3 | 16.7 | — | — |
| Classifier | \(A\cap B\cap C\) | 73.2 | 22.0 | 4.9 | — |
| Classifier | \(A\setminus B\) | 81.0 | 19.1 | — | 38.1 |
| Classifier | \(A\cap B\setminus C\) | 95.5 | 4.6 | 0.0 | 68.2 |

Table 6: Analysis of false positive errors from the T5-Large classifier and cases where a non-relevant document was ranked ahead of a relevant one for the T5-Large dual encoder. For queries with conjunction, we determined the percentage of cases where 1, 2, or 3 constraints in the template were not satisfied by the predicted document (# Constraints). For queries with negation, we measured the percentage of cases where the negated constraint (Neg.) was not satisfied.

## 6 Conclusion

We present Quest, a new benchmark of queries which contain implicit set operations with corresponding sets of relevant entity documents. Our experiments indicate that such queries present a challenge for modern retrieval systems. Future work could consider approaches that have better inductive biases for handling set operations in natural language expressions (for example, Vilnis et al. (2018)).
The attributions in Quest can be leveraged for building systems that can provide fine-grained attributions at inference time. The potential of pretrained generative LMs and multi-evidence aggregation methods to answer set-seeking selective queries, while providing attribution to sources, can also be investigated.

## 7 Limitations

**Naturalness.** Since our dataset relies on Wikipedia category names and semi-automatically generated compositions, it does not represent an unbiased sample from a natural distribution of real search queries that contain implicit set operations. Further, we limit attention to non-ambiguous queries and do not address the additional challenges that arise due to ambiguity in real search scenarios. However, the queries in our dataset were judged to plausibly correspond to real user search needs, and system improvements measured on Quest should correlate with improvements on at least a fraction of natural search engine queries with set operations.

**Recall.** We also note that because Wikipedia categories have imperfect recall of all relevant entities (that contain sufficient evidence in their documents), systems may be incorrectly penalised for predicted relevant entities assessed as false positives. We quantify this in §5. We have also limited the trusted source for an entity to its Wikipedia document, but entities with insufficient textual evidence in their documents may still be relevant. Ideally, multiple trusted sources could be taken into account and evidence could be aggregated to make relevance decisions. RomQA Zhong et al. (2022) takes a step in this latter direction, although the evidence attribution is not manually verified.

**Answer Set Sizes.** To ensure that relevance labels are correct and verifiable, we seek the help of crowdworkers. However, this meant that we needed to restrict the answer set sizes to 20 for the queries in our dataset, to make annotation feasible. On one hand, this is realistic for a search scenario because users may only be interested in a limited set of results. On the other hand, our dataset does not model a scenario where the answer set sizes are much larger.

## Acknowledgements

We would like to thank Isabel Kraus-Liang, Mahesh Maddinala, Andrew Smith, Daphne Domansi, and all the annotators for their work. We would also like to thank Mark Yatskar, Dan Roth, Zhuyun Dai, Jianmo Ni, William Cohen, Andrew McCallum, Shib Sankar Dasgupta and Nicholas Fitzgerald for useful discussions.
2301.03767
Online Backfilling with No Regret for Large-Scale Image Retrieval
Backfilling is the process of re-extracting all gallery embeddings from upgraded models in image retrieval systems. It inevitably requires a prohibitively large amount of computation and even entails downtime of the service. Although backward-compatible learning sidesteps this challenge by tackling query-side representations, this leads to suboptimal solutions in principle because gallery embeddings cannot benefit from model upgrades. We address this dilemma by introducing an online backfilling algorithm, which enables us to achieve a progressive performance improvement during the backfilling process while not sacrificing the final performance of the new model after the completion of backfilling. To this end, we first propose a simple distance rank merge technique for online backfilling. Then, we incorporate a reverse transformation module for more effective and efficient merging, which is further enhanced by adopting a metric-compatible contrastive learning approach. These two components help to make the distances of old and new models compatible, resulting in desirable merge results during backfilling with no extra computational overhead. Extensive experiments show the effectiveness of our framework on four standard benchmarks in various settings.
Seonguk Seo, Mustafa Gokhan Uzunbas, Bohyung Han, Sara Cao, Joena Zhang, Taipeng Tian, Ser-Nam Lim
2023-01-10T03:10:32Z
http://arxiv.org/abs/2301.03767v1
# Online Backfilling with No Regret for Large-Scale Image Retrieval ###### Abstract Backfilling is the process of re-extracting all gallery embeddings from upgraded models in image retrieval systems. It inevitably requires a prohibitively large amount of computational cost and even entails the downtime of the service. Although backward-compatible learning sidesteps this challenge by tackling query-side representations, this leads to suboptimal solutions in principle because gallery embeddings cannot benefit from model upgrades. We address this dilemma by introducing an online backfilling algorithm, which enables us to achieve a progressive performance improvement during the backfilling process while not sacrificing the final performance of new model after the completion of backfilling. To this end, we first propose a simple distance rank merge technique for online backfilling. Then, we incorporate a reverse transformation module for more effective and efficient merging, which is further enhanced by adopting a metric-compatible contrastive learning approach. These two components help to make the distances of old and new models compatible, resulting in desirable merge results during backfilling with no extra computational overhead. Extensive experiments show the effectiveness of our framework on four standard benchmarks in various settings. + Footnote †: \({}^{\dagger}\) This work was mostly done during an internship at Meta AI. ## 1 Introduction Image retrieval models [5, 10, 21, 23] have achieved remarkable performance by adopting deep neural networks for representing images. Yet, all models need to be upgraded at times to take advantage of improvements in training datasets, network architectures, and training techniques. This unavoidably leads to the need for re-extracting the features from millions or even billions of gallery images using the upgraded new model. This process, called _backfilling_ or _re-indexing_, needs to be completed before the retrieval system can benefit from the new model, which may take months in practice. To sidestep this bottleneck, several backfilling-free approaches based on backward-compatible learning [4, 13, 19, 20, 22] have been proposed. They learn a new model while ensuring that its feature space is still compatible with the old one, thus avoiding the need for updating old gallery embeddings. Although these approaches have achieved substantial performance gains without backfilling, they achieve feature compatibility at the expense of feature discriminability and their performance is suboptimal. We argue that backward-compatible learning is not a fundamental solution and backfilling is still essential to accomplish state-of-the-art performance without performance sacrifices. To resolve this compatibility-discriminability dilemma, we relax the backfill-free constraint and propose a novel online backfilling algorithm equipped with three technical components. We posit that an online backfilling technique needs to satisfy three essential conditions: 1) immediate deployment after the completion of model upgrade, 2) progressive and non-trivial performance gains in the middle of backfilling, and 3) no degradation of final performance compared to offline backfilling. To this end, we first propose a distance rank merge framework to make online backfilling feasible, which retrieves images from both the old and new galleries separately and merge their results to obtain the final retrieval outputs even when backfilling is still ongoing. 
While this approach provides a monotonic performance increase with the progress of backfilling regardless of the gallery of interest and network architectures, it requires feature computations twice, once from the old model and another from the new one at the inference stage of a query. To overcome this limitation, we introduce a reverse transformation module, which is a lightweight mapping network between the old and new embeddings. The reverse transformation module allows us to obtain the query representations compatible with both the old and new galleries using only a single feature extraction. On the other hand, however, we notice that the scales of distance in the embedding spaces of the two models could be significantly different. We resolve the limitation with a metric compatible learning technique, which calibrates the distances of two models via contrastive learning, further enhancing performance of rank merge. The main contributions of our work are summarized as follows. * We propose an online backfilling approach, a fundamental solution for model upgrades in image retrieval systems, based on distance rank merge to overcome the compatibility-discriminability dilemma in existing compatible learning methods. * We incorporate a reverse query transform module to make it compatible with both the old and new galleries while computing the feature extraction of query only once in the middle of the backfilling process. * We adopt a metric-compatible learning technique to make the merge process robust by calibrating distances in the feature embedding spaces given by the old and new models. * The proposed approach outperforms all existing methods by significant margins on four standard benchmark datasets under various scenarios. The rest of this paper is organized as follows. Section 2 reviews the related works. We present the main framework of online backfilling in Section 3, and discuss the technical components for improvement in Section 4 and 5. We demonstrate the effectiveness of the proposed framework in Section 6 and conclude this paper in Section 7. ## 2 Related Work Backward compatible learningBackward compatibility refers to the property to support older versions in hardware or software systems. It has been recently used in model upgrade scenarios in image retrieval systems. Since the feature spaces given by the models relying on training datasets in different regimes are not compatible [11, 24], model upgrades require re-extraction of all gallery images from new models, which takes a huge amount of computational cost. To prevent this time-consuming backfilling cost, backward compatible training (BCT) [1, 13, 15, 26, 19, 22] has been proposed to learn better feature representations while being compatible with old embeddings, which makes the new model backfill-free. Shen _et al_. [19] employ the influence loss that utilizes the old classifier as a regularizer when training the new model. LCE [13] introduces an alignment loss to align the class centers between old and new models and a boundary loss that restricts more compact intra-class distributions for the new model. Bai _et al_. [1] propose a joint prototype transfer with structural regularization to align two embedding features. UniBCT [26] presents a structural prototype refinement algorithm that first refines noisy old features with graph transition and then conducts backward compatible training. 
Although these approaches improved compatible performance without backfilling, they clearly sacrifice feature discriminability to achieve feature compatibility with non-ideal old gallery embeddings.

**Compatible learning with backfilling.** To overcome the inherent limitation of backward compatible learning, several approaches [20, 25, 17] have been proposed to utilize backfilling efficiently. Forward compatible training (FCT) [17] learns a lightweight transformation module that updates old gallery embeddings to be compatible with new embeddings. Although it gives better compatible performance than BCT, it requires additional side-information [2] to map from old to new embeddings, which limits its practicality. Moreover, FCT still suffers from a computational bottleneck until all old gallery embeddings are transformed, especially when the side-information needs to be extracted. On the other hand, RACT [25] and BiCT [20] alleviate this bottleneck issue by backfilling the gallery embeddings in an online manner. RACT first trains a backward-compatible new model with a regression-alleviating loss, then backfills the old gallery embeddings with the new model. Because the new feature space is compatible with the old one, the new model can be deployed right away while backfilling is carried out in the background. BiCT further reduces the backfilling cost by transforming the old gallery embeddings with forward-compatible training [17]. Although both approaches can utilize online backfilling, they still sacrifice the final performance because the final new embeddings are constrained by the old ones. Unlike these methods, our framework enables online backfilling while fully exploiting the final new model performance without any degradation.

## 3 Image Retrieval by Rank Merge

This section discusses our baseline image retrieval algorithm that makes online backfilling feasible. We first present our motivation and then describe technical details with empirical observations.

### Overview

Our goal is to develop a fundamental solution via online backfilling to overcome the compatibility-discriminability trade-off in compatible model upgrade. This strategy removes the inherent limitation of backfill-free backward-compatible learning, namely the inability to use state-of-the-art representations of gallery images after model upgrades, while also avoiding the prohibitive cost of offline backfilling, where we cannot benefit from the model upgrade until backfilling is completed. To be specific, the proposed image retrieval system with online backfilling should satisfy the following three conditions:

1. The system can be deployed immediately as soon as the model upgrade is complete.
2. The performance should monotonically increase, without negative flips1, as backfill progresses.
3. The final performance should not be sacrificed compared to the algorithm relying on offline backfilling.

Footnote 1: The "negative flip" refers to performance degradation caused by incorrect retrievals of samples by the new model, which were correctly recognized by the old model.

We present a distance rank merge approach for image retrieval, which enables online backfilling in arbitrary model upgrade scenarios. Our method maintains two separate retrieval pipelines corresponding to the old and new models and merges the retrieval results from the two models based on distances from a query embedding.
This allows us to run the retrieval system without a warm-up period and achieve surprisingly good results during the backfill process. Note that the old and new models are not required to be compatible at this moment but we will make them so to further improve performance in the subsequent sections. ### Formulation Let \(\mathbf{q}\in\mathbf{Q}\) be a query image and \(\mathbf{G}=\{\mathbf{g}_{1},...,\mathbf{g}_{N}\}\) be a gallery composed of \(N\) images. An embedding network \(\phi(\cdot)\) projects an image onto a learned feature embedding space. To retrieve the closest gallery image given a query, we find \(\arg\min_{\mathbf{g}\in\mathbf{G}}\text{dist}\left(\phi(\mathbf{q}),\phi( \mathbf{g})\right)\), where \(\text{dist}(\cdot,\cdot)\) is a distance metric. Following [19], we define the retrieval performance as \[\mathcal{M}(\phi(\mathbf{Q}),\phi(\mathbf{G})), \tag{1}\] where \(\mathcal{M}(\cdot,\cdot)\) is an evaluation metric such as mean average precision (mAP) or cumulative matching characteristics (CMC), and \(\phi(\cdot)\) indicates embedding models for query and gallery, respectively. Backward compatibilityDenote the old and new embedding networks by \(\phi^{\text{old}}(\cdot)\) and \(\phi^{\text{new}}(\cdot)\) respectively. If \(\phi^{\text{new}}(\cdot)\) is backward compatible with \(\phi^{\text{old}}(\cdot)\), then we can perform search on a set of old gallery embeddings using a new query embedding, _i.e._, \(\arg\min_{\mathbf{g}\in\mathbf{G}}\text{dist}(\phi^{\text{new}}(\mathbf{q}), \phi^{\text{old}}(\mathbf{g}))\). As stated in [19], the backward compatibility is achieved when the following criterion is satisfied: \[\mathcal{M}(\phi^{\text{new}}(\mathbf{Q}),\phi^{\text{old}}(\mathbf{G}))> \mathcal{M}(\phi^{\text{old}}(\mathbf{Q}),\phi^{\text{old}}(\mathbf{G})). \tag{2}\] From now, we refer to a pair of embedding networks for query and gallery as a retrieval system, _e.g._, \(\{\phi^{(\cdot)},\phi^{(\cdot)}\}\). Rank mergeAssume that the first \(M\) out of a total of \(N\) images are backfilled, _i.e._, \(\mathbf{G}^{\text{new}}=\{\mathbf{g}_{1},...,\mathbf{g}_{M}\}\) and \(\mathbf{G}^{\text{old}}=\{\mathbf{g}_{M+1},...,\mathbf{g}_{N}\}\). Note that the total number of stored gallery embeddings is fixed to \(N\) during the backfilling process, _i.e._, \(\mathbf{G}^{\text{old}}=\mathbf{G}-\mathbf{G}^{\text{new}}\). Then, we first conduct image retrieval using the individual retrieval systems, \(\{\phi^{\text{old}},\phi^{\text{old}}\}\) and \(\{\phi^{\text{new}},\phi^{\text{new}}\}\), independently as \[\mathbf{g}_{m} =\operatorname*{arg\,min}_{\mathbf{g}_{i}\in\mathbf{G}^{\text{ old}}}\text{ dist}\left(\phi^{\text{old}}(\mathbf{q}),\phi^{\text{old}}(\mathbf{g}_{i})\right), \tag{3}\] \[\mathbf{g}_{n} =\operatorname*{arg\,min}_{\mathbf{g}_{j}\in\mathbf{G}^{\text{ new}}}\text{ dist}\left(\phi^{\text{new}}(\mathbf{q}),\phi^{\text{new}}(\mathbf{g}_{j})\right). \tag{4}\] Figure 1 illustrates the retrieval process. For each query image \(\mathbf{q}\), we finally select \(\mathbf{g}_{m}\) if \(\text{dist}(\phi^{\text{old}}(\mathbf{q}),\phi^{\text{old}}(\mathbf{g}_{m}))< \text{dist}(\phi^{\text{new}}(\mathbf{q}),\phi^{\text{new}}(\mathbf{g}_{n}))\) and \(\mathbf{g}_{n}\) otherwise. 
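Concretely, the merge rule amounts to ranking every gallery item by its distance to the query in whichever system currently hosts it. A minimal sketch with \(\ell_{2}\) distance (the paper leaves dist(·,·) abstract) is below; array shapes are our assumptions.

```python
import numpy as np

def rank_merge(q_old, q_new, gallery_old, gallery_new, top_k=10):
    """q_old/q_new: query embeddings from the old and new models.
    gallery_old: [N-M, d] old-model embeddings of not-yet-backfilled items.
    gallery_new: [M, d] new-model embeddings of backfilled items."""
    d_old = np.linalg.norm(gallery_old - q_old, axis=1)
    d_new = np.linalg.norm(gallery_new - q_new, axis=1)
    merged = np.concatenate([d_old, d_new])  # distances compared directly
    order = np.argsort(merged)[:top_k]       # global rank merge by distance
    return order, merged[order]
```

Comparing raw distances across two independently trained embedding spaces is exactly what Sections 4 and 5 make safe; without calibration, the two distance scales need not be commensurate.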
The retrieval performance after rank merge during backfilling is given by \[\mathcal{M}_{t}:= \tag{5}\] \[\mathcal{M}(\{\phi^{\text{old}}(\mathbf{Q}),\phi^{\text{new}}( \mathbf{Q})\},\{\phi^{\text{old}}(\mathbf{G}^{\text{old}}_{t}),\phi^{\text{ new}}(\mathbf{G}^{\text{new}}_{t})\}),\] where \(t\in[0,1]\) indicates the rate of backfilling completion, _i.e._, \(|\mathbf{G}^{\text{new}}_{t}|=t|\mathbf{G}|\) and \(|\mathbf{G}^{\text{old}}_{t}|=(1-t)|\mathbf{G}|\). The criteria Figure 1: Image retrieval with the proposed distance rank merge technique. In the middle of backfilling, we retrieve images independently using two separate models and their galleries, and merge the retrieval results based on their distances. Note that the total number of gallery embeddings are fixed throughout the backfilling process, _i.e._, \(|\mathbf{G}|=|\mathbf{G}^{\text{new}}|+|\mathbf{G}^{\text{old}}|\). discussed in Section 3.1 are formally defined as \[\mathcal{M}_{0}\geq\mathcal{M}(\phi^{\text{old}}(\mathbf{Q}),\phi^{ \text{old}}(\mathbf{G})), \tag{6}\] \[\mathcal{M}_{1}\geq\mathcal{M}(\phi^{\text{new}}(\mathbf{Q}),\phi^ {\text{new}}(\mathbf{G})),\] (7) \[\mathcal{M}_{t_{1}}\geq\mathcal{M}_{t_{2}}\;\;\text{if}\;\;t_{1} \geq t_{2}. \tag{8}\] Comprehensive evaluationTo measure both backfilling cost and model performance comprehensively during online backfilling, we utilize the following metrics that calculate the area under mAP or CMC curves as \[\text{AUC}_{\text{mAP}}=\!\int_{0}^{1}\text{mAP}_{t}dt\;\;\text{and}\;\;\text {AUC}_{\text{CMC}}=\!\!\int_{0}^{1}\text{CMC}_{t}dt.\] ### Merge Results We present the results from the rank merge strategy on two standard benchmarks, including ImageNet-1K [18] and Places-365 [28], in Figure 2. Our rank merging approach yields strong and robust results for all datasets; both mAP and CMC monotonically increase without negative flips as backfill progresses even though the old and new models are not compatible each other. Also, it takes full advantage of the new model until the end of backfilling without suffering from performance degradation. This validates that our rank merge technique satisfies the criteria for online backfilling discussed in Section 3.1 and 3.2. Please refer to Section 6.1 for the experimental detail. ## 4 Reverse Query Transform Our baseline image retrieval method is model-agnostic, free from extra training, and effective for performance improvement. However, one may argue that the proposed approach is computationally expensive at inference time because we need to conduct feature extraction twice per query for both the old and new models. This section discusses how to alleviate this limitation by introducing a small network, called the reverse query transform module. ### Basic Formulation To reduce the computational cost incurred by computing query embeddings twice at inference stage, we compute the embedding using the new model and transform it to the version compatible with the old model through the reverse Figure 3: Reverse query transform module, \(\psi(\cdot)\), learns a mapping from new to old feature spaces. We only update the parameters of the module \(\psi(\cdot)\) (in red rectangle) during training. Figure 2: mAP and CMC results on the standard benchmarks using ResNet-18. _Old_ and _New_ denote the performance without backfilling and with offline backfilling, respectively. 
The proposed distance rank merging of the old and new models, denoted by _Merge_, exhibits desirable results; the performance monotonically increases as backfill progresses without negative flips for all datasets and our algorithm based on online backfilling achieves competitive final performances with offline backfilling. The numbers in the legend indicate either AUC\({}_{\text{mAP}}\) or AUC\({}_{\text{CMC}}\) scores. query transform module as illustrated in Figure 3. To establish such a mechanism, we fix the parameters of the old and new models \(\{\phi^{\text{old}},\phi^{\text{new}}\}\) after training them independently, and train a lightweight network, \(\psi(\cdot)\), which transforms the embedding in the new model to the one in the old model. For each training example \(\mathbf{x}\), our objective is minimizing the following loss: \[\mathcal{L}_{\text{RQT}}(\mathbf{x}):=\text{dist}\left(\psi\left(\phi^{\text{ new}}(\mathbf{x})\right),\phi^{\text{old}}(\mathbf{x})\right), \tag{9}\] where \(\text{dist}(\cdot,\cdot)\) is a distance metric such as \(\ell_{2}\) or cosine distances. Because we only update the parameters in \(\psi(\cdot)\), not the ones in \(\phi^{\text{new}}(\cdot)\) or \(\phi^{\text{old}}(\cdot)\), we can still access the representations given by the new model at no cost even after the optimization of \(\psi(\cdot)\). Note that this reverse query transform module differs from FCT [17] mainly in terms of transformation direction and requirement of side information. FCT performs a transformation from the old representation to the new, while the opposite is true for our proposed approach. Since the embedding quality of a new model is highly likely to be better than that of an old one, our reverse transformation module performs well even without additional side information and, consequently, is more practical and efficient. ### Integration into Baseline Retrieval System Figure 4 illustrates the distance rank merge process together with the proposed reverse transformation module. The whole procedure consists of two retrieval systems defined by a pair of query and gallery representations, backward retrieval system \(\{\phi^{\text{rev}},\phi^{\text{old}}\}\) and new retrieval system \(\{\phi^{\text{new}},\phi^{\text{new}}\}\), where \(\phi^{\text{rev}}:=\psi(\phi^{\text{new}})\). Note that we obtain both the new and compatible query embeddings, \(\phi^{\text{new}}(\mathbf{q})\) and \(\phi^{\text{rev}}(\mathbf{q})=\psi(\phi^{\text{new}}(\mathbf{q}))\), using a shared feature extraction network, \(\phi^{\text{new}}(\cdot)\). The entire image retrieval pipeline consists of two parts: 1) feature extraction of a query image and 2) search for the nearest image in a gallery from the query. Compared to the image retrieval based on a single model, the computational cost of the proposed model with rank merge requires negligible additional cost, which corresponds to feature transformation \(\psi(\cdot)\) in the first part. Note that the number of total gallery embeddings is fixed, _i.e._, \(|\mathbf{G}^{\text{new}}|+|\mathbf{G}^{\text{old}}|=|\mathbf{G}|\), so the cost of the second part is always the same in both cases. ## 5 Distance Calibration While the proposed rank merge technique with the basic reverse transformation module works well, there exists room for improvement in calibrating feature embedding spaces of both systems. This section discusses the issues in details and presents how we figure them out. 
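As a reference point before the calibration losses, the reverse query transform of Section 4 (Eq. (9)) can be sketched in PyTorch as follows. The embedding sizes and block depth are assumptions on our part; the block layout follows the Linear, BatchNorm, ReLU design reported in Section 6.2.

```python
import torch
import torch.nn as nn

def make_transform(dim_in: int, dim_out: int, n_blocks: int = 3) -> nn.Sequential:
    # (Linear -> BatchNorm -> ReLU) blocks with a plain Linear at the end.
    layers = []
    for _ in range(n_blocks - 1):
        layers += [nn.Linear(dim_in, dim_in), nn.BatchNorm1d(dim_in), nn.ReLU()]
    layers.append(nn.Linear(dim_in, dim_out))
    return nn.Sequential(*layers)

psi = make_transform(512, 512)  # reverse transform, new -> old space (dims assumed)

def rqt_loss(x, phi_new, phi_old, psi):
    with torch.no_grad():                    # both backbones stay frozen
        z_new, z_old = phi_new(x), phi_old(x)
    # Squared l2 stands in for dist(.,.); the paper allows l2 or cosine.
    return (psi(z_new) - z_old).pow(2).sum(dim=1).mean()
```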
### Cross-Model Contrastive Learning The objective in (9) cares about the positive pairs \(\phi^{\text{old}}\) and \(\phi^{\text{rev}}\) with no consideration of negative pairs, which can sometimes lead to misranked position. To handle this issue, we employ a supervised contrastive learning loss [14, 7] to consider both positive and negative pairs as follows: \[\mathcal{L}_{\text{CL}}(\mathbf{x}_{i},y_{i})=-\log\frac{\sum_{y_{k}=y_{i}}s_{ ik}^{\text{old}}}{\sum_{y_{l}=y_{i}}s_{ik}^{\text{old}}+\sum_{y_{k}\neq y_{i}}s_{ ik}^{\text{old}}}, \tag{10}\] where \(s_{ij}^{\text{old}}=\exp\left(-\text{dist}\left(\phi^{\text{rev}}(\mathbf{x}_ {i}),\phi^{\text{old}}(\mathbf{x}_{j})\right)\right)\) and \(y_{i}\) denotes the class membership of the \(i^{\text{th}}\) sample. For more robust contrastive training, we perform hard example mining for both the positive and negative pairs2. Such a contrastive learning approach facilitates distance calibration and improves feature discrimination because it promotes separation of the positive and negative examples. Footnote 2: For each anchor, we select the half of the examples in each of positive and negative labels based on the distances from the anchor. Now, although the distances within the backward retrieval system \(\{\phi^{\text{rev}},\phi^{\text{old}}\}\) become more comparable, they are still not properly calibrated in terms of the distances in the new retrieval system \(\{\phi^{\text{new}},\phi^{\text{new}}\}\). Considering distances in both retrieval systems jointly when we train the reverse transformation module, we can obtain more comparable distances and consequently achieve more reliable rank merge results. From this perspective, we propose a Figure 4: Image retrieval merging with reverse query transform module. Backward retrieval system consists of reversely transformed new query and old gallery, \(\{\phi^{\text{rev}},\phi^{\text{old}}\}\). The final image retrieval results are given by merging the outputs from \(\{\phi^{\text{rev}},\phi^{\text{old}}\}\) and \(\{\phi^{\text{new}},\phi^{\text{new}}\}\). cross-model contrastive learning loss as \[\mathcal{L}_{\text{CMCL}}(\mathbf{x}_{i},y_{i})= \tag{11}\] \[-\log\frac{\sum_{y_{k}=y_{i}}s_{ik}^{\text{old}}}{\sum_{y_{k}=y_{i }}s_{ik}^{\text{old}}+\sum_{y_{k}\neq y_{i}}s_{ik}^{\text{old}}+\sum_{y_{k} \neq y_{i}}s_{ik}^{\text{new}}}\] \[-\log\frac{\sum_{y_{k}=y_{i}}s_{ik}^{\text{new}}}{\sum_{y_{k}=y_{ i}}s_{ik}^{\text{new}}+\sum_{y_{k}\neq y_{i}}s_{ik}^{\text{new}}+\sum_{y_{k} \neq y_{i}}s_{ik}^{\text{old}}},\] where \(s_{ij}^{\text{new}}=\exp(-\text{dist}\big{(}\phi^{\text{new}}(\mathbf{x}_{i}), \phi^{\text{new}}(\mathbf{x}_{j})\big{)})\) and \(s_{ij}^{\text{old}}=\exp(-\text{dist}\big{(}\phi^{\text{new}}(\mathbf{x}_{i}), \phi^{\text{old}}(\mathbf{x}_{j})\big{)})\). Figure 5 illustrates the concept of the loss function. The positive pairs from the backward retrieval system \(\{\phi^{\text{rev}},\phi^{\text{old}}\}\) are trained to locate closer to the anchor than not only the negative pairs from the same system but also the ones from the new system \(\{\phi^{\text{new}},\phi^{\text{new}}\}\), and vice versa. We finally replace (9) with (11) for training the reverse transformation module. Compared to (10), additional heterogeneous negative terms in the denominator of (11) play a role as a regularizer to make the distances from one model directly comparable to those from other one, which is desirable for our rank merge strategy. 
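A sketch of the cross-model contrastive loss in Eq. (11), written for a labeled batch. The hard-example mining of footnote 2 is omitted and self-pairs are kept for brevity, so this illustrates the objective rather than the exact training recipe.

```python
import torch

def cmcl_loss(z_rev, z_old, z_new, labels):
    """z_rev: psi(phi_new(x)); z_old: phi_old(x); z_new: phi_new(x)."""
    s_old = torch.exp(-torch.cdist(z_rev, z_old))  # backward-system similarities
    s_new = torch.exp(-torch.cdist(z_new, z_new))  # new-system similarities
    pos = (labels[:, None] == labels[None, :]).float()
    neg = 1.0 - pos
    pos_old, neg_old = (s_old * pos).sum(1), (s_old * neg).sum(1)
    pos_new, neg_new = (s_new * pos).sum(1), (s_new * neg).sum(1)
    # The heterogeneous negatives in each denominator couple the two
    # distance scales, which is the regularizing effect described above.
    loss = -torch.log(pos_old / (pos_old + neg_old + neg_new)) \
           - torch.log(pos_new / (pos_new + neg_new + neg_old))
    return loss.mean()
```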
### Training New Feature Embedding Until now, we do not jointly train the reverse transformation module \(\psi(\cdot)\) and the new feature extraction module \(\phi^{\text{new}}(\cdot)\) as illustrated in Figure 3. This hampers the compatibility between the backward and new retrieval systems because the backward retrieval system \(\{\phi^{\text{rev}},\phi^{\text{old}}\}\) is the only part to be optimized while the new system \(\{\phi^{\text{new}},\phi^{\text{new}}\}\) is fixed. To provide more flexibility, we add another transformation module \(\rho(\cdot)\) on top of the new model as shown in Figure 6, where \(\rho^{\text{new}}=\rho(\phi^{\text{new}})\) and \(\rho^{\text{rev}}=\psi(\rho(\phi^{\text{new}}))\). In this setting, we use \(\rho^{\text{new}}\) as the final new model instead of \(\phi^{\text{new}}\), and our rank merge process employs \(\{\rho^{\text{rev}},\phi^{\text{old}}\}\) and \(\{\rho^{\text{new}},\rho^{\text{new}}\}\) eventually. This strategy helps to achieve a better compatibility by allowing both systems to be trainable. The final loss function to train the reverse transformation module has the identical form to \(\mathcal{L}_{\text{CMCL}}\) in (11) except for the definitions of \(s_{ij}^{\text{new}}\) and \(s_{ij}^{\text{old}}\), which are given by \[s_{ij}^{\text{new}} =\exp\left(-\text{dist}\left(\rho^{\text{new}}(\mathbf{x}_{i}), \rho^{\text{new}}(\mathbf{x}_{j})\right)\right) \tag{12}\] \[s_{ij}^{\text{old}} =\exp\left(-\text{dist}\left(\rho^{\text{rev}}(\mathbf{x}_{i}), \phi^{\text{old}}(\mathbf{x}_{j})\right)\right). \tag{13}\] Note that this extension does not result in computational overhead at inference stage but yet improves the performance even further. ## 6 Experiments We present our experiment setting, the performance of the proposed approach, and results from the analysis of algorithm characteristics. ### Dataset and Evaluation Protocol We employ four standard benchmarks, which includes ImageNet-1K [18], CIFAR-100 [9], Places-365 [28], Market-1501 [27]. As in previous works [17, 19], we adopt the extended-class setting in model upgrade; the old model is trained with examples from a half of all classes while the new model is trained with all samples. For example, on the ImageNet-1K dataset, the old model is trained with the first 500 classes and the new model is trained with the whole 1,000 classes. Following the previous works [17, 20, 25], we measure mean average precision (mAP) and cumulative matching characteristics (CMC)3. We also report our comprehensive results in terms of AUC\({}_{\text{mAP}}\) and AUC\({}_{\text{CMC}}\) at 10 backfill time slices, _i.e._, \(t\in\{0.0,0.1,...,1.0\}\) in (5). Footnote 3: CMC corresponds to top-\(k\) accuracy, and we report top-1 accuracy in all tables and graphs. ### Implementation Details We employ ResNet-18 [6], ResNet-50 [6], and ViT-B/32 [3] as our backbone architectures for either old or new Figure 5: Illustration of cross-model contrastive learning loss with backward retrieval system \(\{\phi^{\text{old}},\phi^{\text{rev}}\}\) and new retrieval system \(\{\phi^{\text{new}},\phi^{\text{new}}\}\). Two boxes with dotted lines corresponds to two terms in (11). For each retrieval system, the distances between positive pairs are learned to be both smaller than those of negative pairs in the two retrieval systems. Figure 6: Compatible training with learnable new embedding. 
Compared to Figure 3, another transformation module \(\rho(\cdot)\) is incorporated on top of the new model to learn new embedding favorable to our rank merging. The retrieval results are now merged from \(\{\rho^{\text{rev}},\phi^{\text{old}}\}\) and \(\{\rho^{\text{new}},\rho^{\text{new}}\}\). models. All transformation modules, \(\psi(\cdot)\) and \(\rho(\cdot)\), consist of 1 to 5 linear layer blocks, where each block is composed of a sequence of operations, (Linear \(\rightarrow\) BatchNorm \(\rightarrow\) ReLU), except for the last block that only has a Linear layer. Our algorithm does not use any side-information. Our modules are trained with the Adam optimizer [8] for 50 epoch, where the learning rate is \(1\times 10^{-4}\) at the beginning and decayed using cosine annealing [12]. Our frameworks are implemented with the Pytorch [16] library and we plan to release the source codes of our work. ### Results Homogeneous model upgradeWe present the quantitative results in the homogeneous model upgrade scenario, where old and new models have the same architecture. We employ ResNet-50 for ImageNet and ResNet-18 for other datasets. Table 1 and Figure 7 compare the proposed framework, referred to as RM (Rank Merge), with existing compatible learning approaches, including BCT [19], FCT [17], and BiCT [20]. As shown in the table, RM consistently outperforms all the existing compatible learning methods by remarkably significant margins in all datasets. BCT [19] learns backward compatible feature representations, which is backfill-free, but its performance gain is not impressive. \begin{table} \begin{tabular}{c c|c|c|c c|c|c c|c|c|c} \hline \hline & \multicolumn{3}{c|}{ImageNet-1K} & \multicolumn{3}{c|}{CIFAR-100} & \multicolumn{3}{c|}{Places-365} & \multicolumn{3}{c}{Market-1501} \\ & AUC\({}_{\text{mAP}}\) & AUC\({}_{\text{CMC}}\) & Gain & AUC\({}_{\text{mAP}}\) & AUC\({}_{\text{CMC}}\) & Gain & AUC\({}_{\text{mAP}}\) & AUC\({}_{\text{CMC}}\) & Gain & AUC\({}_{\text{mAP}}\) & AUC\({}_{\text{CMC}}\) & Gain \\ \hline Old & 31.2 & 49.7 & 0\% & 21.6 & 34.3 & 0\% & 16.5 & 30.7 & 0\% & 62.7 & 82.7 & 0\% \\ New & 51.3 & 70.3 & 100\% & 47.4 & 62.6 & 100\% & 23.4 & 39.1 & 100\% & 77.3 & 90.9 & 100\% \\ \(\text{RM}_{\text{active}}\) (Ours) & 40.0 & 63.9 & 44\% & 30.8 & 49.1 & 36\% & 19.5 & 35.8 & 43\% & 69.2 & 87.0 & 45\% \\ \hline BCT [19] & 32.0 & 46.3 & 4\% & 26.4 & 43.5 & 19\% & 17.5 & 37.0 & 14\% & 66.6 & 84.3 & 27\% \\ FCT [17] & 36.9 & 58.7 & 28\% & 27.1 & 49.4 & 21\% & 22.5 & 37.3 & 87\% & 66.4 & 84.2 & 25\% \\ FCT (w/ side-info) [17] & 43.6 & 65.0 & 62\% & 37.0 & 53.9 & 60\% & 23.7 & 38.3 & 104\% & 66.4 & 84.4 & 25\% \\ Bict [20] & 35.1 & 59.7 & 19\% & 29.0 & 48.3 & 29\% & 19.0 & 34.9 & 36\% & 65.0 & 82.4 & 16\% \\ **RM (Ours)** & **53.4** & **68.1** & **110\%** & **41.4** & **60.7** & **78\%** & **28.2** & **41.7** & **170\%** & **70.7** & **87.6** & **55\%** \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison with existing compatible learning methods on four standard benchmarks in homogeneous model upgrades. _Gain_ denotes relative gain that each method achieves from old model in terms of AUC\({}_{\text{mAP}}\), compared to the gain of new model. The proposed framework, dubbed as RM, consistently outperforms all other models with significantly large margins for all datasets. Note that RM\({}_{\text{active}}\) indicates the basic version of distance rank merge described in Sec. 3.2 and that _Old_ and _New_ denote embedding models of gallery images. 
Figure 7: mAP and CMC (Top-1 Acc.) results of our full framework in comparison to existing approaches. The numbers in the legend indicate either AUC\({}_{\text{mAP}}\) or AUC\({}_{\text{CMC}}\) scores. FCT [17] achieves meaningful performance improvement by transforming old gallery features, but most of the gains come from side-information [2]. For example, if side-information is not available, the performance gain of FCT drops from 62% to 28% on the ImageNet dataset. Also, such side-information is not useful for the re-identification dataset, Market-1501, mainly because the model for the side-information is trained for image classification using the ImageNet dataset, which shows its limited generalizability. On the other hand, although BiCT [20] takes advantage of online backfilling with less backfilling cost, it suffers from degraded final performance and negative flips in the middle of backfilling. Note that \(\text{RM}_{\text{naive}}\), our naive rank merging between old and new models, is already competitive to other approaches. Heterogeneous model upgradeWe evaluate our framework in more challenging scenarios and present the results in Figure 8, where the old and new models have different architectures, _e.g_., ResNet-18 \(\rightarrow\) ResNet-50 or ResNet-18 \(\rightarrow\) ViT-B/32. In this figure, \(\text{RM}_{\text{RQT}}\) (green line) denotes our ablative model trained with (9). Even in this setting, where both embedding spaces are more incompatible, our rank merge results from the old and new models still manage to achieve a monotonous performance growth curve and RM improves the overall performance significantly further, which validates the robustness of our frameworks. Ablation studyWe analyze the results from the ablations of models for our cross-model contrastive learning. For compatible training, CL-S employs contrastive learning within the backward system only as in (10) while our CMCL considers distance metrics from both backward and new retrieval systems simultaneously as in (11). For a more thorough ablation study, we also design and test another metric learning objective, called CL-M, which is given by \[\mathcal{L}_{\text{CL-M}}(\mathbf{x}_{i},y_{i})= -\log\frac{\sum_{y_{k}=y_{i}}s_{ik}^{\text{old}}}{\sum_{y_{k}=y_{ i}}s_{ik}^{\text{old}}+\sum_{y_{k}\neq y_{i}}s_{ik}^{\text{old}}}\] \[-\log\frac{\sum_{y_{k}=y_{i}}s_{ik}^{\text{new}}}{\sum_{y_{k}=y_{ i}}s_{ik}^{\text{new}}+\sum_{y_{k}\neq y_{i}}s_{ik}^{\text{new}}}, \tag{14}\] which conducts contrastive learning for both backward and new retrieval systems separately. Figure 9 visualizes the results from the ablation studies, where CMCL consistently outperforms both CL-S and CL-M in various datasets and architectures. CL-M generally gives better merge results than CL-S because it calibrates the distances of new retrieval system additionally. However, CL-M still suffers from negative flips because the distance metrics of both retrieval systems are calibrated independently and not learned to be directly comparable to each other. On the other hand, CMCL improves overall performance curves consistently without negative flips. This validates that considering the distance metrics of both systems simultaneously helps to achieve better metric compatibility and consequently stronger merge results. ## 7 Conclusion We presented a novel compatible training framework for effective and efficient online backfilling. We first addressed Figure 8: Experimental results with heterogeneous model upgrades. 
the inherent trade-off between compatibility and discriminability, and proposed a practical alternative, online backfilling, to handle this dilemma. Our distance rank merge framework elegantly sidesteps this issue by bridging the gap between old and new models, and our metric-compatible learning further enhances the merge results with distance calibration. Our framework was validated via extensive experiments showing significant improvements. We believe our work will provide a fundamental and practical foundation for promoting new directions in this line of research.
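As an illustration of the distance rank merge idea validated above, the following sketch shows one plausible reading of the naive merge \(\text{RM}_{\text{naive}}\): gallery items not yet backfilled are scored in the old embedding space via a backward-compatible query feature, already-backfilled items are scored in the new space, and the union of (assumed directly comparable) distances yields a single ranking. Function and variable names here are ours, not the paper's:

```python
import numpy as np

def rank_merge(q_feat_bwd, q_feat_new, gal_old, gal_new, k=10):
    """Naive distance rank merge (a sketch).
    q_feat_bwd: query embedded by the backward-compatible map (e.g., psi(phi_new(x)))
    q_feat_new: query embedded by the new model
    gal_old:    (N_old, d) old-model features of not-yet-backfilled gallery items
    gal_new:    (N_new, d) new-model features of backfilled gallery items
    """
    d_old = np.linalg.norm(gal_old - q_feat_bwd, axis=1)   # backward system distances
    d_new = np.linalg.norm(gal_new - q_feat_new, axis=1)   # new system distances
    ids = np.concatenate([np.arange(len(d_old)),
                          len(d_old) + np.arange(len(d_new))])
    dists = np.concatenate([d_old, d_new])
    order = np.argsort(dists)                              # merge by calibrated distance
    return ids[order[:k]]
```

Cross-model contrastive learning (CMCL) can be seen as training the embeddings so that the concatenation step above is meaningful, i.e., so that `d_old` and `d_new` live on one common scale.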
2310.04991
Video-Teller: Enhancing Cross-Modal Generation with Fusion and Decoupling
This paper proposes Video-Teller, a video-language foundation model that leverages multi-modal fusion and fine-grained modality alignment to significantly enhance the video-to-text generation task. Video-Teller boosts the training efficiency by utilizing frozen pretrained vision and language modules. It capitalizes on the robust linguistic capabilities of large language models, enabling the generation of both concise and elaborate video descriptions. To effectively integrate visual and auditory information, Video-Teller builds upon the image-based BLIP-2 model and introduces a cascaded Q-Former which fuses information across frames and ASR texts. To better guide video summarization, we introduce a fine-grained modality alignment objective, where the cascaded Q-Former's output embedding is trained to align with the caption/summary embedding created by a pretrained text auto-encoder. Experimental results demonstrate the efficacy of our proposed video-language foundation model in accurately comprehending videos and generating coherent and precise language descriptions. It is worth noting that the fine-grained alignment enhances the model's capabilities (4% improvement of CIDEr score on MSR-VTT) with only 13% extra parameters in training and zero additional cost in inference.
Haogeng Liu, Qihang Fan, Tingkai Liu, Linjie Yang, Yunzhe Tao, Huaibo Huang, Ran He, Hongxia Yang
2023-10-08T03:35:27Z
http://arxiv.org/abs/2310.04991v3
# Video-Teller: Enhancing Cross-Modal Generation with Fusion and Decoupling ###### Abstract This paper proposes Video-Teller, a video-language foundation model that leverages multi-modal fusion and fine-grained modality alignment to significantly enhance the video-to-text generation task. Video-Teller boosts the training efficiency by utilizing frozen pretrained vision and language modules. It capitalizes on the robust linguistic capabilities of large language models, enabling the generation of both concise and elaborate video descriptions. To effectively integrate visual and auditory information, Video-Teller builds upon the image-based BLIP-2 model and introduces a _cascaded Q-Former_ which fuses information across frames and ASR texts. To better guide video summarization, we introduce a fine-grained modality alignment objective, where the cascaded Q-Former's output embedding is trained to align with the caption/summary embedding created by a pretrained text auto-encoder. Experimental results demonstrate the efficacy of our proposed video-language foundation model in accurately comprehending videos and generating coherent and precise language descriptions. It is worth noting that the fine-grained alignment enhances the model's capabilities (\(4\%\) improvement of CIDEr score on MSR-VTT) with only \(13\%\) extra parameters in training and zero additional cost in inference. ## 1 Introduction Large language models (LLMs) have made significant advancements (OpenAI, 2023; Chowdhery et al., 2022; Bai et al., 2022), and have subsequently been extensively utilized in multimodal tasks such as image-to-text generation and video-to-text generation (Zhang et al., 2023; Xu et al., 2023; Huang et al., 2023; Alayrac et al., 2022; Wang et al., 2022), giving rise to a new class of models called multimodal large language models (MLLMs). Prior to LLMs, video understanding models have been limited by the complexity of the generated textual descriptions (Yan et al., 2023), and downstream video-to-text tasks were constrained to short-form generations such as single-sentence video captioning. With the incorporation of large language models, models such as Video-LLaMA, VideoChat and Video-ChatGPT (Zhang et al., 2023; Li et al., 2023; Maaz et al., 2023) are now capable of not only generating longer and more nuanced video digests but also engaging in conversations grounded in video content. To leverage the power of pretrained LLMs such as LLaMA 2 (Touvron et al., 2023) and Vicuna (Chiang et al., 2023) without incurring the prohibitive cost of retraining LLMs, MLLMs such as BLIP-2 (Li et al., 2023) have been proposed to integrate a trainable light-weight visual backbone with a frozen LLM via adaptor-like mechanisms (such as the Q-Former proposed in BLIP-2). The expansion of BLIP-2 into the realm of video has quickly given rise to models such as Video-LLaMA (Zhang et al., 2023), which incorporates both visual and auditory information by prompting a frozen LLM with embeddings computed by two corresponding encoders. Relying on LLMs for modality integration, however, increases the computational cost during inference. Additionally, prior knowledge embedded in very large language models may negatively bias the generated video descriptions, leading to hallucination. Consequently, the issue of enhancing the accuracy of text generation in visual-language models while reducing the computational expense incurred has become pressing.
In order to maintain the efficiency of model training while reducing computational overhead during inference, we propose the cascaded Q-Former, which fuses multi-frame visual information with auditory information prior to prompting the LLM, effectively reducing the computational overhead of the LLM by half. Additionally, in contrast to Video-LLaMA's direct usage of raw audio, we leverage ASR information from videos as the representation for the audio modality to further enhance the model's comprehension capabilities. Due to the incorporation of additional modal information, directly decoupling the crucial content using methods similar to BLIP-2 becomes more difficult. Therefore, to enhance the precision of the content generated by the model and alleviate the issue of hallucination, we propose fine-grained modality alignment as an auxiliary training approach. Figure 1 shows a high-level overview of the key concepts of Video-Teller. We evaluated Video-Teller on both video-to-text generation and text-to-video retrieval tasks. Specifically, in video captioning, our approach achieves better results than existing methods, such as HiTeA (Ye et al., 2022), with a smaller amount of data, thus confirming the effectiveness of fine-grained alignment and the integration of ASR. Additionally, in the task of long-text generation (video summarization), we obtained better performance (measured by BLEURT (Sellam et al., 2020)) than baseline models with larger frozen LLMs such as Video-LLaMA and VideoChat (Zhang et al., 2023; Li et al., 2023). Overall, the main contributions of this paper are as follows. * We propose Video-Teller, a video-language foundation model that integrates both visual and speech information. Video-Teller reduces the computational cost of modality fusion by incorporating visual and ASR information through a cascaded Q-Former before the LLM. * We enhance video-language learning by employing a text auto-encoder with the LLM as the decoder to decouple textual features, enabling fine-grained alignment of video-text representations in an unsupervised manner. This approach improves the fusion of cross-modal information and thus boosts the model's generation capability. * In addition to providing demonstrations of Video-Teller output, we quantitatively compare our proposed method with two representative MLLMs, Video-LLaMA (Zhang et al., 2023) and VideoChat (Li et al., 2023). ## 2 Related work The pursuit of foundation models that integrate and understand multiple modalities like vision and text has received enormous impetus from the research community in recent years. Previously, foundation models were largely end-to-end trainable models with architectures like dual-encoders (Jia et al., 2021; Li* et al., 2022; Zeng et al., 2022; Bao et al., 2022), fusion-encoders (Li et al., 2019; Yang et al., 2022; Chen et al., 2020; Su et al., 2020; Li et al., 2020; Lu et al., 2019; Tan & Bansal, 2019), and encoder-decoders (Yu et al., 2022; Yan et al., 2023). However, these foundation models typically require fine-tuning of the entire model during adaptation, resulting in significant computational expenses. Figure 1: Overview of the proposed method. Here we show detailed description generation (long-form text).
The advent of BLIP-2 (Li et al., 2023a) changed the multimodal landscape by introducing a lightweight adaptor module, the Q-Former, which utilizes learnable queries to facilitate alignment of multiple modalities, reducing the need for fine-tuning the pre-trained language/visual models. Adaptor-like modules have recently been remarkably successful in allowing multimodal researchers to tap into the tremendous natural-language powers of large language models, with new models emerging every week, such as InstructBLIP, VideoChat, Video-LLaMA and Mini-GPT4 (Dai et al., 2023; Li et al., 2023c; Zhang et al., 2023b; Zhu et al., 2023). While adaptor modules like the Q-Former can aid in merging input modalities, explicit alignment between modalities has traditionally been done via a contrastive loss (He et al., 2020) between a single textual [CLS] token and the other modalities. Specifically, previous models either align the [CLS] tokens of text features and visual features for sample-level modal alignment (Jia et al., 2021; Radford et al., 2021; Yu et al., 2022), or align [CLS] tokens with the rest of the tokens of the same modality using momentum (Yang et al., 2022). Such approaches enable modal alignment from a global perspective (via a single text token), but overlook local information. In contrast, few foundation models have explored fine-grained alignment between tokens across modalities during pre-training, which we argue is crucial for detailed understanding of complex input data such as videos. (Li et al., 2022) propose LOUPE, which learns fine-grained semantic alignment from the novel perspective of game-theoretic interactions, while (Shukor et al., 2022) leverage a hierarchical cross-modal alignment loss for fine-grained modality alignment. These methods are highly effective, but they also tend to be intricate. In this paper, we therefore propose to use a text auto-encoder to decouple the target text and use the decoupled features to align with the video's hidden states. ## 3 Method ### Preliminaries: BLIP-2 BLIP-2 (Li et al., 2023a) is an image-text model designed to optimize training efficiency. It achieves this by utilizing pre-trained image encoders and frozen large language models. To address the modality gap, BLIP-2 introduces a lightweight Querying Transformer called the Q-Former. The Q-Former consists of learned queries and a transformer module with cross-attention. The input image initially undergoes the frozen Vision Transformer (ViT) to obtain an image patch token sequence. Subsequently, the learned queries interact with the image tokens through cross-attention. This process allows the input image to be encoded into a fixed-length sequence (32 tokens, as demonstrated in the paper). These tokens are then projected and fed into the large language model to generate the corresponding text description of the image. Despite having \(54\times\) fewer trainable parameters, BLIP-2 outperforms Flamingo80B (Alayrac et al., 2022). However, it should be noted that BLIP-2 is specifically designed to handle single-image inputs and cannot be directly applied to video-based applications. ### Model Architecture of Video-Teller As illustrated in Figure 2, Video-Teller is composed of two primary components. The first component is a video foundation model, which takes frames and ASR texts as input and incorporates an LLM as the language decoder. The second component is a text auto-encoder, which shares a similar structure with the video foundation model and also employs the same LLM as the language decoder.
It is important to highlight that the text auto-encoder is exclusively utilized during the training phase and does not incur any additional computational cost during inference. #### 3.2.1 Video-Teller The extension of BLIP-2 to process video input can be tackled via multiple approaches. One approach is to encode each frame individually via the image-based BLIP-2 model and prompt the LLM directly using the concatenated frame-level embeddings. This approach, although straightforward and powerful, incurs considerable computational overhead, as the input sequence length of the frozen LLM is now multiplied by the number of frames sampled. Additionally, this approach relies entirely on the LLM to perform modality integration and alignment. To address the aforementioned issue, we propose a cascaded Q-Former approach for integrating information from different video frames and texts generated by ASR. Let \(\mathbf{V}\in\mathbb{R}^{F\times C\times H\times W}\) denote the input video frames, where \(F\), \(C\), \(H\), and \(W\) represent the number of frames, image channels, image height, and image width, respectively. We utilize the vision encoder and Q-Former from BLIP-2 to individually extract the representation \(\mathbf{R}_{f}\in\mathbb{R}^{Q_{i}\times E_{i}}\) for each frame, where \(Q_{i}\) and \(E_{i}\) indicate the number and size of query tokens in the image Q-Former. The aggregated visual features are obtained by concatenating all the image tokens from the Q-Former and are denoted as \(\mathbf{R}_{vision}\in\mathbb{R}^{FQ_{i}\times E_{i}}\). For ASR text, we first use the encoder-only BERT (Devlin et al., 2019) to process it and obtain the encoded text features. We use the last hidden states of the text features, \(\mathbf{R}_{ASR}\in\mathbb{R}^{L\times E_{i}}\) (where \(L\) is the number of ASR tokens), as the ASR tokens to be combined with the visual features. In order for the combined ASR and visual features to be consumed by the LLM, we need to further downscale the dimension of the combined features, since the total number of tokens is too large to be directly handled by the LLM. Towards this end, we employ another transformer to reduce the number of tokens and to fuse the information from the ASR and visual features. We name this added transformer the cascaded Q-Former; it adopts the same BERT structure as the original Q-Former, with a fixed number of query tokens producing a fixed-length sequence of result tokens. We concatenate the ASR tokens \(\mathbf{R}_{ASR}\) with the visual tokens \(\mathbf{R}_{vision}\) as input to the cascaded Q-Former. Finally, we obtain the representation of the whole video, \(\mathbf{R}_{video}\in\mathbb{R}^{Q_{v}\times E_{v}}\), where \(Q_{v}\) and \(E_{v}\) denote the query number and embedding dimension of the cascaded Q-Former. Here we manually split \(\mathbf{R}_{video}\) into two components: the first component includes the first token of \(\mathbf{R}_{video}\), which is used for video-text contrastive learning, and the second component contains the remaining 32 tokens, which are used for fine-grained modality alignment and video-grounded text reconstruction. Figure 2: Overall architecture of the proposed model. The model consists of two primary branches. On the right-hand side, we have the text auto-encoder responsible for encoding the target text into a fixed-length representation denoted as \(\mathbf{R}_{text}\). Conversely, on the left-hand side, we have the video module, which encodes the input video (comprising frames and ASR) into a video representation that shares the same shape as the text representation. Both of these representations are trained to reconstruct the target text by utilizing the LLM, while they are directly aligned through the Mean Squared Error (MSE) loss.
#### 3.2.2 Text Auto-encoder We present a novel text auto-encoder that deviates from conventional models. Our approach leverages a frozen large language model as the decoder, while employing BERT and a text Q-Former as the encoder to encode the input text into a fixed-length representation (32 \(\times\) 768). Our objective is to find the mapping between the input text and the soft prompt (fixed-length representation) that enables the LLM to recover the same input text. Our experiments show that, for one-sentence captions, it is sufficient to freeze the pretrained BERT module and only update the weights of the text Q-Former. However, multi-sentence summaries cannot be reconstructed with a frozen BERT encoder. As a result, in our subsequent experiments, the entire text encoder (including both BERT and the text Q-Former) is trainable. ### Fine-grained modality alignment As illustrated before, the text auto-encoder turns the input text into a fixed-length intermediate representation, which captures the crucial information for text reconstruction. For video-to-text tasks, we aim at generating text from an intermediate video representation. We therefore take the decoupled fixed-length intermediate representation from the text auto-encoder as an intermediate target for the video foundation model. This means we align the video with the corresponding text not only through video-grounded text reconstruction but also through hidden-feature consistency, namely fine-grained modality alignment. Our proposed method differs from previous work: instead of using game-theoretic interactions (Li et al., 2022) or a hierarchical cross-modal alignment loss (Shukor et al., 2022), we directly utilize the encoded tokens of our text auto-encoder as a learning target. ## 4 Experiment We evaluate our proposed method on several downstream tasks, including video captioning, video summarization and video retrieval. Figure 3: An example from Video-CSR. Here frames represent the video's vision information. ### Setup Datasets. We test our method's video understanding capability on MSR-VTT (Xu et al., 2016) and Video-CSR (Liu et al., 2023). MSR-VTT is a large-scale video benchmark for video understanding, especially for generating video captions. It covers 10K videos, each annotated with about 20 natural sentences. There are 6513 videos for training and 2990 videos for testing. It should be noted that, as MSR-VTT does not provide ASR, we use "none." as the input to our ASR branch. Video-CSR is a newly released large-scale video benchmark for video understanding, covering roughly 5000 videos ranging from 15 seconds to 1 minute, with each video annotated by humans. There are 5 captions and 5 summaries for each video. Summaries are long captions that include more details about the subject and activities in the video. In Video-CSR, the average length of captions is 12.71 and the average length of summaries is 62.93. We adopt Video-CSR since its videos contain rich ASR information, making it suitable for evaluating our framework with both visual and ASR input. The videos in this dataset can be divided into two parts: one with rich ASR information and the other with little ASR information. The ratio of videos with ample and limited ASR information is approximately 1 to 2. In cases where the video's ASR conveys little information, we use the text "none." as the ASR input.
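To make Secs. 3.2.1 and 3.3 above concrete, here is a heavily simplified sketch of the cascaded Q-Former fusion and the fine-grained alignment objective. A single cross-attention layer stands in for the 5-layer BERT-style module, and all names and dimensions are illustrative, not the authors' implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CascadedQFormerSketch(nn.Module):
    """Simplified stand-in for the cascaded Q-Former: 33 learned queries
    cross-attend over concatenated per-frame Q-Former tokens and ASR tokens."""
    def __init__(self, dim=768, n_queries=33, n_heads=12):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(n_queries, dim) * 0.02)
        self.cross_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)

    def forward(self, frame_tokens, asr_tokens):
        # frame_tokens: (B, F*Q_i, E); asr_tokens: (B, L, E)
        ctx = torch.cat([frame_tokens, asr_tokens], dim=1)
        q = self.queries.unsqueeze(0).expand(ctx.size(0), -1, -1)
        out, _ = self.cross_attn(q, ctx, ctx)   # (B, 33, E)
        # First token for video-text contrastive learning; the other 32 tokens
        # for fine-grained alignment and video-grounded text reconstruction.
        return out[:, :1], out[:, 1:]

def alignment_loss(video_tokens, text_tokens):
    """Fine-grained modality alignment: MSE between the 32 video tokens and the
    32 tokens produced by the text auto-encoder for the target caption/summary."""
    return F.mse_loss(video_tokens, text_tokens)
```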
For experiments on MSR-VTT, we use WebVid-2M (Bain et al., 2022) and CC3M (Sharma et al., 2018) (used as static video) for pre-training. For experiments on Video-CSR, we collect a pre-training dataset consisting of 100K YouTube videos, where each video has 5 captions and 5 summaries generated by GPT-3.5 from the videos' YouTube metadata, which covers the description, ASR, comments and so on. The generated captions contain the key information of the video, but may miss some essential visual information if it is not described by the video's metadata. An example of a video from Video-CSR is shown in Figure 3. Model Configurations. We construct our model directly from the pre-trained BLIP-2, leveraging its extensive prior knowledge of images. For the text auto-encoder, we use \(\mathbf{BERT_{base}}\) to process the raw input, and we then use the first five layers of \(\mathbf{BERT_{base}}\) as the text Q-Former. 33 learnable queries are used to project the text into the fixed-length representation with cross-attention, where the first query is for video-text contrastive learning. For the cascaded Q-Former, we construct it from the first 5 layers of the pre-trained \(\mathbf{BERT_{base}}\) with 33 learnable query tokens. Empirically, we find that using more layers in the cascaded Q-Former and text Q-Former deteriorates performance. For ASR, the weights of \(\mathbf{BERT}\) in its encoder are shared with the text auto-encoder. In total, there are about 307M trainable parameters, comprising the image Q-Former, video Q-Former, text encoder, text Q-Former and a few linear projection layers. We apply the pretrained opt-6.7b (Zhang et al., 2022) as our frozen LLM. For each video input, we sample 8 frames as the vision representation. Training and Evaluation. We adopt a three-stage training process for our model. In the first stage, we pretrain the text auto-encoder. We use 32 Tesla-V100 GPUs, with a batch size of 8 on each individual GPU, and conduct training for two epochs. In the second stage, we train the whole model on video captioning or video summarization using 64 Tesla-V100 GPUs, with a batch size of 8 for video captioning and 6 for video summarization on each individual GPU. We train the model for another 2 epochs. In the third stage, we further finetune the model with 32 Tesla-V100 GPUs for 3 epochs on MSR-VTT and 10 epochs on Video-CSR. \begin{table} \begin{tabular}{l|c|c c c c} \hline \hline Model & \#PT Data & B@4 & M & R & C \\ \hline HiTeA (Ye et al., 2022) & 17M & 49.2 & 30.7 & 65.0 & 65.1 \\ VideoCoCa (Yan et al., 2023) & 3B & 53.8 & - & 68.0 & 73.2 \\ GIT (Wang et al., 2022a) & 0.8B & 53.8 & 32.9 & 67.7 & 73.9 \\ GIT2 (Wang et al., 2022a) & 12.9B & **54.8** & 32.9 & **68.2** & **75.9** \\ \hline Video-Teller w/o A & 4.5M & 47.9 & 32.4 & 65.5 & 68.0 \\ Video-Teller & 4.5M & 49.2 & 33.0 & 66.4 & 72.0 \\ Video-Teller (SCST) & 4.5M & 49.4 & **33.4** & 67.0 & 74.5 \\ \hline \hline \end{tabular} \end{table} Table 1: Results for video captioning on MSR-VTT. w/o A means without fine-grained modality alignment. SCST means Self-Critical Sequence Training (Rennie et al., 2017). **Video-Teller achieves performance comparable to its counterparts while using much less pre-training data.** ### Video Captioning We conducted experiments on two datasets, MSR-VTT and Video-CSR. As Video-CSR is a newly released dataset, we implement baselines with VideoCoCa (Yan et al., 2023) by adding a similar ASR fusion module that allows it to extract information from both frames and ASR text.
We initialize VideoCoCa with CoCa pretrained on LAION-5B (Schuhmann et al., 2022). We name the VideoCoCa model with the added ASR fusion module VideoCoCa (ASR). Results on MSR-VTT can be found in Table 1 and results on Video-CSR can be found in Table 2. All results are reported as BLEU-4 (B@4), METEOR (M), CIDEr (C) and ROUGE-L (R). We also test the model with self-critical sequence training, a REINFORCE algorithm that directly optimizes the CIDEr metric (Rennie et al., 2017). These results demonstrate Video-Teller's strong video description capability despite using far fewer videos for pre-training than the other models. ### Video Summarization We evaluate the performance of video summarization on Video-CSR. This dataset covers 5000 videos. We randomly choose 1200 videos for testing, while the rest are used for fine-tuning the models. It is important to mention that the ratio of videos with ample and limited ASR information is approximately 1 to 2 in both the test set and the training set. We compare with four baseline models: VideoCoCa, VideoCoCa (ASR), Video-LLaMA (Zhang et al., 2023b), and VideoChat (Li et al., 2023b). Among them, VideoCoCa and Video-LLaMA use only visual input, while VideoCoCa (ASR) and VideoChat use both visual and ASR input. We also evaluate both zero-shot and finetuned performance. For the metrics, we choose BLEURT (Sellam et al., 2020) as the main metric. We also report results with CIDEr (C) and ROUGE-L (R). Results can be found in Table 3. After calculating the metrics, we randomly selected 20 generated sentences from different models and manually ranked each result to assess its consistency with the various indicators; we find that semantics-based evaluation metrics such as BLEURT (Sellam et al., 2020) are more suitable for long-text evaluation than metrics based on string matching. The results also indicate that Video-Teller achieves clear advantages in video summarization compared to the other models. \begin{table} \begin{tabular}{l|c|c c c c|c c c c} \hline \hline & & \multicolumn{4}{c|}{**Finetuned**} & \multicolumn{4}{c}{**Zero-Shot**} \\ Model & ASR & B@4 & M & R & C & B@4 & M & R & C \\ \hline VideoCoCa & No & 6.2 & 11.0 & 23.8 & 18.7 & 2.1 & 10.7 & 18.7 & 5.7 \\ VideoCoCa (ASR) & Yes & 7.1 & 11.9 & 25.0 & 22.1 & 2.8 & 11.4 & 19.7 & 9.1 \\ \hline Video-Teller w/o A & Yes & 7.2 & 12.7 & 26.3 & 21.9 & 3.5 & 12.2 & 22.2 & 13.9 \\ Video-Teller & Yes & **10.4** & **14.7** & **28.7** & **30.7** & **5.6** & **14.2** & **24.0** & **19.9** \\ \hline \hline \end{tabular} \end{table} Table 2: Results for video captioning on Video-CSR. w/o A means without fine-grained modality alignment. For Zero-Shot, both models are trained on 100K videos from the pre-training dataset. \begin{table} \begin{tabular}{l|c|c c c|c c c} \hline \hline & & \multicolumn{3}{c|}{**Finetuned**} & \multicolumn{3}{c}{**Zero-Shot**} \\ Model & \#PT Data & BLEURT & R & C & BLEURT & R & C \\ \hline VideoCoCa & 0.5M & 29.6 & 19.3 & 2.9 & 28.8 & 18.6 & 3.0 \\ VideoCoCa (ASR) & 0.5M & 36.8 & 22.4 & 9.5 & 31.0 & 20.1 & 8.1 \\ Video-LLaMA & - & - & - & - & 39.3 & 19.2 & 2.1 \\ VideoChat & - & - & - & - & 42.8 & 22.6 & 15.2 \\ \hline Video-Teller w/o A & 0.5M & 45.2 & 22.4 & 9.7 & 41.2 & 20.1 & 7.1 \\ Video-Teller & 0.5M & **47.1** & **23.5** & **11.2** & **43.3** & 21.3 & 9.0 \\ \hline \hline \end{tabular} \end{table} Table 3: Results for video summarization on Video-CSR. w/o A means without fine-grained modality alignment.
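For reference, the SCST variant reported in Table 1 follows the generic self-critical sequence training recipe of Rennie et al. (2017). A minimal sketch of the loss is given below; the caption sampler and the CIDEr reward function are assumed to exist elsewhere:

```python
import torch

def scst_loss(log_probs, sampled_reward, greedy_reward):
    """Self-critical sequence training (Rennie et al., 2017), schematically:
    REINFORCE with the greedy decode's reward as the baseline.
    log_probs:       (B,) summed log-probabilities of the sampled captions
    sampled_reward:  (B,) e.g. CIDEr of sampled captions vs. references
    greedy_reward:   (B,) CIDEr of greedy-decoded captions vs. references
    """
    advantage = sampled_reward - greedy_reward        # baseline-subtracted reward
    return -(advantage.detach() * log_probs).mean()   # maximize expected reward
```

Captions that beat the model's own greedy decode are reinforced; those that fall short are penalized, which directly optimizes the (non-differentiable) CIDEr metric.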
### Ablation Experiments While our model exhibits commendable performance on the text generation task, it remains to be verified to what extent the inclusion of ASR and fine-grained alignment genuinely enhances its performance. Consequently, we undertake ablation experiments and assess them on the Video-CSR and MSR-VTT datasets. Results on both datasets can be found in Table 4. As MSR-VTT does not provide ASR, we only test the influence of alignment and contrastive loss there. From the results, we can observe that on Video-CSR our model's performance declines when either ASR or fine-grained alignment is absent. This demonstrates the effectiveness of our approach on real-world scenario datasets. The results on MSR-VTT captioning also show that fine-grained alignment improves performance. ### Video Retrieval Results Though the model achieves strong results on the video generation task with fine-grained modality alignment, it still needs to be verified whether the method has an impact on retrieval accuracy. Our ablation experiments demonstrate that fine-grained modality alignment enhances the cross-modal generation capability of the model without affecting its retrieval accuracy. Results can be found in Table 5. The model is pre-trained with WebVid-2M (Bain et al., 2022) and CC3M (Sharma et al., 2018). The above results demonstrate that fine-grained alignment can enhance the generation capability of the model without affecting the video retrieval task. ## 5 Analysis As shown before, we find that Video-Teller, with limited video data for pre-training, achieves strong performance on both video summarization and video captioning. We now analyze the improvements that our proposed method brings in terms of hallucination in the generated descriptions. Like LLMs, Video-Teller is affected by hallucination: it tends to fill in incorrect information, especially when generating detailed descriptions. We evaluated the severity of different models' hallucination through manual assessment. Specifically, we randomly selected 50 generated results from the test set of Video-CSR and categorized them into three types: no hallucination, moderate \begin{table} \begin{tabular}{l|c c c c|c c c c} \hline \hline & \multicolumn{4}{c|}{**Finetuned**} & \multicolumn{4}{c}{**Zero-Shot**} \\ Model & B@4 & M & R & C & B@4 & M & R & C \\ \hline \hline \multicolumn{9}{l}{Results on Video-CSR} \\ \hline Video-Teller w/o ASR & 4.7 & 9.8 & 22.8 & 13.1 & 2.2 & 10.5 & 19.7 & 13.4 \\ Video-Teller w/o A & 7.2 & 12.7 & 26.3 & 21.9 & 3.5 & 12.2 & 22.2 & 13.9 \\ Video-Teller w/o C & 10.3 & 14.7 & 28.5 & 30.4 & 5.6 & 14.1 & 24.0 & 19.8 \\ Video-Teller & **10.4** & **14.7** & **28.7** & **30.7** & **5.6** & **14.2** & **24.0** & **19.9** \\ \hline \hline \multicolumn{9}{l}{Results on MSR-VTT} \\ \hline Video-Teller w/o A & 47.9 & 31.5 & 65.3 & 69.6 & 12.4 & 17.6 & 36.3 & 24.6 \\ Video-Teller w/o C & 48.4 & 32.9 & 65.7 & 70.9 & 13.4 & 18.7 & 38.5 & 25.4 \\ Video-Teller & **49.2** & **33.0** & **66.4** & **72.0** & **15.6** & **19.6** & **40.1** & **26.9** \\ \hline \hline \end{tabular} \end{table} Table 4: Results for video captioning. Here w/o A means without fine-grained alignment, w/o C means without contrastive learning, and w/o ASR means without ASR input.
\begin{table} \begin{tabular}{l|c|c c c|c c c} \hline \hline & & \multicolumn{3}{c|}{**Finetuned**} & \multicolumn{3}{c}{**Zero-Shot**} \\ Model & \#PT Data & R@1 & R@5 & R@10 & R@1 & R@5 & R@10 \\ \hline Video-Teller w/o A & 4.5M & 33.1 & 57.8 & 65.9 & 41.0 & 67.1 & 77.3 \\ Video-Teller & 4.5M & 33.5 & 57.5 & 66.1 & 40.7 & 67.5 & 77.7 \\ \hline \hline \end{tabular} \end{table} Table 5: Results for video retrieval on MSR-VTT, where w/o A means without fine-grained alignment. hallucination, and severe hallucination, based on the comparison between the generated content and the manually annotated content. The ratios for each model are shown in Table 7. We also provide the criteria for rating the different levels of hallucination in Table 6. Based on the findings presented in Table 7, it is evident that models utilizing LLMs face a more pronounced issue of hallucination. This can be attributed to the limited information provided by the visual encoder, forcing the LLM to rely heavily on imaginative processes to complete the description. In contrast, VideoCoCa, which does not employ an LLM, exhibits a relatively milder form of hallucination. This difference can be explained by VideoCoCa's tendency to generate shorter descriptions when faced with insufficient information, thereby reducing the generation of extraneous content. Conversely, the extensive prior knowledge of the LLM engenders the production of erroneous information. With our fine-grained alignment, Video-Teller is able to significantly reduce the rate of hallucination, with the no-hallucination rate increasing from 0.40 to 0.56 and the severe-hallucination rate diminishing from 0.34 to 0.20. This indicates that the fine-grained alignment enforces the encoded video tokens \(\mathbf{R}_{video}\) to be more relevant to the semantics of the target caption/summary and thus reduces hallucination. In Figure 4 in the appendix, we provide a demo showing that the model trained without fine-grained alignment suffers more from hallucination. As can be seen in Figure 4, this video belongs to the category with rich ASR (Automatic Speech Recognition) content. Therefore, in order to generate its summary, it is necessary to make good use of the ASR information. From the ASR information, we can infer that this video discusses the decline in clean energy prices, just as predicted by Video-Teller. However, we can observe that without alignment, the model's description includes specific price changes that cannot be extracted from the video. ## 6 Conclusion In this paper, we propose Video-Teller, a robust video-text foundation model that attains impressive performance on video-to-text tasks, encompassing both concise and comprehensive descriptions. Video-Teller leverages the rich speech information contained in videos to enhance the model's understanding of the video. Simultaneously, it utilizes pre-trained visual models and large language models to reduce training costs while maintaining impressive performance. Furthermore, we employ a standalone text auto-encoder to learn the proper intermediate language tokens that guide the \begin{table} \begin{tabular}{l|l} \hline \hline Hallucination level & Description \\ \hline no hallucination & The predicted summary delineates events that are entirely congruous with the actual video, albeit with potential omissions in its depiction.
\\ \hline moderate hallucination & The predicted summary portrays events that are largely congruent with the actual video, albeit with some minor deviations in certain details. \\ \hline severe hallucination & The predicted summary depicts events that are starkly divergent from the actual video. \\ \hline \hline \end{tabular} \end{table} Table 6: Criteria for rating hallucination. \begin{table} \begin{tabular}{l|c c c} \hline \hline Model & no hallucination & moderate hallucination & severe hallucination \\ \hline VideoCoCa (ASR) & 0.60 & 0.26 & 0.14 \\ Video-LLaMA & 0.26 & 0.40 & 0.34 \\ Video-Teller w/o A & 0.40 & 0.26 & 0.34 \\ Video-Teller & 0.56 & 0.24 & 0.20 \\ \hline \hline \end{tabular} \end{table} Table 7: Hallucination ratio of different models. w/o A means trained without fine-grained alignment. learning of the video foundation model, which boosts the decoupling of the fused multi-modality information. Extensive experimental results demonstrate the impressive performance of our approach with lightweight training, effectively reducing model hallucination (the no-hallucination rate rises from 40% to 56%) and significantly improving the accuracy of model descriptions (BLEURT score increased from 41.2 to 43.3).
2303.02200
No MSP Counterparts Detected in GBT Searches of Spider Candidates 4FGL J0935.3+0901, 4FGL J1627.7+3219, and 4FGL J2212.4+0708
We performed radio searches for the "spider" millisecond pulsar (MSP) candidates 4FGL J0935.3+0901, 4FGL J1627.7+3219, and 4FGL J2212.4+0708 using the Green Bank Telescope in an attempt to detect the proposed radio counterpart of the multi-wavelength variability seen in each system. We observed using the VEGAS spectrometer, centered predominantly at 2165 MHz; however, we were also granted observations at 820 MHz for 4FGL J1627.7+3219. We performed acceleration searches on each dataset using PRESTO as well as additional jerk searches of select observations. We see no evidence of a radio counterpart in any of the observations for each of the three systems at this time. Additional observations, perhaps at different orbital phases (e.g., inferior conjunction), may yield detections of an MSP in the future. Therefore, we urge continued monitoring of these systems to fully characterize the radio nature, however faint or variable, of each system.
Kyle A. Corcoran, Scott M. Ransom, Ryan S. Lynch
2023-03-03T20:18:24Z
http://arxiv.org/abs/2303.02200v1
# No MSP Counterparts Detected in GBT Searches of Spider Candidates ###### Abstract We performed radio searches for the "spider" millisecond pulsar (MSP) candidates 4FGL J0935.3+0901, 4FGL J1627.7+3219, and 4FGL J2212.4+0708 using the Green Bank Telescope in an attempt to detect the proposed radio counterpart of the multi-wavelength variability seen in each system. We observed using the VEGAS spectrometer, centered predominantly at 2165 MHz; however, we were also granted observations at 820 MHz for 4FGL J1627.7+3219. We performed acceleration searches on each dataset using PRESTO as well as additional jerk searches of select observations. We see no evidence of a radio counterpart in any of the observations for each of the three systems at this time. Additional observations, perhaps at different orbital phases (e.g., inferior conjunction), may yield detections of an MSP in the future. Therefore, we urge continued monitoring of these systems to fully characterize the radio nature, however faint or variable, of each system. Millisecond Pulsars (1062) -- Spider Pulsars ## Introduction Redback (RB) and black widow (BW) pulsars - deemed "spiders" - are compact binary systems containing a millisecond pulsar (MSP) and a low-mass companion (\(M\gtrsim 0.1M_{\odot}\) for RBs and \(M<0.1M_{\odot}\) for BWs; Roberts, 2013). These systems are unique among MSPs in that they can be identified through multi-wavelength observations spanning much of the electromagnetic spectrum. Significant \(\gamma\)-ray emission, X-ray photon detection or even light-curve modulation, and optical variability have all been used to study the nature of several spider systems as well as to identify candidate systems. One such candidate system was identified by Wang et al. (2020). The candidate RB (and transitional MSP) 4FGL J0935.3+0901 was identified as an unassociated source by _Fermi_ LAT (Large Area Telescope), and the system was found to exhibit \(\gamma\)-ray modulation. A possible X-ray source was also found in archival data from _Swift_ within the _Fermi_ uncertainty region for the object. Archival and follow-up optical observations suggest that the system potentially has an orbital period of \(\sim 2.5\,\mathrm{hr}\) as well as spectral features consistent with an accretion disk. Wang et al. (2020) suggested that follow-up observations be conducted to determine if the binary is in fact a transitional MSP in an accretion-driven state or, potentially, the rotationally-powered MSP state that would follow. Braglia et al. (2020) also reported two new spider candidates in a similar fashion. By searching for optical variations in archival survey data for systems that had possible associated X-ray counterparts within the _Fermi_ uncertainty region, they identified 4FGL J1627.7+3219 and 4FGL J2212.4+0708 as candidates. We performed radio searches for each of these candidates with the Green Bank Telescope (GBT) using the VEGAS spectrometer. In the section that follows, we outline the procedures we followed in analyzing each GBT observation. Afterwards, we discuss the implications of our null results and comment on the need for additional observations in the future. ## Observations and Search Procedures We used a 1500 MHz-bandwidth mode on VEGAS with coherent dedispersion, centered at 2165 MHz, to observe all systems. This mode allows us to cover roughly 1700-2700 MHz with an effective bandwidth of almost 1000 MHz when significant RFI is not present.
We were also granted observations at 820 MHz for 4FGL J1627.7+3219, which were taken in the incoherent "search" mode. The S-band data for each observation were combined together (i.e., had the frequency resolution reduced and were converted to total intensity) using the psrfits_subband routine, and channels containing prominent RFI - as found via PRESTO's rfifind routine - were removed from all observations. A summary of all observations can be found in Table 1. We performed acceleration searches on each dataset using standard routines in PRESTO. As not all of the few thousand candidates generated for each observation are necessarily sensible, we took the top 100 candidate parameter sets and folded the time-series data on each candidate. These preliminary folds allow us to visually inspect for additional RFI as well as candidates with no potential signals in the time series. We then folded the remaining candidates on the full observational dataset, which allows us to see if a candidate peaks in significance at a non-zero DM, and we once again visually inspected these plots to further vet each candidate. Legitimate detections - or even candidate detections that would need confirmation from an additional observation - would likely show a statistically significant (\(\gtrsim 5\sigma\)) signal throughout the whole observation, or at least some small fraction thereof. We did not find any candidates that we believe are pulsar signals. For binaries with short periods, and especially with shorter-duration scans, jerk searches can be performed to further search for pulsations. Similarly to how acceleration searches assume constant acceleration during an observation, jerk searches assume constant jerk with linearly varying acceleration (Andersen & Ransom, 2018). The short-period nature of 4FGL J0935.3+0901 and 4FGL J2212.4+0708 warrants such additional searches. We again used PRESTO and performed a jerk search with a wmax value of 50 for the first scan (MJD=59613) of 4FGL J0935.3+0901 and 200 for the second scan of 4FGL J0935.3+0901 and the scan of 4FGL J2212.4+0708. We visually inspected the preliminary folds of the time-series data for the top 100 candidates to reject RFI and null signals. After folding the remaining candidates on the full time-series data, we still found no candidates that we believe are pulsar signals. ## Future Outlook Although our observations do not produce a radio pulsation detection, they are by no means comprehensive. We obtained our observations in a manner agnostic to orbital phase. While detections of pulsations in spiders are not always limited to certain orbital phases, certain characteristics of the binary can limit our ability to detect pulsations. Radio eclipses from circumbinary material can mask pulsations, and for RBs this masking can even happen for 25-100% of the orbit. Observations most sensitive to pulsations would occur at inferior conjunction, where heating of the companion would be at a maximum and material that can obscure pulsations would be most transparent. A possible spinning neutron star counterpart can never truly be ruled out; however, Ray et al. (2013) showed that in some cases it takes a substantial effort to verify the presence of a spider MSP. Additional observations for each binary focused at inferior conjunction could provide more sensitive limits for a detection, though.
Continued monitoring may be necessary, especially in the case of 4FGL J0935.3+0901, to rule out the scenario that the binaries are transitional MSPs currently in a low-mass X-ray binary state. The population of spider pulsars is still relatively small, so characterizing the nature of these candidate systems is a worthwhile pursuit. \begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline \multicolumn{1}{c}{ 4FGL} & \(P_{\rm orb}\) & MJD & Obs. Length & Sample Time & DM\({}^{\dagger}\) & \(N_{\rm channels}\) & Central Freq. & Channel Width & Effective BW \\ & [d] & & [s] & [\(\mu\)s] & [pc cm\({}^{-3}\)] & & [MHz] & [MHz] & [MHz] \\ \hline J0935.3+0901 & 0.10292\({}^{\rm a}\) & 59613 & 2687 & 43.69 & 31.2 & 768 & 2165 & 1.4648 & 900 \\ & & 59614 & 907 & & & & & & 750 \\ \hline J1627.7+3219 & 0.49927\({}^{\rm b}\) & 59619 & 2100 & 43.69 & 0.715 & 768 & 2165 & 1.4648 & 850 \\ & & 59620 & 2452 & 40.96 & & 2048 & 820 & 0.09765 & 135 \\ \hline J2212.4+0708 & 0.31884\({}^{\rm b}\) & 59613 & 1380 & 43.69 & 41.1 & 768 & 2165 & 1.4648 & 625 \\ \hline \end{tabular} \end{table} Table 1: Summary of our observations of the three spider candidates. ## Acknowledgments The Green Bank Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. GBT observations were taken under project AGBT22A-355. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. KAC is supported by JWST-GO-02204.002-A. Support for program #2204 was provided by NASA through a grant from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-03127. SMR is a CIFAR Fellow and is supported by the NSF Physics Frontiers Center award 2020265. Facilities: Robert C. Byrd Green Bank Telescope (GBT). Software: PRESTO (Ransom, 2011), psrfits_utils.
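For readers unfamiliar with PRESTO, the cleaning-and-search flow described above can be scripted roughly as follows. File names and parameter values here are illustrative only, and a full search would additionally loop over a grid of trial DMs (e.g., with prepsubband) and fold the surviving candidates with prepfold for visual inspection:

```python
import subprocess

def run(cmd):
    """Echo and execute one pipeline step, stopping on failure."""
    print(" ".join(cmd))
    subprocess.run(cmd, check=True)

obs, base, dm = "J0935_sband.fits", "J0935", "31.2"   # hypothetical names/values

run(["rfifind", "-time", "2.0", "-o", base, obs])              # build an RFI mask
run(["prepdata", "-dm", dm, "-mask", f"{base}_rfifind.mask",
     "-o", f"{base}_DM{dm}", obs])                              # dedisperse to a time series
run(["realfft", f"{base}_DM{dm}.dat"])                          # FFT the time series
run(["accelsearch", "-zmax", "200", "-wmax", "200",
     f"{base}_DM{dm}.fft"])                                     # acceleration + jerk search
# Surviving candidates would then be folded (prepfold with -accelcand/-accelfile)
# and vetted by eye, as described above.
```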
2310.02338
Is a Quantum Gravity Era Necessary?
We present the first published framework of the entirety of cosmological history which is thoroughly classical (without any quantum-gravitational era or singularities) and which passes all the known extensive consistency checks on such a model, and discuss some of its possible cosmological implications, such as its ability to account for the matter-antimatter asymmetry, dark flow, and the Hubble tension, albeit at the cost of further assumptions.
Bogdan Veklych
2023-10-03T18:21:27Z
http://arxiv.org/abs/2310.02338v2
# Is a Quantum Gravity Era Necessary? ###### Abstract We present the first published framework of the entirety of cosmological history which is thoroughly classical (without any quantum-gravitational era or singularities) and which passes all the known extensive consistency checks on such a model, and discuss some of its possible cosmological implications, such as its ability to account for the matter-antimatter asymmetry, dark flow, and the Hubble tension, albeit at the cost of further assumptions. Author affiliation: University of Wisconsin-Madison; [email protected], [email protected]; orcid.org/0000-0003-1897-7998 Keywords: cosmological evolution, past-eternal cosmology, dark matter, Hubble tension ## 1 Introduction As is well known, the current established cosmological knowledge of the history of the Universe extends only 13.8 billion years into the past, to a hot, dense, rapidly expanding phase nicknamed the Big Bang, which in turn is widely conjectured to have been produced by the decay of an inflationary field; beyond that, it is not known with any certainty what preceded it or how it came about. While it is easy to come up with speculative quantum-gravitational ideas for the ultimate past of the Universe (such as, e.g., the origin of Universes by budding from "quantum foam" in an eternal empty progenitor Universe, or, as another example, an empty 2+1-dimensional compactified Milne past-eternal contracting phase, stable due to the rigidity of the 2+1-dimensional vacuum, which then changes dimensionality at the bounce via some hypothetical quantum-gravitational mechanism), there has apparently never been an example in the literature of a thoroughly classical, consistent cosmological model of the full prehistory of cosmic inflation, with no singularities, speculative quantum-gravitational epochs, or other hand-waved, unclear "gaps in history". This is, however, not surprising, as there are numerous very tight consistency constraints and no-go results on such a model [1][2][3][4][5][6], and the purpose of the present article is to fill this theoretical gap by introducing a framework with such desirable features that satisfies all these consistency constraints.
2308.00765
Tunnelling-induced cosmic bounce in the presence of anisotropies
If we imagine rewinding the universe to early times, the scale factor shrinks and the existence of a finite spatial volume may play a role in quantum tunnelling effects in a closed universe. It has recently been shown that such finite volume effects dynamically generate an effective equation of state that could support a cosmological bounce. In this work we extend the analysis to the case in which a (homogeneous) anisotropy is present, and identify a criterion for a successful bounce in terms of the size of the closed universe and the properties of the quantum field.
Jean Alexandre, Katy Clough, Silvia Pla
2023-08-01T18:04:17Z
http://arxiv.org/abs/2308.00765v2
# Tunnelling-induced cosmic bounce in the presence of anisotropy ###### Abstract If we imagine rewinding the universe to early times, the scale factor shrinks and the existence of a finite spatial volume may play a role in quantum tunnelling effects in a closed universe. It has recently been shown that such finite volume effects dynamically generate an effective equation of state that could support a cosmological bounce. In this work we extend the analysis to the case in which a (homogeneous) anisotropy is present, and identify a criterion for a successful bounce in terms of the size of the closed universe and the properties of the quantum field. + Footnote †: preprint: KCL-PH-TH/2023-42 ## I Introduction Our universe is expanding, and on the largest scales it appears homogeneous and isotropic with a small spatial curvature [1]. In the standard paradigm, in which our universe emerged from a cosmological singularity, inflation provides a dynamical mechanism to achieve the current state for a generic initial condition [2; 3; 4]. However, the success of inflation is not entirely independent of the initial conditions - some models require a certain level of homogeneity in order to proceed (see [5] for a review), and all inflationary potentials will fail to create an exponential expansion for an initially collapsing state in the absence of a violation of the Null Energy Condition (NEC) [6; 7; 8; 9]. An alternative paradigm, that of ekpyrosis [10; 11; 12], commonly uses a mechanism of slow contraction to provide the smoothing of inhomogeneities in the case of a non-singular cosmic bounce (see [13] for a review). Such models also necessitate a violation of the NEC in order to transition to expansion. Therefore, mechanisms that violate the NEC in the early universe are of interest for such scenarios. Most mechanisms for NEC violation require additional exotic components or a modification of general relativity (GR) [8]. Recently, a mechanism has been proposed in which the NEC is violated by finite volume effects, which necessarily occur for a scalar field with a Higgs-like potential in standard Quantum Field Theory (QFT) on an FLRW background with a closed topology [14]. The effect arises from tunnelling between two degenerate vacua, which is allowed if the field is confined in a finite spatial volume 1. One nice aspect of this mechanism is that it "turns off" in a period of expansion, meaning that after a cosmological bounce it would quickly become suppressed - it therefore naturally favours expansion over contraction. We note here that alternative quantum effects, involving fermion dynamics, have been proposed to induce NEC violation and potentially lead to a cosmological bounce [17; 18; 19; 20]. Footnote 1: Note that this effect is distinct to the well known Casimir effect [15]. Although both are based on the finite-volume assumption, tunnelling is independent of the geometry/topology of the spatial unit cell. For a comparison between the two effects, see [16]. A key question is whether the mechanism described in [14] could also provide some kind of smoothing of inhomogeneities or anisotropies, and to what extent it must dominate over these in order for the bounce to be successful. In this work we will discuss the (homogeneous) anisotropic case, and explain why the energy of the quantum fluid must already dominate over the anisotropy before the bounce in order for it to proceed.
In other bounce scenarios, a scalar field that is dominated by its kinetic energy can play the role of a smoother in a preceding slow-contraction phase [13], so it is possible that the field itself could be responsible for this during an earlier kinetic-dominated phase (for example, as it rolls down into one of the minima of the potential) 2. However, the description we use here is only valid in the vicinity of the bounce, and so more work is required to quantify out-of-equilibrium effects and the impact of inhomogeneities at an earlier stage. These aspects are more difficult to treat and need to be explored further in future work. What is clear is that by the time the universe is nearing the bounce, the anisotropies must be suppressed in order for it to succeed. Footnote 2: Also, gravitational particle creation tends to rapidly suppress irregularities in the geometry, which can be seen with semiclassical backreaction effects [21]. We will show that criteria for the success of the bounce can be stated in terms of the size of the closed universe at the point at which the net energy density is zero, and the properties of the quantum field (mainly its mass and vacuum energy). We focus on the case of anisotropy as a second component since it is the component that dominates the energy budget most quickly during a collapse. Roughly speaking, at the point of zero net energy density, the size of the universe must be comparable to the Compton wavelength of the field for its pressure to be sufficient to turn around the collapse. We will make this statement more precise in what follows, and give the phenomenological consequences for the field. The article is organised as follows: In Sec. II, we summarise the background of tunnelling in a finite volume, in Sec. III we set out the standard description of a homogeneous anisotropic cosmology, in Sec. IV we extend the QFT description to the anisotropic case and in Sec. V we describe the conditions for success in terms of the model parameters and discuss phenomenology. We briefly provide some numerical illustrations in Sec. VI and conclude with a brief discussion in Sec. VII. ## II Background I: tunnelling in an FLRW background with finite volume Spontaneous Symmetry Breaking, where the scalar field is trapped above one vacuum, is only strictly valid in an infinite volume, where tunnelling to another degenerate vacuum is completely suppressed. In a finite volume, tunnelling between two degenerate bare vacua \(\phi=\pm v\) is possible, leading to an effective potential with a lower overall minimum (see figure 1). Using a semi-classical approximation for the partition function, which is dominated by a dilute gas of instantons and anti-instantons, it was shown in [16; 22] that the resulting effective theory is such that: 1. the true vacuum is symmetric and occurs at \(\phi=0\), consistently with convexity of the effective potential when several saddle points are taken into account [23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33]; 2. the corresponding effective action is not extensive (it is not proportional to the volume), but has a non-trivial volume dependence; 3. the resulting ground state fluid violates the NEC. The above features arise from non-perturbative properties of the partition function, since a convex effective potential cannot be obtained from a bare double-well potential with perturbative quantum corrections only.
The (anti-)instantons considered in [16; 22] depend only on the Euclidean time, and the tunnelling process is similar to the one described in Quantum Mechanics, where tunnelling of a particle in a double-well potential leads to a true ground state energy which is lower than the ground state energy in both individual degenerate wells [34]3. Footnote 3: Because the vacua are degenerate, the \(O(4)\)-symmetric Coleman bounce [35; 36] does not play a role here, since it would require a bubble with infinite radius. The tunnelling mechanism described in [16; 22] is therefore not related to a first-order quantum phase transition, but rather to a second-order phase transition, and happens uniformly in space. Following the same approximation for the partition function, but in a flat and isotropic Friedmann-Lemaître-Robertson-Walker (FLRW) universe, it was shown in [14] that the above mechanism dynamically generates a cosmic bounce, which is followed by an asymptotic de Sitter phase where tunnelling is suppressed exponentially. In this context, "finite volume" is provided by a unit cell in the form of a 3-torus of volume \(V_{cell}\). Some recent works in cosmology have considered the evidence for a closed universe, see for example [37]. However, since we do not see any periodicity in our observable universe, any fundamental cell must be larger than the currently observable universe, so any finite volume effects are now negligible. The physical size of the closed universe can of course be much smaller far in the past, when our comoving volume was smaller, and thus finite volume effects could have played a role in the early universe. In the isotropic case of [14], tunnelling between the minima of the double-well potential \[U(\phi)=\kappa^{-1}\Lambda_{b}+\frac{\lambda_{b}}{24}(\phi^{2}-v_{b}^{2})^{2}\, \tag{1}\] leads to the convex effective potential \[U_{eff}(\phi)=U_{0}+\frac{1}{2}M^{2}\phi^{2}+\mathcal{O}(\phi^{4}) \tag{2}\] where \[U_{0} = \kappa^{-1}\Lambda\left(1-r\frac{e^{-\alpha_{iso}^{3}}}{\alpha_{iso}^{3/2}}\right) \tag{3}\] \[M^{2} = \frac{\lambda v^{2}}{3}\left(1-\frac{27\lambda}{32\pi^{2}}\right)>0\.\] In the above expressions, \(\lambda_{b}\), \(\Lambda_{b}\) and \(v_{b}\) have to be understood as the bare parameters, while \(\Lambda\), \(\lambda\) and \(v\) are the renormalised parameters 4. Also, \(\kappa=8\pi G\) and we have defined \[r=\frac{\lambda\kappa v^{4}}{3\sqrt{3\pi}\ \Lambda}\,\qquad\alpha_{iso}^{3}\equiv a^{3}\Sigma\, \tag{4}\] where \(\Sigma\) is the action of one instanton relating the bare vacua and \(a\) is the FLRW scale factor (we consider \(\hbar=c=1\)). The potential is illustrated in Fig. 1. Both \(r\) and \(\Sigma\) depend on the field parameters, but \(\Sigma\) is also proportional to the volume \(V_{cell}\) of the fundamental spatial cell, which therefore needs to be finite for the tunnelling probability \(\propto\exp(-\alpha_{iso}^{3})\) to be finite - that is, one requires a closed universe. Figure 1: Schematic representation of the bare potential \(U(\phi)\) (blue line) and the effective potential \(U_{eff}(\phi)\) obtained from tunnelling between the vacua (orange dashed line). \(U_{0}\) is given by eq.(3), from which we can see that the effective potential curve reaches a lower value for a smaller volume. In this way the finite volume effects act as a negative contribution to the energy density, and can violate the NEC during a period of contraction.
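As a quick numerical illustration of eq. (3), the dimensionless vacuum energy \(\kappa U_{0}/\Lambda=1-r\,e^{-\alpha_{iso}^{3}}/\alpha_{iso}^{3/2}\) can be tabulated to show how the tunnelling term suppresses - and for small enough volume overturns - the vacuum energy; the values of \(r\) below are illustrative only:

```python
import numpy as np

def rho_vac(alpha, r):
    """Dimensionless ground-state energy density kappa*U_0/Lambda from eq.(3):
    1 - r * exp(-alpha^3) / alpha^(3/2), with alpha^3 = a^3 * Sigma."""
    return 1.0 - r * np.exp(-alpha**3) / alpha**1.5

alpha = np.linspace(0.3, 3.0, 6)        # dimensionless average size of the cell
for r in (1.0, 5.0):                    # illustrative field parameters only
    print(r, np.round(rho_vac(alpha, r), 3))
```

For large \(\alpha\) the energy density tends to the bare \(+\Lambda/\kappa\), while for small \(\alpha\) the tunnelling contribution dominates and drives it negative, which is the finite-volume effect exploited for the bounce.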
The present study extends the work of [14] to the anisotropic case, and considers the impact of other components being present. In this work, an adiabatic approximation is assumed, where the tunnelling rate is large compared to the expansion rate, which allows the use of equilibrium QFT. This approximation is very good in the vicinity of the bounce, which is the regime on which we focus. ## III Background II: Homogeneous Anisotropic Universe Description We review here features of the homogeneous but anisotropic universe relevant to our study, and in particular discuss how the anisotropy can be treated as an additional matter component in the Friedmann equations, assuming an appropriate equation of state. Starting with the anisotropic Bianchi-I metric \[\mathrm{d}s^{2}=-\mathrm{d}t^{2}+a_{1}^{2}\,\mathrm{d}x^{2}+a_{2}^{2}\, \mathrm{d}y^{2}+a_{3}^{2}\,\mathrm{d}z^{2}\, \tag{5}\] the Friedmann equations read (\(i,j=1,2,3\)) \[H_{1}H_{2}+H_{1}H_{3}+H_{2}H_{3} = +\kappa\rho\, \tag{6}\] \[(\mathrm{for}\ i\neq j)\ \ \ \ H_{i}H_{j}+\frac{\ddot{a}_{i}}{a_{i}}+ \frac{\ddot{a}_{j}}{a_{j}} = -\kappa p\, \tag{7}\] where \(H_{i}=\dot{a}_{i}/a_{i}\) and as above \(\kappa=8\pi G\). Following [38; 39] we note that eq.(6) can be written \[H^{2}=\frac{\kappa}{3}\rho+\sigma^{2}\, \tag{8}\] where the averaged Hubble rate \(H\) and the anisotropy \(\sigma^{2}\) are defined as \[H := \frac{1}{3}(H_{1}+H_{2}+H_{3})\, \tag{9}\] \[\sigma^{2} := \frac{1}{18}\Big{[}(H_{1}-H_{2})^{2}+(H_{2}-H_{3})^{2}+(H_{1}-H_{ 3})^{2}\Big{]}\.\] Eq.(8) shows that \(3\kappa^{-1}\sigma^{2}\) can be interpreted as an energy density arising from anisotropy. Similarly, the trace of Friedmann equations can be written \[\dot{H}+H^{2}=-\frac{\kappa}{6}(\rho+3p)-2\sigma^{2}\, \tag{10}\] such that \(3\kappa^{-1}\sigma^{2}\) can also be interpreted as a pressure arising from anisotropy. Anisotropy therefore plays a role similar to a homogeneous perfect fluid with equation of state \(w=p/\rho=1\), and we expect that the corresponding energy density scales as \(a^{-3(1+w)}=a^{-6}\), where \(a=(a_{1}a_{2}a_{3})^{1/3}\) is the average scale factor. This is consistent with eqs.(7) from which one can show that \[\dot{H}_{i}-\dot{H}_{j}=-3H(H_{i}-H_{j})\, \tag{11}\] which implies \((H_{i}-H_{j})\propto a^{-3}\) and thus \(\sigma^{2}\propto a^{-6}\). Finally we note that, if a bounce occurs, then at this bounce \(H=0\) and \(\dot{H}>0\), such that the matter contribution should satisfy at the bounce \[\rho = -\frac{3}{\kappa}\sigma^{2}\leq 0 \tag{12}\] \[\rho+3p = -\frac{6}{\kappa}\left(\dot{H}+2\sigma^{2}\right)\leq 0\,\] as in the isotropic case. ## IV Anisotropic Quantum Fluid Description As discussed above, in the isotropic case (\(a_{1}=a_{2}=a_{3}\equiv a\)) and from the effective potential (2), the action for the true ground state \(\phi=0\) is \[S_{eff}^{isotropic}=\int\mathrm{d}^{4}x\sqrt{-g}\ \kappa^{-1}\Lambda\left(1-r\ \frac{e^{-\alpha_{iso}^{3}}}{\alpha_{iso}^{3/2}}\right). \tag{13}\] In the anisotropic case (5), the only change to the instanton action \(\Sigma\) is via the volume \(a^{3}V_{cell}\to a_{1}a_{2}a_{3}V_{cell}\), such that the action (13) must be modified as \[S_{eff}=\int\mathrm{d}^{4}x\sqrt{-g}\ \kappa^{-1}\Lambda\left(1-r\ \frac{e^{-\alpha_{1}\alpha_{2}\alpha_{3}}}{\sqrt{\alpha_{1}\alpha_{2}\alpha_{ 3}}}\right)\, \tag{14}\] where \(\alpha_{i}\equiv a_{i}\Sigma^{1/3}\). 
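As a quick numerical consistency check of the fluid interpretation of anisotropy given in Sec. III, the following sketch (our own illustration, assuming the vacuum Kasner exponents \(p_{i}=(2/3,2/3,-1/3)\), for which \(\sum p_{i}=\sum p_{i}^{2}=1\)) verifies that \(H^{2}=\sigma^{2}\) when \(\rho=0\), as required by eq.(8), and that \(\sigma^{2}a^{6}\) is constant in time.

```python
import numpy as np

# Vacuum Kasner exponents: sum(p) = sum(p^2) = 1, so rho = 0 in eq. (8).
p = np.array([2/3, 2/3, -1/3])

for t in (1.0, 2.0, 5.0):
    H = p / t                                  # directional Hubble rates H_i
    Hbar = H.mean()                            # eq. (9): averaged Hubble rate
    sigma2 = ((H[0]-H[1])**2 + (H[1]-H[2])**2 + (H[0]-H[2])**2) / 18.0
    a = t**(p.sum() / 3.0)                     # average scale factor (a1 a2 a3)^{1/3}
    print(f"t={t}: H^2 - sigma^2 = {Hbar**2 - sigma2:+.2e}, sigma^2 * a^6 = {sigma2 * a**6:.4f}")
```

The second printed quantity stays fixed at \(1/9\), confirming the stiff-fluid scaling \(\sigma^{2}\propto a^{-6}\) derived from eq.(11).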
The stress-energy tensor can be decomposed as \[T_{\mu\nu}\equiv\frac{2}{\sqrt{-g}}\frac{\delta S_{eff}}{\delta g^{\mu\nu}}= \mathrm{diag}(\rho,a_{1}^{2}\,p,a_{2}^{2}\,p,a_{3}^{2}\,p)\, \tag{15}\] and leads to the dimensionless energy density and pressure \[\tilde{\rho} \equiv \frac{\kappa\rho}{\Lambda}=1-r\ \frac{e^{-\alpha^{3}}}{\alpha^{3/2}}\, \tag{16}\] \[\tilde{p} \equiv \frac{\kappa p}{\Lambda}=-1+r\ \left(\frac{1}{2\alpha^{3/2}}-\alpha^{3/2} \right)e^{-\alpha^{3}}\,\] where the (rescaled) average scale factor \(\alpha\) is defined by \[\alpha^{3}\equiv a_{1}a_{2}a_{3}\Sigma. \tag{17}\] In the previous expressions and from its definition in Eq.(4), \(r\) describes the quantum field - it is completely determined once we specify its mass, vacuum energy and self interaction strength (via the parameters \(v\), \(\Lambda\) and \(\lambda\)). The pressure and energy density are therefore determined by the combination of field parameters \(r\) and the average size of the universe \(a\,V_{cell}^{1/3}\). The Friedmann equations read, in terms of the rescaled quantities, with the rescaled time \(\tau\equiv t\ \sqrt{\Lambda/3}\), \[\frac{{\cal H}_{1}{\cal H}_{2}}{3}+\frac{{\cal H}_{1}{\cal H}_{3}} {3}+\frac{{\cal H}_{2}{\cal H}_{3}}{3} = +\tilde{\rho}\, \tag{18}\] \[(\mbox{for}\ i\neq j)\quad\frac{{\cal H}_{i}{\cal H}_{j}}{3}+ \frac{\alpha_{i}^{\prime\prime}}{3\alpha_{i}}+\frac{\alpha_{j}^{\prime\prime} }{3\alpha_{j}} = -\tilde{p}\, \tag{19}\] where a prime refers to the derivative with respect to \(\tau\). We can then define the rescaled average Hubble rate \({\cal H}\) and anisotropy \(\tilde{\sigma}^{2}\) \[{\cal H} := \frac{1}{3}({\cal H}_{1}+{\cal H}_{2}+{\cal H}_{3})\, \tag{20}\] \[\tilde{\sigma}^{2} := \frac{1}{18}\Big{[}({\cal H}_{1}-{\cal H}_{2})^{2}+({\cal H}_{2}- {\cal H}_{3})^{2}+({\cal H}_{1}-{\cal H}_{3})^{2}\Big{]}\,\] so that eq.(18) can be simply written as \[{\cal H}^{2}=\tilde{\rho}+\tilde{\sigma}^{2}. \tag{21}\] We can see from eq. (12) that we need \(\tilde{\rho}<0\) and \(\tilde{\rho}+3\tilde{p}<0\) in order to balance out the anisotropy contribution in the vicinity of the average bounce, defined by \({\cal H}=0\) and \({\cal H}^{\prime}>0\). In what follows, we will study under which conditions the cosmological bounce can be induced. ## V Critical solutions As discussed in the previous section, in a universe with significant anisotropy, an additional contribution to the energy density and the pressure of the spacetime exists. Starting from some initial condition, several scenarios are possible given the different scalings in \(\alpha\). The NEC violation from tunnelling does not necessarily win over the anisotropy during the collapse (even where it is initially larger) - a bounce requires not only that both contributions cancel each other such that \({\cal H}=0\), but also that at this point of equality, the pressure satisfies the necessary condition for the universe to bounce (i.e., \({\cal H}^{\prime}>0\)). For this latter condition to be true, the size of the universe at this point must be sufficiently small (relative to the field parameters) for finite volume effects to be significant, but not so small that a collapse becomes unavoidable. In this section we derive the specific requirements, and comment on the resulting phenomenology. ### Critical point The critical solution of the Friedmann equations for which a bounce occurs (\(\rho_{c},p_{c},\alpha_{c},\tilde{\sigma}_{c}^{2}\)) can be found by imposing the condition \({\cal H}={\cal H}^{\prime}=0\).
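Before analysing this critical solution, two properties of eqs.(16)-(21) can be checked directly: the quantum fluid satisfies \(\tilde{\rho}+\tilde{p}=-r\,e^{-\alpha^{3}}\big{(}\tfrac{1}{2}\alpha^{-3/2}+\alpha^{3/2}\big{)}<0\) for every \(\alpha\), so it always violates the NEC, and the averaged dynamics closes using \(\tilde{\sigma}^{2}\propto\alpha^{-6}\). The following minimal sketch is our own illustration of the averaged system only (the full anisotropic integration is presented in Sec. VI, whose bouncing initial values the parameters below mirror).

```python
import numpy as np
from scipy.integrate import solve_ivp

def rho_t(a, r):   # dimensionless energy density, eq. (16)
    return 1.0 - r * np.exp(-a**3) / a**1.5

def p_t(a, r):     # dimensionless pressure, eq. (16)
    return -1.0 + r * (0.5 / a**1.5 - a**1.5) * np.exp(-a**3)

r = 2.0
a = np.linspace(0.2, 3.0, 200)
assert np.all(rho_t(a, r) + p_t(a, r) < 0)   # the quantum fluid always violates the NEC

# Averaged dynamics: alpha''/alpha = -(rho + 3p)/2 - 2*sigma^2, with sigma^2 = s0/alpha^6.
def rhs(tau, y, r, s0):
    al, dal = y
    sig2 = s0 / al**6
    return [dal, al * (-(rho_t(al, r) + 3.0 * p_t(al, r)) / 2.0 - 2.0 * sig2)]

a0, s0 = 1.0, 0.05                            # initial size and anisotropy
H0 = -np.sqrt(rho_t(a0, r) + s0 / a0**6)      # contracting branch of eq. (21)
sol = solve_ivp(rhs, (0.0, 6.0), [a0, H0 * a0], args=(r, s0), rtol=1e-9, atol=1e-12)
alpha = sol.y[0]
print(f"min(alpha) = {alpha.min():.3f}, final H = {sol.y[1][-1] / alpha[-1]:+.3f}")
```

With these parameters the averaged scale factor turns around and \({\cal H}\) ends positive, i.e., the universe bounces; a smaller initial size or larger anisotropy leads instead to collapse. The nature of the critical solution separating these two behaviours is described next.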
This critical point is unstable: a value of \(\alpha\) that is slightly larger than \(\alpha_{c}\) leads to a bounce (the NEC violation \(\propto\alpha^{-3/2}\) dominates) and a value which is slightly smaller leads to a collapse (the anisotropy \(\propto\alpha^{-6}\) dominates). From these conditions, the energy density and the pressure at the critical point satisfy \[\tilde{p}_{c}=\tilde{\rho}_{c}=-\tilde{\sigma}_{c}^{2}. \tag{22}\] Also, from eqs.(16), we find that the averaged scale factor \(\alpha_{c}\) is given by the implicit algebraic equation \[4\alpha_{c}^{3/2}+re^{-\alpha_{c}^{3}}(-3+2\alpha_{c}^{3})=0\, \tag{23}\] and the anisotropy can be expressed as \[\tilde{\sigma}_{c}^{2}=\frac{1+2\alpha_{c}^{3}}{3-2\alpha_{c}^{3}}. \tag{24}\] We can see from eq.(23) that necessarily \(\alpha_{c}^{3}<3/2\), and one can identify the two regimes \[\alpha_{c} \rightarrow (3/2)^{1/3}\quad\mbox{ for}\ \ r\gg 1 \tag{25}\] \[\alpha_{c} \sim \left(\frac{3r}{4}\right)^{2/3}\quad\mbox{ for}\ \ r\ll 1\.\] One can understand the role of \(\alpha_{c}\) from the point of view of the pressure. Assume that there is a time \(\tau_{1}\) where \({\cal H}(\tau_{1})=0\): * A bounce requires the condition \({\cal H}^{\prime}(\tau_{1})>0\), and thus \(|\tilde{p}(\tau_{1})|>|\tilde{\rho}(\tau_{1})|\), such that \[4\alpha^{3/2}(\tau_{1})+re^{-\alpha^{3}(\tau_{1})}[-3+2\alpha^{3}(\tau_{1})]>0\,\] (26) which leads to \(\alpha(\tau_{1})>\alpha_{c}\); * A collapse follows in the situation where \({\cal H}^{\prime}(\tau_{1})<0\), and thus \(|\tilde{p}(\tau_{1})|<|\tilde{\rho}(\tau_{1})|\), such that \[4\alpha^{3/2}(\tau_{1})+re^{-\alpha^{3}(\tau_{1})}[-3+2\alpha^{3}(\tau_{1})]<0\,\] (27) which leads to \(\alpha(\tau_{1})<\alpha_{c}\). ### Comparison with the isotropic case One can also infer a maximum value for the rescaled scale factor \(\alpha_{iso}\) at the bounce, which happens when the anisotropy (and any other components, if present) is negligible and the quantum field completely dominates. We then have from eq.(16) \[\alpha_{iso}^{3/2}=r\ e^{-\alpha_{iso}^{3}}\, \tag{28}\] which leads to the two regimes \[\alpha_{iso} \sim (\ln r)^{1/3}\quad\mbox{ for}\quad\ r\gg 1 \tag{29}\] \[\alpha_{iso} \sim r^{2/3}\quad\mbox{ for}\quad\ r\ll 1\, \tag{30}\] and we note that \(\alpha_{iso}\) is not bounded when \(r\rightarrow\infty\). We sketch \(\alpha_{c}\) and \(\alpha_{iso}\) on Fig.2, where the region between the two curves represents the possible range of values of the rescaled scale factor at which a bounce can occur for a particular quantum field (as parametrised by \(r\)). ### Size of the Universe at the bounce From the previous results one can put bounds on the typical physical size of the universe at the bounce, for a given field. The instanton action is of the order [16] \[\Sigma\sim\frac{m^{3}}{\lambda}V_{cell}\, \tag{31}\] where \(m=v\sqrt{\lambda/3}\). The physical length \(L\) is given by \(L=a(t)V_{cell}^{1/3}\) and its value at the bounce then satisfies \[\lambda^{1/3}\ \frac{\alpha_{c}}{m}\ \lesssim\ L_{b}\ \lesssim\ \lambda^{1/3}\ \frac{\alpha_{iso}}{m}. \tag{32}\] To get a feel for the phenomenological consequences of the model, we can relate the field quantities \(m\) and \(\Lambda\) with the size of the closed universe at the bounce \(L_{b}\), parameterised by the ratio \(r\). For simplicity we assume here that \(\lambda^{1/3}\) is of order 1 5. Footnote 5: This assumption is not necessarily justified for an axion-like particle, with a potential of the form \(M^{4}\cos(\phi/f)\).
Indeed, a small mass \(\propto M^{2}/f\) compared to 1eV [40] implies an extremely small self-coupling constant \(\propto M^{4}/f^{4}\). However, there are axion models not requiring such a small self coupling, as for example in the string-inspired model presented in [41]. Our results can easily be adapted to other values for \(\lambda\) depending on the model. The current size of the visible universe is \(\sim 10^{26}\) meters, and we do not see any evidence of periodicity in it [42]. Any bounce must have happened before the electroweak phase transition, at which point the size of our observable universe was about \(10^{11}\) meters. This therefore imposes a minimum on the size of the closed universe at the bounce - any smaller and we would see evidence for periodicity now. However, a bounce could also have occurred much earlier than this and so the universe could have been smaller. If instead we take the bounce to occur at the era of grand unification, the size of the closed universe would be of order 1 meter or larger. As can be seen from Fig.3, for \(r=1\), if the bounce occurred when \(L\sim 1\) m, the scalar field would need a bare mass of \(\sim 10^{-7}\) eV and a vacuum energy \(\Lambda\) of order \(10^{-140}\ \ell_{p}^{-2}\), therefore much smaller than the current cosmological constant. The plot illustrates how the values change for different values of \(r\), but in general one requires a larger mass to be consistent with a smaller \(L\), and small values for the vacuum energy are required. Figure 3: These plots illustrate consistent values of the field parameters and the size \(L_{b}\) of the closed universe at the bounce for different values of the field parameter \(r\). The upper plot shows the relation between \(\Lambda\) and \(m\). The lower plot shows the relation between the maximum value for \(L_{b}\) and \(m\) (the minimum of \(L_{b}\) has very similar values and is not represented for the sake of clarity - thus the range of possible values between the limits in Fig. 2 all lie close to the lines in this plot). We note that the function \(L_{b}(m)\) is almost independent of \(r\) for \(r\gg 1\), whereas it changes significantly for \(r\ll 1\). Figure 2: A plot of the rescaled average scale factor at the bounce \(\alpha=a\Sigma^{1/3}\) (which is related to the size of the closed universe) versus \(r\) (which is determined by the properties of the field). We plot the maximum \(\alpha_{iso}\) (orange line) and minimum \(\alpha_{c}\) (dashed blue line) values, as a function of \(r\). Although \(\alpha_{c}\) asymptotically tends to \((3/2)^{1/3}\), \(\alpha_{iso}\) is not bounded and goes to infinity when \(r\rightarrow\infty\). ## VI Numerical solutions In this section we numerically integrate the Friedmann equations (18)-(19), both in the bouncing case (where NEC violation dominates) and in the collapsing case (where the anisotropy dominates), to illustrate the possible outcomes. We choose the initial anisotropy \(\tilde{\sigma}^{2}(\tau_{0})\) as one of the parameters. For simplicity, we will consider \(\alpha_{2}(\tau_{0})=\alpha(\tau_{0})\) and \(\mathcal{H}_{2}(\tau_{0})=\mathcal{H}(\tau_{0})\). Hence from (20) we find for all times \[\mathcal{H}_{1} = \mathcal{H}\pm\sqrt{3\tilde{\sigma}^{2}}\, \tag{33}\] \[\mathcal{H}_{2} = \mathcal{H}\, \tag{34}\] \[\mathcal{H}_{3} = \mathcal{H}\mp\sqrt{3\tilde{\sigma}^{2}}\,
\tag{35}\] The initial value of \(\mathcal{H}\) can then be determined by equations (21) and (16), namely \[\mathcal{H}^{2}(\tau_{0})=\tilde{\sigma}^{2}(\tau_{0})+1-r\frac{e^{-\alpha^{3 }(\tau_{0})}}{\alpha^{3/2}(\tau_{0})}. \tag{36}\] We take the negative root \(\mathcal{H}(\tau_{0})<0\) since we want to start from a contracting phase. As a consequence, the initial Hubble rates are entirely determined by \(\tilde{\sigma}^{2}(\tau_{0})\), and for the numerical analysis the quantities we fix are \(\tilde{\sigma}^{2}(\tau_{0})\), \(\alpha(\tau_{0})\) and \(r\). We take \(r\leq e\), so that we are before the point of equality in the energy densities in the anisotropy and the quantum field. For these values of \(r\), we choose the initial average scale factor such that \(\alpha^{3}(\tau_{0})=\alpha_{1}(\tau_{0})\alpha_{2}(\tau_{0})\alpha_{3}(\tau_ {0})=1\). This allows the bounce to happen soon after the initial time, compared to the typical time scale of the whole process. A larger initial scale factor would shift the time when the bounce occurs. Fig.4 shows an example of a bouncing solution. We include the time evolution of the scale factors \(\alpha_{i}\), and Hubble rates \(\mathcal{H}_{i}\), the anisotropy \(\tilde{\sigma}^{2}\), and the energy density \(\tilde{\rho}\). We choose \(r=2\), and initial conditions at \(\tau_{0}=0\) given by \(\alpha_{i}(\tau_{0})=\{2/3,1,3/2\}\) and \(\tilde{\sigma}^{2}(\tau_{0})=0.05\). Fig.5 shows an example of a case where the bounce is not reached, due to the anisotropy dominating, but that is close to the critical case. The initial conditions are \(r=2\), \(\alpha_{i}(\tau_{0})=\{2/3,1,3/2\}\) and \(\tilde{\sigma}^{2}(\tau_{0})=0.18247\). ## VII Discussion In this work we have studied the possibility of a cosmic bounce occurring in a universe in which there is a significant (but not dominant) anisotropy, in addition to the presence of a scalar field which is subject to finite volume effects in a closed universe. We have shown that criteria for the success of the bounce can be stated in terms of the size of the closed universe at the point at which the net energy density is zero, and have studied the properties of the quantum field (mass and vacuum energy) that permit a bounce of a size consistent with our own cosmological history. At the point of zero net energy density, the size of the universe must be roughly comparable to the Compton wavelength of the field for its pressure to be sufficient to turn around the collapse, so values smaller than \(\sim 10^{-5}\) eV are needed for the mass. We also find that the vacuum energy of the field must be extremely small, even in comparison to the current day cosmological constant. After the bounce, the universe transitions to expansion and the contribution of the quantum field to the energy density reduces to its vacuum energy, with finite volume effects completely suppressed. The fact that this vacuum energy is smaller than the current value is therefore consistent with what we observe (it could be a small contribution to its value), but the smallness of the value seems to require some further explanation - although this is of course true of the cosmological constant itself. Several further consequences of the tunnelling effect in a finite volume remain to be explored. One would involve an out-of-equilibrium QFT description of tunnelling in the background of a time-varying metric, which would allow a more accurate study away from the bounce.
Then, the inclusion of the Casimir effect due to the finite volume \(V_{cell}\) could give rise to new effects, with possible cosmological relevance. Also, the effect of spatial curvature should be included, in the situation where the space fundamental cell is a 3-sphere instead of a 3-torus, and these studies are left for future works. ###### Acknowledgements. We thank Tim Clifton, Malcolm Fairbairn and David J. Marsh for helpful conversations. This work is supported by the Leverhulme Trust (grant RPG-2021-299). JA is also supported by the Science and Technology Facilities Council (grant STFC-ST/T000759/1). KC is supported by an STFC Ernest Rutherford fellowship, project reference ST/V003240/1. For the purpose of Open Access, the authors have applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission.
2309.01746
On a theorem of Lafforgue
We give a new proof, along with some generalizations, of a folklore theorem (attributed to Laurent Lafforgue) that a rigid matroid (i.e., a matroid with indecomposable basis polytope) has only finitely many projective equivalence classes of representations over any given field.
Matthew Baker, Oliver Lorscheid
2023-09-04T18:04:57Z
http://arxiv.org/abs/2309.01746v2
# On a theorem of Lafforgue ###### Abstract. We give a new proof, along with some generalizations, of a folklore theorem (attributed to Laurent Lafforgue) that a rigid matroid (i.e., a matroid with indecomposable basis polytope) has only finitely many projective equivalence classes of representations over any given field. The authors thank David Speyer, Alex Fink, and Rudi Pendavingh for helpful discussions. We also thank BIRS for their hospitality hosting the workshop 23w5149: "Algebraic Aspects of Matroid Theory", during which the arguments in this paper emerged. Thanks also to Justin Chen, Eric Katz, and Bernd Sturmfels for their feedback on an earlier version of the paper. The first author was supported by NSF grant DMS-2154224 and a Simons Fellowship in Mathematics. The second author was supported by Marie Sklodowska Curie Fellowship MSCA-IF-101022339. ## 2 Reformulation and generalizations of Lafforgue's theorem It is well-known to experts that a matroid \(M\) is rigid if and only if every valuated matroid \(\mathbb{M}\) whose underlying matroid is \(M\) is rescaling equivalent to the trivially valuated matroid. Since we could not find a reference for this result, we provide a proof in Appendix B. Recall from [3] (see also Appendix A) that there is a category of algebraic objects called _pastures_, which generalize not only fields but also partial fields and hyperfields. According to [1], there is a robust notion of (weak) matroids over a pasture1\(P\) such that (to mention just a few examples): Footnote 1: Technically speaking, [1] deals with _tracts_, not pastures, but the difference between the two is immaterial when considering weak matroids. For the sake of brevity, we do not define tracts in this paper, nor do we consider idylls or ordered blueprints (both of which play a prominent role in [4]). * Matroids over the Krasner hyperfield \(\mathbb{K}\) are the same thing as matroids in the usual sense. * Matroids over the tropical hyperfield \(\mathbb{T}\) are the same thing as valuated matroids. * Matroids over a field \(K\) are the same thing as \(K\)-representable matroids, together with a choice of a matrix representation (up to the equivalence relation where two matrices are equivalent if they have the same row space). For every matroid \(M\) there is a functor \(\mathcal{X}_{M}\) from pastures to sets taking a pasture \(P\) to the set of rescaling equivalence classes of (weak) \(P\)-representations of \(M\). A matroid \(M\) is rigid if and only if \(\mathcal{X}_{M}(\mathbb{T})\) consists of a single point. For a field \(K\), \(\mathcal{X}_{M}(K)\) coincides with the set of projective equivalence classes of representations of \(M\) over \(K\). Thus Lafforgue's theorem is equivalent to the assertion that if \(\mathcal{X}_{M}(\mathbb{T})\) is a singleton, then \(\mathcal{X}_{M}(K)\) is finite for every field \(K\). Recall from [3] that for every matroid \(M\), the functor \(\mathcal{X}_{M}\) is representable by a pasture \(F_{M}\) canonically associated to \(M\), called the _foundation_ of \(M\). Concretely, this means that \(\operatorname{Hom}(F_{M},P)=\mathcal{X}_{M}(P)\) for every pasture \(P\), functorially in \(P\). From this point of view, Lafforgue's theorem is equivalent to the assertion that if \(\operatorname{Hom}(F_{M},\mathbb{T})=\{0\}\), then \(\operatorname{Hom}(F_{M},K)\) is finite for every field \(K\). This is the statement of Lafforgue's theorem that we actually prove in this paper. 
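To make the tropical side of this statement concrete, the following small sketch (our own illustration, written additively in the min-plus convention, i.e., after applying \(-\log\) to nonzero elements of \(\mathbb{T}\)) tests the unique three-term tropical Plucker relation for the uniform matroid \(U_{2,4}\); functions passing this test are exactly the valuated matroid structures on \(U_{2,4}\). The function name is ours.

```python
from itertools import combinations

# Valuations on the six bases of U_{2,4}, written additively (min-plus). The
# single three-term tropical Plucker relation requires that the minimum of the
# three terms below is attained at least twice.
def is_valuated_u24(p, tol=1e-9):
    terms = (p[(1, 2)] + p[(3, 4)], p[(1, 3)] + p[(2, 4)], p[(1, 4)] + p[(2, 3)])
    m = min(terms)
    return sum(t - m < tol for t in terms) >= 2

bases = list(combinations(range(1, 5), 2))
trivial = {B: 0.0 for B in bases}             # the trivially valuated matroid
print(is_valuated_u24(trivial))                               # True
print(is_valuated_u24({**trivial, (1, 2): 1.0}))              # True: a nontrivial valuation
print(is_valuated_u24({**trivial, (1, 3): 1.0, (1, 4): 2.0})) # False: not a valuated matroid
```

The existence of valuations that are not rescaling equivalent to the trivial one is exactly what fails for rigid matroids; \(U_{2,4}\) is revisited in Example 3.3 below.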
The advantage of this formulation is that it turns out to be a special case of a result which can be formulated purely in the language of pastures, without any mention of matroids! In fact, the algebraic incarnation of this result holds more generally with pastures (which generalize fields) replaced by _bands_ (which generalize rings). See Appendix A for an overview of bands, including a definition, some examples, and the key facts needed for the present paper. ### An algebraic generalization of Lafforgue's theorem In order to state the algebraic result about bands which implies Lafforgue's theorem, we mention (see Proposition 4.4 below) that given a band \(B\) and a field \(K\), there is a canonically associated \(K\)-algebra \(\rho_{K}(B)\) with the universal property that \(\operatorname{Hom}_{\operatorname{Band}}(B,S)=\operatorname{Hom}_{K- \operatorname{alg}}(\rho_{K}(B),S)\) for every \(K\)-algebra \(S\). Moreover, if \(B\) is finitely generated (which is the case, for example, when \(B=F_{M}\) for some matroid \(M\)), then so is \(\rho_{K}(B)\). If \(B\) is finitely presented, the set \(\operatorname{Hom}(B,\mathbb{T})\) has the structure of a finite polyhedral complex \(\Sigma_{B}\); cf. Remark 4.5. Moreover, if \(K\) is a field, the set \(\operatorname{Hom}_{\operatorname{Band}}(B,K)\) is equal to \(\operatorname{Hom}_{K-\operatorname{alg}}(\rho_{K}(B),K)\), which is in turn equal to the set \(X_{B,K}(K)\) of \(K\)-points of the finite type affine \(K\)-scheme \(X_{B,K}:=\operatorname{Spec}(\rho_{K}(B))\). (When \(B=F_{M}\) for a matroid \(M\), we call \(X_{B,K}\) the _reduced realization space_ of \(M\) over \(K\).) Our first generalization of Lafforgue's theorem is as follows: **Theorem 2.1**.: _For every finitely generated band \(B\) and every field \(K\), we have the inequality \(\dim X_{B,K}\leqslant\dim\Sigma_{B}\). In particular, if \(\operatorname{Hom}(B,\mathbb{T})=\{0\}\), then \(\dim\Sigma_{B}=0\) and thus \(X_{B,K}(K)=\operatorname{Hom}(B,K)\) is finite for every field \(K\)._ Applying Theorem 2.1 to \(B=F_{M}\) immediately gives: **Corollary 2.2** (Lafforgue).: _If \(M\) is a rigid matroid, then \(\mathcal{X}_{M}(K)\) is finite for every field \(K\)._ In the terminology of Remark B.2, Theorem 2.1 in the case \(B=F_{M}\) says precisely that for any field \(K\), the dimension of the reduced realization space of \(M\) over \(K\) is bounded above by the dimension of the reduced Dressian of \(M\). ### A relative version of Lafforgue's theorem Rudi Pendavingh (private communication) asked if there might be a relative version of Lafforgue's theorem with respect to minors of \(M\). More precisely, Pendavingh asked the following question: Suppose \(N\) is an (embedded) minor of \(M\) with the property that a valuated matroid structure on \(M\) is determined, up to rescaling equivalence, by its restriction to \(N\). Is it then true that, for every field \(K\), there are (up to projective equivalence) at most finitely many extensions of each \(K\)-representation of \(N\) to a \(K\)-representation of \(M\)? We answer Pendavingh's question in the affirmative, proving the following algebraic generalization of Corollary 2.2: **Theorem 2.3**.: _Let \(K\) be an algebraically closed valued field, and let \(v:K\to\mathbb{T}\) be a non-trivial valuation. 
If \(f:B_{1}\to B_{2}\) is a homomorphism of finitely generated bands, then the fiber dimension of \(f_{K}:\operatorname{Hom}(B_{2},K)\to\operatorname{Hom}(B_{1},K)\) is bounded above by the fiber dimension of \(f_{\mathbb{T}}:\operatorname{Hom}(B_{2},\mathbb{T})\to\operatorname{Hom}(B_{1},\mathbb{T})\), i.e., if \(x\in\operatorname{Hom}(B_{1},K)\) and \(x^{\prime}\) is the image of \(x\) in \(\operatorname{Hom}(B_{1},\mathbb{T})\), then \(\dim f_{K}^{-1}(x)\leqslant\dim f_{\mathbb{T}}^{-1}(x^{\prime})\)._ _In particular, setting \(B_{1}=F_{N}\) and \(B_{2}=F_{M}\) when \(N\) is an embedded minor of a matroid \(M\), we find that if the induced map \(\mathcal{X}_{M}(\mathbb{T})\to\mathcal{X}_{N}(\mathbb{T})\) has finite fibers (i.e., a valuated matroid structure on \(N\) has at most finitely many extensions to \(M\), up to rescaling equivalence) then, for every field \(k\), the natural map \(\mathcal{X}_{M}(k)\to\mathcal{X}_{N}(k)\) has finite fibers, i.e., every \(k\)-representation of \(N\) has at most finitely many extensions to \(M\), up to projective equivalence._ Note that Lafforgue's theorem (Corollary 2.2) follows from the special case of Theorem 2.3 where \(N\) is the trivial (empty) matroid and \(f_{\mathbb{T}}:\operatorname{Hom}(B_{2},\mathbb{T})\to\operatorname{Hom}(B_{1},\mathbb{T})\) has finite fibers. ## 3 Some examples In this section we present examples of both rigid and non-rigid matroids (see Appendix A for some details on our notation). **Example 3.1** (Dress-Wenzel).: In [11, Theorem 5.11], Dress and Wenzel showed that if the inner Tutte group \(F_{M}^{\times}\) of the matroid \(M\) is finite, then \(M\) is rigid. From our point of view, this is clear, since the inner Tutte group is the multiplicative group of the foundation (cf. [4, Corollary 7.13]) and a non-trivial homomorphism \(F_{M}\to\mathbb{T}\) of pastures would give, in particular, a nonzero group homomorphism \(F_{M}^{\times}\to(\mathbb{R},+)\); however the only torsion element of \((\mathbb{R},+)\) is 0. For example: 1. The foundation of the Fano matroid \(F_{7}\) is \(\mathbb{F}_{2}\), so \(F_{7}\) is rigid. More generally, any binary matroid has foundation equal to either \(\mathbb{F}_{1}^{\pm}\) or \(\mathbb{F}_{2}\) [4, Corollary 7.32] and so it is rigid. 2. The foundation of the ternary spike \(T_{8}\) is \(\mathbb{F}_{3}\) (see [5, Proposition 8.9]), so \(T_{8}\) is also rigid. 3. Dress and Wenzel prove in [11, Corollary 3.8] that the inner Tutte group of any finite projective space of dimension at least 2 is finite, which provides a wealth of additional examples of rigid matroids. 4. Since the automorphism group of the ternary affine plane \(M=\operatorname{AG}(2,3)\) acts transitively, all single-element deletions are isomorphic to each other. Let \(M^{\prime}\) be any of these deletions. By [5, Proposition 6.2], the foundation of \(M^{\prime}\) is equal to the hexagonal (or sixth-root-of-unity) partial field \(\mathbb{H}\,=\,\mathbb{F}_{1}^{\pm}(T)/\!\!/\langle\!\langle T^{3}+1,\ T-T^{2}-1\rangle\!\rangle\), whose multiplicative group is the group of sixth roots of unity in \(\mathbb{C}\). Therefore \(M^{\prime}\) is rigid. It is not true that a matroid \(M\) is rigid if and only if its inner Tutte group (or, equivalently, its foundation) is finite. For example: **Example 3.2** (suggested by Rudi Pendavingh).: Let \(M\) be the Betsy Ross matroid (cf. [23, Figure 3.3], where \(M\) is also called \(B_{11}\)).
Using the Macaulay2 software described in [10], we have checked that \(F_{M}\) is the (infinite) golden ratio partial field \(\mathbb{G}\,=\,\mathbb{F}_{1}^{\pm}(T)/\!\!/\langle\!\langle T^{2}-T-1\rangle\!\rangle\). One checks easily that \(\operatorname{Hom}(\mathbb{G},\mathbb{T})\) is trivial, so \(M\) is rigid; in particular, the converse of the statement "\(F_{M}\) finite implies \(M\) rigid" is not true. It is also easy to see directly that \(\mathbb{G}\) admits only finitely many homomorphisms to any field. **Example 3.3**.: The matroid \(U_{2,4}\) is not rigid, since its foundation is the near-regular partial field \(\mathbb{U}\,=\,\mathbb{F}_{1}^{\pm}(T_{1},T_{2})/\!\!/\langle\!\langle T_{1}+T_{2}-1\rangle\!\rangle\), which admits infinitely many different homomorphisms to \(\mathbb{T}\) (map \(T_{1}\) to \(1\) and \(T_{2}\) to any element less than or equal to \(1\), or vice-versa). And for any field \(K\), the reduced realization space \(\mathcal{X}_{M}(K)\) is equal to \(K\backslash\{0,1\}\), so in particular it is infinite whenever \(K\) is. The base polytope of \(U_{2,4}\) is an octahedron, which admits a regular matroid decomposition into two tetrahedra (see [18, p. 189] for a nice visualization). **Example 3.4**.: The non-Fano matroid \(M=F_{7}^{-}\) is not rigid, and it provides an example for which the dimension of the reduced realization spaces \(\mathcal{X}_{M}(K)\) and \(\mathcal{X}_{M}(\mathbb{T})\) jumps. The foundation of \(M\) is the dyadic partial field \(\mathbb{D}=\mathbb{F}_{1}^{\pm}(T)/\!\!/\langle\!\langle T+T-1\rangle\!\rangle\) by [5, Prop. 8.4], and there is at most one homomorphism \(F_{M}=\mathbb{D}\to K\) into any field \(K\), sending \(T\) to the multiplicative inverse of 2 (if it exists, i.e., if \(\operatorname{char}K\neq 2\)). In contrast, there are infinitely many homomorphisms \(\mathbb{D}\to\mathbb{T}\) (parametrized by the image \(f(T)\in\mathbb{T}\) of \(T\)). So \(\dim\mathcal{X}_{M}(K)=0<1=\dim\mathcal{X}_{M}(\mathbb{T})\). ## 4 Proof of the main theorems The key fact needed for the proof of Theorem 2.1 is the following theorem of Bieri and Groves [7, Theorem A], which is a cornerstone of tropical geometry. For the statement, recall that a _semi-valuation_ from a ring \(R\) to \(\overline{\mathbb{R}}=\mathbb{R}\cup\{+\infty\}\) is a map \(v:R\to\overline{\mathbb{R}}\) such that \(v(0)=+\infty\), \(v(xy)=v(x)+v(y)\), and \(v(x+y)\geqslant\min\{v(x),v(y)\}\) for all \(x,y\in R\). (The map \(v\) is called a _valuation_ if, in addition, \(v(x)=+\infty\) implies that \(x=0\).) If \(R\) is a \(K\)-algebra, where \(K\) is a valued field (i.e., a field endowed with a valuation \(v:K\to\overline{\mathbb{R}}\)), a \(K\)_-semi-valuation_ is a semi-valuation which restricts to the given valuation on \(K\). **Theorem 4.1** (Bieri-Groves).: _Let \(K\) be a field endowed with a real valuation \(v\), and suppose \(R\) is a finitely generated \(K\)-algebra with Krull dimension equal to \(n\), having generators \(T_{1},\ldots,T_{n}\). Let \(X=\operatorname{Spec}(R)\) be the corresponding affine \(K\)-scheme. Then the set_ \[\operatorname{Trop}(X):=\{(v(T_{1}),\ldots,v(T_{n}))\mid v:R\to\overline{ \mathbb{R}}\text{ is a $K$-semi-valuation}\}\] _is a polyhedral complex of dimension \(\dim(\operatorname{Trop}(X))=\dim X\)._ **Remark 4.2**.: Bieri and Groves assume that \(X\) is irreducible and show, more precisely, that \(\operatorname{Trop}(X)\) has _pure_ dimension \(n\).
Our formulation of the Bieri-Groves theorem (which does not include the purity statement) follows immediately from theirs by decomposing \(X\) into irreducible components. **Remark 4.3**.: More or less by definition, a semi-valuation on a ring \(R\) is precisely the same thing as a homomorphism from \(R\) to \(\mathbb{T}\) in the category of bands, and if \(K\) is a valued field then a \(K\)-semi-valuation on \(R\) is the same thing as a homomorphism from \(R\) to \(\mathbb{T}\) which restricts to the given homomorphism \(v:K\to\mathbb{T}\) on \(K\). Let \(K\) be a field, and let \(\operatorname{Alg}_{K}\) denote the category of \(K\)-algebras, i.e. ring extensions \(R\) of \(K\) together with \(K\)-linear ring homomorphisms. We write \(\operatorname{Hom}_{K}(R,S)\) for the set of \(K\)-algebra homomorphisms between two \(K\)-algebras \(R\) and \(S\). Given a band \(B\), we define the _associated \(K\)-algebra_ as \[\rho_{K}(B)\ =\ K[B]\,/\,\langle N_{B}\rangle,\] where \(K[B]\) is the monoid algebra over \(K\) and the elements of the nullset \(N_{B}\) are interpreted as elements of \(K[B]\) (cf. Definition A.1). It comes with a band homomorphism \(\alpha_{B}:B\to\rho_{K}(B)\), which maps \(a\) to \([a]\). The other main ingredient needed for the proof of Theorem 2.1 is the following technical but important result: **Proposition 4.4**.: _Let \(K\) be a field, \(B\) a band and \(R=\rho_{K}(B)\) the associated \(K\)-algebra._ 1. _The homomorphism_ \(\alpha_{B}:B\to\rho_{K}(B)\) _is initial for all homomorphisms from_ \(B\) _to a_ \(K\)_-algebra, i.e., for every_ \(K\)_-algebra_ \(S\) _the natural map_ \[\operatorname{Hom}_{K}(R,S)\ \stackrel{{\alpha_{B}^{*}}}{{ \longrightarrow}}\ \operatorname{Hom}(B,S)\] _is a bijection._ _._ 2. _Assume we are given a valuation_ \(v_{K}:K\to\mathbb{T}\)_, and that_ \(B\) _is finitely generated by_ \(a_{1},\ldots,a_{n}\)_. Let_ \(T_{i}=\alpha_{B}(a_{i})\) _for_ \(i=1,\ldots,n\)_, and let_ \(X=\operatorname{Spec}R\)_. Let_ \(\exp^{n}:\overline{\mathbb{R}}^{n}\to\mathbb{T}^{n}\) _be the coordinate-wise exponential map. Then the_ \(T_{i}\) _generate_ \(R\) _as a_ \(K\)_-algebra, and_ \[\exp^{n}\big{(}\operatorname{Trop}(X)\big{)}\ \subset\ \operatorname{Hom}(B,\mathbb{T})\] _as subsets of_ \(\mathbb{T}^{n}\)_._ Proof.: We begin with (1). The map \(\alpha_{B}^{*}\) is injective since \(R\) is generated by the subset \(\alpha_{B}(B)\), and therefore every homomorphism \(f:R\to S\) is determined by the composition \(f\circ\alpha_{B}:B\to S\). In order to show that \(\alpha_{B}^{*}\) is surjective, consider a band homomorphism \(f:B\to S\), which is, in particular, a multiplicative map. Therefore it extends (uniquely) to a \(K\)-linear homomorphism \(\hat{f}:K[B]\to S\) from the monoid algebra \(K[B]\) to \(S\). For every \(\sum a_{i}\in N_{B}\), we have \(\sum f(a_{i})\in N_{S}\) by the definition of a band homomorphism. By the definition of \(N_{S}\), this means that \(\sum f(a_{i})=0\) in \(S\). Thus \(\hat{f}\) factorizes through a map \(\tilde{f}:R=K[B]/\langle N_{B}\rangle\to S\), and, by construction, we have \(f=\tilde{f}\circ\alpha_{B}=\alpha_{B}^{*}(\tilde{f})\). This establishes (1). We continue with (2). Since \(B\) is generated by \(a_{1},\ldots,a_{n}\) as a pointed monoid and \(\alpha_{B}(B)\) generates \(R\) as a \(K\)-algebra, \(R\) is generated as a \(K\)-algebra by \(T_{1},\ldots,T_{n}\).
In order to verify that \(\exp^{n}(\operatorname{Trop}(X))\subset\operatorname{Hom}(B,\mathbb{T})\), consider a point \((v(T_{1}),\ldots,v(T_{n}))\in\operatorname{Trop}(X)\), where \(v:R\to\overline{\mathbb{R}}\) is a \(K\)-semi-valuation. Post-composing \(v\) with \(\exp\) yields a seminorm \(v^{\prime}:R\to\mathbb{T}\), which is, equivalently, a band homomorphism. Pre-composing \(v^{\prime}\) with \(\alpha_{B}\) yields a band homomorphism \(v^{\prime\prime}:B\to\mathbb{T}\), which is an element of \(\operatorname{Hom}(B,\mathbb{T})\). By construction, \(\exp^{n}(v(T_{1}),\ldots,v(T_{n}))=v^{\prime\prime}\), which establishes the last assertion. **Remark 4.5**.: 1. Under the assumptions of Proposition 4.4.(2), \(\operatorname{Hom}(B,\mathbb{T})\) embeds as a subspace of \(\mathbb{T}^{n}\), which has a well-defined (Lebesgue) covering dimension in the sense of [21, Chapter 3]. As discussed in [17], the subspace topology of \(\operatorname{Hom}(B,\mathbb{T})\subset\mathbb{T}^{n}\) is equal to the compact-open topology for \(\operatorname{Hom}(B,\mathbb{T})\) with respect to the discrete topology for \(B\) and the natural order topology for \(\mathbb{T}\), which shows that the dimension of \(\operatorname{Hom}(B,\mathbb{T})\) does not depend on the embedding into \(\mathbb{T}^{n}\). 2. With the topologies just described, \(\exp^{n}\) defines a continuous injection from \(\operatorname{Trop}(X)\) to \(\operatorname{Hom}(B,\mathbb{T})\) which identifies the former with a closed subspace of the latter. In particular, [21, Prop. 3.1.5] shows that \(\dim\operatorname{Trop}(X)\leqslant\dim\operatorname{Hom}(B,\mathbb{T})\). 3. If in addition to the assumptions of (2), \(N_{B}\) is finitely generated as an ideal of \(B^{+}\), then \(\operatorname{Hom}(B,\mathbb{T})\) is a tropical pre-variety in \(\mathbb{T}^{n}\) and is therefore the underlying set of a finite polyhedral complex. The dimension of \(\operatorname{Hom}(B,\mathbb{T})\) as a polyhedral complex is equal to its covering dimension [21, Theorem 2.7 and Section 3.7]. Proof of Theorem 2.1.: Let \(v:K\to\mathbb{T}\) be a valuation (which we can take to be the trivial valuation if we like). Let \(\alpha_{B}:B\to R\) be the canonical homomorphism to the associated \(K\)-algebra \(R=\rho_{K}(B)\), cf. Proposition 4.4. Let \(a_{1},\ldots,a_{n}\in B\) be a set of generators for \(B\), and for \(i=1,\ldots,n\) let \(T_{i}=\alpha_{B}(a_{i})\). By Proposition 4.4, the \(T_{i}\) generate \(R\) as a \(K\)-algebra, i.e., \(R=K[T_{1},\ldots,T_{n}]/I\) for some ideal \(I\). Let \(X=\operatorname{Spec}R\), so that \(X(K)=\operatorname{Hom}_{K}(R,K)\). Proposition 4.4 yields a commutative diagram where the right-hand vertical map is obtained by composing with \(v:K\to\mathbb{T}\) and the left-hand vertical map is induced by composing the embedding of \(X(K)=\operatorname{Hom}_{K}(R,K)\) into \(K^{n}\) via \(\phi\mapsto(\phi(T_{i}))_{i=1}^{n}\) with the coordinate-wise absolute value \(v_{K}^{n}:K^{n}\to\mathbb{T}^{n}\). By the Bieri-Groves theorem (Theorem 4.1), the dimension of the affine variety \(X\) is equal to the dimension of \(\operatorname{Trop}(X)\), as defined in Remark 4.5. Using Proposition 4.4(2) and Remark 4.5(2), we conclude that \[\dim\big{(}X\big{)}\ =\ \dim\big{(}\operatorname{Trop}(X)\big{)}\ \leqslant\ \dim\big{(} \operatorname{Hom}(B,\mathbb{T})\big{)},\] as desired. Proof of Theorem 2.3.: Suppose \(f:B_{1}\to B_{2}\) is a band homomorphism. Choose generators \(x_{1},\ldots,x_{m}\) for \(B_{1}\). 
Completing \(f(x_{1}),\ldots,f(x_{m})\) to a set of generators for \(B_{2}\) if necessary, we find a generating set \(y_{1},\ldots,y_{n}\) for \(B_{2}\) with \(m\leqslant n\) such that \(f(x_{i})=y_{i}\) for \(i=1,\ldots,m\). Setting \(X=\operatorname{Spec}(\rho_{K}(B_{1}))\) and \(Y=\operatorname{Spec}(\rho_{K}(B_{2}))\), and letting \(\operatorname{Trop}(X)\) (resp. \(\operatorname{Trop}(Y)\)) be the tropicalization of \(X\) with respect to \(\alpha_{B_{1}}(x_{1}),\ldots,\alpha_{B_{1}}(x_{m})\) (resp. \(\alpha_{B_{2}}(y_{1}),\ldots,\alpha_{B_{2}}(y_{n})\)), we obtain a commutative diagram. Since \(\operatorname{Trop}(Y)\) is a closed subspace of \(\operatorname{Hom}(B_{2},\mathbb{T})\) (resp. \(\operatorname{Trop}(X)\) is a closed subspace of \(\operatorname{Hom}(B_{1},\mathbb{T})\)), it suffices to prove that if \(x\in X(K)\) and \(x^{\prime}=\operatorname{Trop}(x)\in\operatorname{Trop}(X)\), then \(\dim f_{K}^{-1}(x)\leqslant\dim f_{\mathbb{T}}^{-1}(x^{\prime})\). To see this, write \(f_{K}^{-1}(x)=Z(K)\) with \(Z\) an affine subscheme of \(Y\). If we pull back the functions \(\alpha_{B_{2}}(y_{1}),\ldots,\alpha_{B_{2}}(y_{n})\) to a set of generators for the affine coordinate ring of \(Z\), we obtain a commutative diagram. Applying the Bieri-Groves theorem to \(Z\), we find that the image of \(Z(K)\) under \(\operatorname{Trop}\) has dimension equal to \(\dim f_{K}^{-1}(x)\). In addition, the natural map \(\operatorname{Trop}(Z)\to\operatorname{Trop}(Y)\) identifies \(\operatorname{Trop}(Z)\) with a closed subspace of \(\operatorname{Trop}(Y)\), since \(\operatorname{Trop}(Z)\) (resp. \(\operatorname{Trop}(Y)\)) is the topological closure of \(Z(K)\) (resp. \(Y(K)\)) in \(\mathbb{T}^{n}\) (cf. [20, Proposition 2.2]). By construction, \(\operatorname{Trop}(Z)\) is in fact a closed subspace of \(f_{\mathbb{T}}^{-1}(x^{\prime})\). This means that \(\dim f_{K}^{-1}(x)=\dim\operatorname{Trop}(Z)\leqslant\dim f_{\mathbb{T}}^{- 1}(x^{\prime})\) as desired. ## Appendix A Pastures and Bands More details pertaining to the following overview of bands and pastures can be found in [2]. In this text, a _pointed monoid_ is a (multiplicatively written) commutative semigroup \(A\) with identity \(1\), together with a distinguished element \(0\) that satisfies \(0\cdot a=0\) for all \(a\in A\). The _ambient semiring of \(A\)_ is the semiring \(A^{+}=\mathbb{N}[A]/\langle 0\rangle\), which consists of all finite formal sums \(\sum a_{i}\) of nonzero elements \(a_{i}\in A\). Note that \(A\) is embedded as a submonoid in \(A^{+}\), where \(0\) is identified with the empty sum. An _ideal of \(A^{+}\)_ is a subset \(I\) that contains \(0\) and is closed under both addition and multiplication by elements of \(A^{+}\). **Definition A.1**.: A _band_ is a pointed monoid \(B\) together with an ideal \(N_{B}\) of \(B^{+}\) (called the _nullset_) such that for every \(a\in B\), there is a unique \(b\in B\) with \(a+b\in N_{B}\). We call this \(b\) the _additive inverse of \(a\)_, and we denote it by \(-a\). A _band homomorphism_ is a multiplicative map \(f:B\to C\) preserving \(0\) and \(1\) such that \(\sum a_{i}\in N_{B}\) implies \(\sum f(a_{i})\in N_{C}\). This defines the category Bands. For a subset \(S\) of \(B^{+}\), we denote by \(\langle\!\langle S\rangle\!\rangle\) the smallest ideal of \(B^{+}\) that contains \(S\) and is closed under the _fusion axiom_ (cf. [6]) 1.
if \(c+\sum a_{i}\) and \(-c+\sum b_{j}\) are in \(\langle\!\langle S\rangle\!\rangle\), then \(\sum a_{i}+\sum b_{j}\) is in \(\langle\!\langle S\rangle\!\rangle\). **Definition A.2**.: A band \(B\) is _finitely generated_ if it is finitely generated as a monoid. It is a _finitely presented fusion band_, which we abbreviate by simply saying that \(B\) is _finitely presented_, if it is finitely generated and \(N_{B}=\langle\!\langle S\rangle\!\rangle\) for a finite subset \(S\) of \(N_{B}\). The _unit group of \(B\)_ is the submonoid \(B^{\times}=\{a\in B\mid ab=1\) for some \(b\in B\}\) of \(B\), which is indeed a group. **Definition A.3**.: A _pasture_ is a band \(P\) with \(P^{\times}=P-\{0\}\) and \[N_{P}\ =\ \big{\langle}\!\big{\langle}a+b+c\in P^{+}\ \big{|}\ a+b+c\in N_{P}\big{\rangle}\!\big{\rangle}.\] **Example A.4**.: Every ring \(R\) is a band, with nullset \(N_{R}=\{\sum a_{i}\mid\sum a_{i}=0\text{ in }R\}\). In fact, this defines a fully faithful embedding Rings \(\to\) Bands. Every field is a pasture. The following examples of interest are bands which are not rings (we write \(a-b\) for \(a+(-b)\)): * The _regular partial field_ is the pasture \(\mathbb{F}_{1}^{\pm}=\{0,1,-1\}\) with nullset \[N_{\mathbb{F}_{1}^{\pm}}\ =\ \big{\{}n.1+n.(-1)\,\big{|}\,n\geqslant 0\big{\}}\ =\ \langle\!\langle 1-1\rangle\!\rangle.\] * The _Krasner hyperfield_ is the pasture \(\mathbb{K}=\{0,1\}\) with nullset \[N_{\mathbb{K}}\ =\ \mathbb{N}-\{1\}\ =\ \langle\!\langle 1+1,\ 1+1+1\rangle\!\rangle.\] * The _tropical hyperfield_ is the pasture \(\mathbb{T}=\mathbb{R}_{\geqslant 0}\) with nullset \[N_{\mathbb{T}}\ =\ \{0\}\ \cup\ \big{\{}\sum a_{i}\ \big{|}\ \text{the maximum of }a_{1},\ldots,a_{n}\text{ is attained at least twice}\big{\}}.\] Examples of band homomorphisms are the inclusion \(\mathbb{K}\hookrightarrow\mathbb{T}\) and the surjection \(\mathbb{T}\to\mathbb{K}\) that sends every nonzero element to \(1\). A band homomorphism \(R\to\mathbb{T}\) from a ring \(R\) into \(\mathbb{T}\) is the same thing as a non-archimedean seminorm. In particular, the trivial absolute value on a field \(K\) is the unique band homomorphism \(K\to\mathbb{T}\) that factors through \(\mathbb{K}\). The pasture \(\mathbb{F}_{1}^{\pm}\) is an initial object in Bands, i.e., every band \(B\) comes with a unique homomorphism \(\mathbb{F}_{1}^{\pm}\to B\). This leads to a description \(B=\mathbb{F}_{1}^{\pm}[T_{i}\ |\ i\in I]/\!\!/\langle S\rangle\) of \(B\) in terms of generators \(\{T_{i}\ |\ i\in I\}\) and relations \(S\subset B^{+}\), in the sense that \(\{T_{i}\}\cup\{0,-1\}\) generates \(B\) as a monoid, \(S\) generates the ideal \(N_{B}\), and \(S\) contains a complete set of binary relations between the signed products \(x=\pm T_{i_{1}}\cdots T_{i_{r}}\) of the \(T_{i}\), i.e., if \(x-y\in S\) then \(x=y\) as elements of \(B\). Similarly, we write \(P=\mathbb{F}_{1}^{\pm}(T_{i}\ |\ i\in I)/\!\!/\langle\!\langle S\rangle\!\rangle\) for a pasture \(P\) if \(P^{\times}\) is generated as a group by \(\{T_{i}\ |\ i\in I\}\) and \(-1\), if \(N_{P}=\langle\!\langle S\rangle\!\rangle\), and if \(S\) contains a complete set of binary relations between the signed products of the \(T_{i}\). For example, \[\mathbb{K}\ =\ \mathbb{F}_{1}^{\pm}/\!\!/\langle\!\langle 1+1,\ 1+1+1\rangle\!
\rangle,\quad\text{and}\quad\mathbb{F}_{5}\ =\ \mathbb{F}_{1}^{\pm}(T)/\!\!/\langle\!\langle T^{2}+1,\ T-1-1\rangle\!\rangle.\] ## Appendix B Valuated matroids and subdivisions of the basis polytope In this section, we show that a matroid is rigid if and only if it has a unique rescaling class over \(\mathbb{T}\). We begin with some observations and recall some results from the literature. For a pasture \(F\), we can identify the isomorphism class of a (weak) Grassmann-Plucker function \(\Delta\) with the corresponding _Plucker vector_\((\Delta(I))_{I\in\binom{E}{r}}\in\mathbb{P}^{\binom{E}{r}}(F)\). We call this Plucker vector a _representation_ of \(M\), and by abuse of terminology we use the terms "Grassmann-Plucker function" and "Plucker vector" interchangeably. Every matroid \(M\) can be (uniquely) represented over \(\mathbb{K}\) by the Grassmann-Plucker function \(\Delta_{M}:\binom{E}{r}\to\mathbb{K}\) which sends an \(r\)-subset \(I\) of \(E\) to \(1\) if it is a basis of \(M\) and to \(0\) otherwise. Post-composing \(\Delta_{M}\) with the inclusion \(\mathbb{K}\hookrightarrow\mathbb{T}\) defines the _trivial representation of \(M\)_, which shows that \(M\) has at least one rescaling class over \(\mathbb{T}\). Recall that the _basis polytope_\(P_{M}\) of \(M\) is the convex hull of the points \(e_{I}=\sum_{i\in I}e_{i}\in\mathbb{R}^{n}\) for which \(I\) is a basis of \(M\). Let \(\Delta:\binom{E}{r}\to\mathbb{T}\) be a Plucker vector for \(M\), i.e., \(\operatorname{supp}\left(\Delta\right)=\operatorname{supp}\left(\Delta_{M}\right)\). Let \(\mathcal{S}_{\Delta}=\{e_{I}\ |\ \Delta(I)\neq 0\}\) be the support of \(\Delta\), considered as a subset of \(\mathbb{R}^{n}\). Post-composing with log yields a function \(\widehat{\Delta}:\mathcal{S}_{\Delta}\to\mathbb{R}\) whose graph \(\Gamma\) is a subset of \(\mathbb{R}^{n}\times\mathbb{R}\). The convex closure of \(\Gamma\) has a unique coarsest structure as a polyhedral complex. The lower faces of this polyhedral complex are those faces for which the last coordinate of the outward normal vector is negative. Omitting this last coordinate projects these faces onto \(P_{M}\) and defines a polyhedral subdivision of \(P_{M}\) called the _regular subdivision associated to \(\Delta\)_, see e.g. [18, Definition 2.3.8]. By a theorem of Speyer (cf. [22, Prop. 2.2]), this subdivision of \(P_{M}\) is a _matroid subdivision_, i.e., all faces of the subdivision are themselves matroid polytopes, and conversely every regular matroid subdivision of \(P_{M}\) comes from a \(\mathbb{T}\)-representation of \(M\) (see also [18, Lemma 4.4.6] and [14, Thm. 10.35]). **Proposition B.1**.: _A matroid \(M\) is rigid if and only if \(M\) has a unique rescaling class over \(\mathbb{T}\)._ Proof.: Let \(r\) be the rank and \(E=\{1,\ldots,n\}\) the ground set of \(M\). Let \(\Delta:\binom{E}{r}\to\mathbb{T}\) be a tropical Plucker vector for \(M\), and let \(\mathcal{S}_{\Delta}\) be as above. By definition, \(M\) is rigid if and only if \(P_{M}\) admits only the trivial regular matroid subdivision. Since none of the points of \(\mathcal{S}_{\Delta}\) lies in the convex closure of the other points, \(\Delta:\binom{E}{r}\to\mathbb{T}\) induces the trivial matroid subdivision if and only if the subset \(\left\{(e_{I},\widehat{\Delta}(I))\mid I\in\mathcal{S}_{\Delta}\right\}\) of \(\mathbb{R}^{n}\times\mathbb{R}\) is contained in an affine hyperplane \(H\).
In this case, let \(x_{i}e_{i}\) be the unique intersection point of \(H\) with the coordinate axis generated by \(e_{i}\) (in the case of a loop \(i\) of \(M\) there is no such intersection point, and we can formally put \(x_{i}=+\infty\)). Then \(\widehat{\Delta}(I)=\sum_{k=1}^{r}x_{i_{k}}\) for \(I=\{i_{1},\ldots,i_{r}\}\in\mathcal{S}_{\Delta}\). Rescaling \(\Delta\) by \(t=(\exp(-x_{i})\mid i=1,\ldots,n)\) yields a Plucker vector \(\Delta_{0}=t.\Delta:\binom{E}{r}\to\mathbb{T}\) for which \[\widehat{\Delta}_{0}(I)\ =\ \widehat{\Delta}(I)-\sum_{k=1}^{r}x_{i_{k}}\ =\ 0\] for every \(I\in\mathcal{S}_{\Delta}\). Thus \(\Delta_{0}\) is the trivial representation of \(M\). Conversely, rescaling \(\Delta_{0}\) yields a Plucker vector \(\Delta\) for which \(\left\{(e_{I},\widehat{\Delta}(I))\mid I\in\mathcal{S}_{\Delta}\right\}\) is contained in an affine hyperplane, which concludes the proof. **Remark B.2**.: The (local) _Dressian_ of a matroid \(M\) (cf. [19]) is a polyhedral complex \(\Delta_{M}\) whose underlying set consists of all \(\mathbb{T}\)-representations of \(M\); the polyhedral structure is defined by the 3-term tropical Plucker relations. One can show using [19, Cor. 18] that the lineality space of \(\Delta_{M}\) is precisely the set of valuations on \(M\) which are projectively equivalent to the trivial valuation. The topological space \(\operatorname{Hom}(F_{M},\mathbb{T})\) considered in the body of this paper can then be naturally identified with \(\Delta_{M}\) modulo its lineality space, which we call the _reduced Dressian_\(\overline{\Delta}_{M}\). (We omit the details, as it would take us too far afield into a somewhat lengthy discussion of various topologies and polyhedral structures.) See [9, Section 3] for an algorithm for computing the Dressian and/or reduced Dressian of a matroid \(M\), and also (in Section 5) some interesting counterexamples to plausible-sounding assertions.
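As a computational companion to Proposition B.1, the following sketch (our own illustration, again for \(U_{2,4}\) and in additive notation) tests whether the lifted points \((e_{I},\widehat{\Delta}(I))\) lie on an affine hyperplane, i.e., whether a given tropical Plucker vector is rescaling equivalent to the trivial one; the variable names are ours.

```python
import numpy as np
from itertools import combinations

bases = list(combinations(range(4), 2))       # the six bases of U_{2,4}
# Row [indicator vector of e_I, 1]: the lifted points lie on an affine
# hyperplane iff hat_Delta(I) = c . e_I + d has an exact solution (c, d).
A = np.array([[float(k in B) for k in range(4)] + [1.0] for B in bases])

def rescaling_trivial(hat_delta, tol=1e-9):
    vals = np.asarray(hat_delta, dtype=float)
    sol, *_ = np.linalg.lstsq(A, vals, rcond=None)
    return float(np.linalg.norm(A @ sol - vals)) < tol

print(rescaling_trivial([0, 0, 0, 0, 0, 0]))   # True: the trivial valuation
print(rescaling_trivial([1, 0, 0, 0, 0, 0]))   # False: a non-trivial rescaling class
```

The second call returns False, exhibiting a valuation that induces a nontrivial regular matroid subdivision of the octahedron and thereby witnessing, once more, that \(U_{2,4}\) is not rigid.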
2308.06300
Classification of All Blood Cell Images using ML and DL Models
Human blood primarily comprises plasma, red blood cells, white blood cells, and platelets. It plays a vital role in transporting nutrients to different organs, where it stores essential health-related data about the human body. Blood cells are utilized to defend the body against diverse infections, including fungi, viruses, and bacteria. Hence, blood analysis can help physicians assess an individual's physiological condition. Blood cells have been sub-classified into eight groups: Neutrophils, eosinophils, basophils, lymphocytes, monocytes, immature granulocytes (promyelocytes, myelocytes, and metamyelocytes), erythroblasts, and platelets or thrombocytes on the basis of their nucleus, shape, and cytoplasm. Traditionally, pathologists and hematologists in laboratories have examined these blood cells using a microscope before manually classifying them. The manual approach is slower and more prone to human error. Therefore, it is essential to automate this process. In our paper, transfer learning with the CNN pre-trained models VGG16, VGG19, ResNet-50, ResNet-101, ResNet-152, InceptionV3, MobileNetV2, and DenseNet-201 is applied to the PBC dataset's normal DIB. The overall accuracy achieved with these models lies between 91.375% and 94.72%. Hence, inspired by these pre-trained architectures, a model has been proposed to automatically classify the ten types of blood cells with increased accuracy. A novel CNN-based framework has been presented to improve accuracy. The proposed CNN model has been tested on the PBC dataset normal DIB. The outcomes of the experiments demonstrate that our CNN-based framework designed for blood cell classification attains an accuracy of 99.91% on the PBC dataset. Our proposed convolutional neural network model performs competitively when compared to earlier results reported in the literature.
Rabia Asghar, Sanjay Kumar, Paul Hynds, Abeera Mahfooz
2023-08-11T07:57:12Z
http://arxiv.org/abs/2308.06300v3
# Automatic Classification of Blood Cell Images using Convolutional Neural Network ###### Abstract Human blood is primarily composed of plasma, red blood cells, white blood cells, and platelets. It plays a vital role in transporting nutrients to different organs, where it stores essential health-related data about the human body. Blood cells are utilized to defend the body against diverse infections, including fungi, viruses, and bacteria. Hence, the analysis of blood can help physicians in assessing an individual's physiological condition. Blood cells have been sub-classified into eight groups: Neutrophils, eosinophils, basophils, lymphocytes, monocytes, immature granulocytes (promyelocytes, myelocytes, and metamyelocytes), erythroblasts, and platelets or thrombocytes on the basis of their nucleus, shape, and cytoplasm. Traditionally, pathologists and hematologists in laboratories have examined these blood cells using a microscope before manually classifying them. The manual approach is slower and more prone to human error. Therefore, it is essential to automate this process. In our paper, transfer learning with the CNN pre-trained models VGG16, VGG19, ResNet-50, ResNet-101, ResNet-152, InceptionV3, MobileNetV2, and DenseNet-201 is applied to the PBC dataset's normal DIB. The overall accuracy achieved with these models lies between 91.375% and 94.72%. Hence, inspired by these pre-trained architectures, a model has been proposed to automatically classify the ten types of blood cells with increased accuracy. A novel CNN-based framework has been presented to improve accuracy. The proposed CNN model has been tested on the PBC dataset normal DIB. The outcomes of the experiments demonstrate that our CNN-based framework designed for blood cell classification attains an accuracy of 99.91% on the PBC dataset. Our proposed convolutional neural network model performs competitively when compared to earlier results reported in the literature. Blood Cell Subtypes, Machine Learning, Classification, Feature Extraction, Pre-trained Models, Deep Learning, Image Analyses, Autoimmune Diseases. ## I Introduction Blood is a specialized circulating connective tissue in fluid form that takes oxygen from the lungs and transports it to all cells of the human body, which require it for metabolism. Blood also carries hormones and transports waste materials, which are eventually eliminated by organs such as the liver, kidneys, or intestine. Blood is composed of plasma, which forms the liquid portion, and cellular components. The cellular components include white blood cells (WBCs), comprising about 1% of blood volume and responsible for immunity. Red blood cells (RBCs) comprise 40-50% of the total blood volume, carrying oxygen and carbon dioxide, whereas platelets have the crucial role of promoting blood clotting [1, 2]. WBCs can be divided into two general groups based on the presence of granules: granulocytes and agranulocytes (non-granulocytes). Lymphocytes and monocytes fall under agranulocytes, while neutrophils, eosinophils, and basophils are considered to be granulocytes. Undeveloped WBCs called immature granulocytes (IG) are released from the bone marrow into the blood. The presence of immature granulocytes (promyelocytes, myelocytes, and metamyelocytes) in the blood signifies an early reaction to an infection, inflammation, or some other bone marrow problem such as leukemia, except in the blood of newly born children or pregnant women.
White blood cells act as defenders against infections, breaking down foreign proteins present in bacteria, viruses, and fungi. WBCs fight infections and diseases by recognizing, identifying, and attaching themselves to these foreign antigens [3]. Red blood cells, also known as erythrocytes, help tissues produce energy by delivering appropriate oxygen. When energy is produced, waste in the form of carbon dioxide is also formed. RBCs are responsible for providing that carbon dioxide to the lungs so that it is exhaled. Erythroblasts are immature RBCs that are usually present in the blood of newborns during the first 0-4 months of life. Their presence in human blood after the neonatal period (0-4 months) indicates serious problems such as bone marrow damage, stress, or tumors, either malignant (which may lead to cancer) or benign (which grow in size but do not invade other body parts). Platelets are also known as thrombocytes and are essential for the immune system; their primary responsibility is to stop bleeding. If bleeding starts from an injury or blood vessel damage somewhere in the body, chemical signals released at the injured site alert the platelets. The platelets flow to the wounded area, cluster together, and form a clot, sealing the blood vessel to stop the bleeding. They also play an important role in tissue repair and remodeling to prevent tumor progression and leakage of vesicular fluids. They comprise a tiny proportion, i.e., less than 1% of blood volume. Typically, the common rates of neutrophils in the blood are 50-60%. Eosinophils comprise 1-3%, basophils 0-1%, lymphocytes 25-33%, and monocytes 3-10% of the leukocytes floating in the blood [4]. The classification of blood cells is a current research area for scientists trying to diagnose diseases that affect blood cells. Blood cell classification using microscopic images of blood has traditionally been done manually by medical professionals with the necessary experience and training. Blood is analyzed in two different ways. The first method is a complete blood count (CBC) test that calculates the total percentage of RBCs, WBCs, and platelets; the second is the peripheral blood smears (PBS) test. These results represent the patient's overall health. Microscopic blood images can accurately reveal the types of RBCs, WBCs, and platelets, enabling early disease diagnosis. Every type of cell present in human blood serves a distinct purpose. A change in the count of any blood cell type can indicate an illness or disease. A low count of WBCs can be associated with various illnesses, including blood cancer. A lower count of healthy red blood cells leads to anemia [5], and a lower ratio of platelets leads to excessive bleeding. Conventional blood cell type detection methods take a long time and have low accuracy, which highlights the significance of accurate systems for the rapid and precise analysis of blood cells [6]. Blood cells in microscopic images of blood smears have been categorized using conventional machine learning (ML) techniques like support vector machine, decision tree, k-nearest neighbor, naive Bayes, and artificial neural network [6, 7]. The general process for traditional ML approaches includes pre-processing of blood smear images, segmentation to divide the cells, feature extraction, feature selection to remove undesired data, and classification. Despite many promising results, feature extraction and selection significantly affect how well classical ML algorithms perform in classification.
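As an illustration of this traditional pipeline, the following Python sketch (our own, not taken from any of the papers cited here) extracts GLCM texture features from pre-segmented cell patches and trains a k-nearest-neighbor classifier; the random arrays stand in for real, labeled cell crops, and the function names assume scikit-image >= 0.19 and scikit-learn.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

def glcm_features(gray):
    # gray: 2-D uint8 array (a pre-segmented blood cell patch)
    glcm = graycomatrix(gray, distances=[1], angles=[0, np.pi / 2], levels=256,
                        symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# `patches` and `labels` are placeholders for real grayscale crops and class ids.
patches = [np.random.randint(0, 256, (64, 64), dtype=np.uint8) for _ in range(100)]
labels = np.random.randint(0, 8, 100)

X = np.array([glcm_features(p) for p in patches])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2, random_state=0)
clf = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```

The sensitivity of such pipelines to these hand-crafted feature choices is discussed next.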
Choosing the optimal features and finding an appropriate feature extraction algorithm has become complex and time-consuming [7]. Several deep learning (DL) techniques based on convolutional neural networks (CNNs) have recently been proposed to tackle this challenging task. Recent developments in deep learning allow us to estimate the type of blood cells from microscopic images. In contrast to conventional ML approaches, DL-based approaches are capable of autonomous feature extraction and selection. Prior research revealed that, for classifying blood cells, CNNs performed better than traditional ML techniques [8]. The aim of this research is to develop a model for the classification of various blood cells using machine learning methods. For this purpose, we first used transfer learning to evaluate the performance of the CNN pre-trained models VGG16, VGG19, ResNet-50, ResNet-101, ResNet-152, InceptionV3, MobileNetV2, and DenseNet-201 on the microscopic images provided by the PBC dataset. We then introduce a CNN model for the classification of the ten major blood subtypes. Our work aims to develop a convolutional neural network (CNN) based model with decent generalization ability for the classification of various types of blood cells. The paper is organized as follows. Section 2 introduces the related work. In Section 3, the blood dataset is presented. Section 4 explains the transfer learning approach. Moving on to Section 5, the proposed methodology is detailed. Results and their analysis are reported in Section 6, followed by a discussion. The last section draws the conclusions.

## II Related Work
Elhassan et al. proposed a two-step deep learning model to categorize atypical lymphocytes and immature WBCs [9]. The problem of an unbalanced distribution of WBCs in blood samples was addressed using a new method known as the "GT-DCAE WBC augmentation model," a hybrid model based on geometric transformation (GT) and a deep convolutional autoencoder (DCAE). A hybrid multi-classification model known as the "two-stage DCAE-CNN atypical WBC classification model" was created to divide atypical WBCs into eight categories. The model's average accuracy, sensitivity, and precision were 97%, 97%, and 98%, respectively. Ahmad et al. presented an improved hybrid method for optimum deep feature extraction using DenseNet201 and Darknet53 [10]. The dominant characteristics were then chosen using an entropy-controlled marine predator algorithm (ECMPA). A public dataset of 5000 images of five distinct subtypes of WBCs was used. The system obtained an overall average accuracy of 99.9% while reducing the size of the feature vector by more than 95%. Singh et al. [11] suggested white blood cell classification using a CNN and multiple optimizers, such as SGD, Adadelta, and Adam, with a batch size of 32 and 10 epochs. The best outcomes were obtained with the Adam optimizer, which yielded 97% accuracy, 99% recall, and an F1 score of 98%. Darrin et al. presented a video analysis method to automatically classify red blood cells from videos in order to monitor the status of sickle cell anemia patients [12]. The videos consisted of 6-100 frames. A convolutional neural network (CNN) model and a recurrent CNN were combined, and an accuracy of 97% with an F1-score of 0.94 was achieved.
To solve this problem of blood cell classification, Rabul and Salam suggested Otsu's thresholding with Gray Level Co-occurrence Matrix (GLCM) features [13]. The R, G, and B channels of the original RGB images from the Kaggle BCCD public dataset were separated to conduct image subtraction between the blue-red and blue-green channels. Then, WBCs were extracted from the B-G channels using Otsu's thresholding and morphological filtering, and features were decorrelated using an ANOVA test and a zero-phase component analysis (ZCA) whitening procedure. KNN classification produced an accuracy of 94.25%. Milkisa et al. proposed a training strategy for neural networks to highlight the malaria-infected red blood cell pixels using the NIH malaria dataset [14]. Masked images were used to highlight the diseased area by dividing the image into R, G, and B channels, after which the intensity of the red channel was increased. The proposed approach achieved an accuracy of 97.2%. A method for classifying Multiple Myeloma (MM) and Acute Lymphoblastic Leukemia (ALL) using the SN-AM dataset was suggested by Deepika et al. [15]. With a minimal number of parameters and low computation time, the model was trained using an optimized Dense Convolutional Neural Network framework capable of identifying the type of cancer present in cells with a precision of 97.2%. Ansari et al. attempted to create a deep learning model with a customized architecture for identifying acute leukemia using images of lymphocytes and monocytes; a new dataset containing images of acute lymphoblastic leukemia (ALL) and acute myeloid leukemia (AML) was developed, and a Generative Adversarial Network (GAN) increased the dataset's size [16]. Six convolution layers, four dense layers, and a SoftMax activation function were part of the proposed CNN model, based on the Tversky loss function, for categorizing acute leukemia images. The proposed model had a 99.5% accuracy rate for identifying the various kinds of acute leukemia. To identify various blood cells in microscopic blood images, Alkafrawi and Ismail suggested an AlexNet-based classification model [17]. Using convolutional neural networks, the tests were carried out on a dataset of 17,000 blood smear samples obtained from the Hospital Clinic of Barcelona. Five convolutional layers, three max pooling layers, and three fully connected layers make up the AlexNet model. The AlexNet-based model had a minimal quadratic loss of 0.0049 and a high accuracy of 95.08%. Lee et al. created a new CNN-based blood cell detection and counting architecture [18]. VGG-16 was used to generate feature maps, which were then enhanced by feature fusion and a convolutional block attention mechanism (CBAM). The experiments on detecting RBCs, WBCs, and platelets were conducted using the BCCD dataset with two confidence levels, 0.9 and 0.8. CBAM, which enlarges input images 1.5 times and uses images in RGB and grayscale color spaces, achieved the best recalls for RBC detection, 82.3% and 86.7% under the two confidence levels, while achieving precisions of 74.7% and 70.1%. A region-of-interest approach, which performs image preprocessing and uses RGB and grayscale images, outperforms other models for WBC detection, with precision and recall of 76.1% and 95%, respectively. Kareem et al. classified blood cells using two distinct scenarios, the first using a CNN directly and the second using an SVM [19]. A data collection containing 10,295 cell images was used.
The CNN obtained an accuracy of 98.4%, while the SVM achieved an accuracy of 90.6%. Miserlis et al. proposed an AI-based diagnosis for Peripheral Arterial Disease (PAD) and created 11 different ANN models [20]. DenseNet201, ResNet50v2, EfficientNetB0, and EfficientNetB7 achieved 97.22% precision, and training and testing took between 2 and 8 seconds. The EfficientNetB0 and ResNet50v2 networks exhibited the best accuracy and execution speed. Arif et al. proposed a framework for automatic leukemia detection based on deep learning [21]. The framework comprises several layers, including convolutional layers, batch normalization, leaky ReLU, and max pooling layers, and a CNN model called AlexNet was used to identify leukemia. The proposed framework accurately classified the images as either normal or leukemia-affected, achieving 98.05% accuracy, 97.59% specificity, 100% recall, and a 99.06% F1 score. Meena Devi and Neel Ambary [22] tested different CNN architectures to detect and classify WBCs. AlexNet, VGG16, GoogleNet, and ResNet50 were trained, tested, and analyzed at the convolution layer. The VGG16 architecture trained with transfer learning outperformed the others with an accuracy of 97.16%, detecting monocytes at 98.40%, basophils at 98.48%, lymphocytes at 99.52%, eosinophils at 96.5%, and neutrophils at 95.05%. Relevant data samples must be generated to address the challenge of imbalanced data, incomplete data samples, and missing labels. Pandya et al. [23] described generating data samples using a deep convolutional generative adversarial network (DCGAN); testing shows that the model generates WBC blood cell images with 99.44% accuracy. Alnawayseh et al. suggested differential counting of white blood cells (WBCs) to assess the immune system state of a patient [24]. Raw pixels from the data collection were used as input, extraneous pixels were removed, and a CNN model consisting of a convolutional layer, a downsizing pooling layer, and fully connected hidden layers was trained; weights are assigned automatically based on loss and accuracy. You Only Look Once version 5 (YOLOv5) was used by Luong et al. to suggest a method for classifying and counting white blood cells for the diagnosis of blood-related diseases [25]. The blood cells were carefully labeled with 619 leukemia cells, 115 neutrophils, 80 lymphocytes, 23 eosinophils, and 73 monocytes and were accurately detected and classified with 93% accuracy by the YOLOv5 algorithm. To automatically extract RBC features, Z. Liao and Y. Zhang suggested a convolutional neural network for ultrasonic RF signals [26]. RBCA-VGG10, a network model with removed layers and a modified VGG16 structure, beat the LeNet, AlexNet, GoogleNet, and ResNet models in terms of accuracy by 10.15%, 9.30%, 4.40%, and 8.34%, respectively. In [27], the authors described a blood cell image classification algorithm based on EfficientNet that used EfficientNet-B7 as the classification model and Contrast Limited Adaptive Histogram Equalization (CLAHE) to enhance image quality during data preprocessing. The algorithm had a 99.6% accuracy rate. LeukoX is a technique developed to identify and categorize WBCs based on physical characteristics [28]. Individual class results were fed to the proposed Least Entropy Combiner (LEC) network, which combined the individual classifier results. With an accuracy of 96.67%, the model outperformed the individual networks, with kappa and Matthew's correlation coefficient (MCC) values of 0.9334 and 0.9550, respectively.
According to the literature, the classification of blood cells is widely discussed. A large amount of research has focused on image classification and segmentation, although only a few researchers have chosen to use manually crafted features for classification purposes. The earlier methods of classifying blood cells involved several steps, such as preprocessing, selecting relevant features, and extracting meaningful information. In recent times, there has been a growing trend toward using convolutional neural networks (CNNs) to enhance the performance of classifying various types of blood cells.

## III Blood Cell Dataset

### PBC dataset normal DIB
The PBC dataset [29] includes 17,092 images of different normal cells, shown in Table 1, that were collected using the CellaVision DM96 analyzer in the Core Laboratory of the Hospital Clinic of Barcelona. The dataset is divided into the following eight categories: neutrophils, eosinophils, basophils, lymphocytes, monocytes, immature granulocytes (promyelocytes, myelocytes, and metamyelocytes), erythroblasts, and platelets or thrombocytes, shown in Figure 1. The images were labeled by experienced clinical pathologists and have a size of 360 x 363 pixels in JPG format. The individuals from whom the blood was collected were free of infections and hematologic or oncologic diseases and were not undergoing any pharmaceutical treatment. The different types of typical peripheral blood cells may be recognized using this high-quality labeled dataset, which can be used to train and evaluate deep learning and machine learning models.

\begin{table}
\begin{tabular}{l c}
\hline
Type of Cell & Total images by type \\
\hline
Neutrophils & 3329 \\
Eosinophils & 3117 \\
Basophils & 1218 \\
Lymphocytes & 1214 \\
Monocytes & 1420 \\
Immature granulocytes (Metamyelocytes, Myelocytes and Promyelocytes) & 2895 \\
Erythroblasts & 1551 \\
Platelets (Thrombocytes) & 2348 \\
Total & 17,092 \\
\hline
\end{tabular}
\end{table}
Table 1: Each group's cell types and numbers.

Figure 1: The dataset includes images of normal peripheral blood cells organized into eight groups. These groups cover cells commonly seen in infections and regenerative anemias.

### Components
The proposed work in Python utilizes various tools such as TensorFlow, Pandas, NumPy, Matplotlib, and Scikit-learn. The entire project is conducted using Google Colaboratory.

## IV Transfer Learning
Transfer learning is a valuable machine learning technique that enables us to utilize an existing model, initially created for one task, to solve a different task. This method brings several advantages by saving the time and resources that would otherwise be needed to construct neural network models from the ground up. It is especially useful in the fields of computer vision and natural language processing, where it enhances the accuracy and performance of the resulting model. In this study, transfer learning has been employed by utilizing well-known CNN models, including VGG16, VGG19, ResNet-50, ResNet-101, ResNet-152, InceptionV3, MobileNetV2, and DenseNet-201. This approach enables leveraging the pre-trained weights and architectures of these models, leading to substantial reductions in training time while enhancing the accuracy of the model.
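To make this setup concrete, the following is a minimal sketch, not the authors' released code, of how one such transfer-learning baseline can be assembled with TensorFlow/Keras, the toolchain listed in Section III. The directory layout (`pbc_dataset/<class_name>/*.jpg`), the 224 x 224 input resolution, and all variable names are illustrative assumptions; the 150 epochs and 0.001 learning rate follow the training configuration reported in Section VI.

```python
import tensorflow as tf

# Illustrative assumption: images organized as pbc_dataset/<class_name>/*.jpg.
DATA_DIR, IMG_SIZE, BATCH = "pbc_dataset", (224, 224), 32

train_ds = tf.keras.utils.image_dataset_from_directory(
    DATA_DIR, validation_split=0.2, subset="training", seed=42,
    image_size=IMG_SIZE, batch_size=BATCH)
val_ds = tf.keras.utils.image_dataset_from_directory(
    DATA_DIR, validation_split=0.2, subset="validation", seed=42,
    image_size=IMG_SIZE, batch_size=BATCH)
num_classes = len(train_ds.class_names)

# Frozen VGG16 convolutional base with ImageNet weights; only the new
# dense head is trained, matching the "last dense layers" setup.
base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=IMG_SIZE + (3,))
base.trainable = False

inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
x = tf.keras.applications.vgg16.preprocess_input(inputs)
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=150)
```

Each of the other backbones evaluated here drops in by swapping the `VGG16` constructor and its matching `preprocess_input` function (for example, `tf.keras.applications.ResNet50` with `tf.keras.applications.resnet.preprocess_input`).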
VGG16 [30] is a deep convolutional neural network known for its significant contributions to image classification. Proposed by researchers Karen Simonyan and Andrew Zisserman, VGG16 consists of 16 weight layers, namely 13 convolutional layers and three fully connected layers, interleaved with five max-pooling layers. The network utilizes small receptive fields of 3x3 for uniform feature extraction. ReLU activation functions introduce non-linearity, while max pooling layers reduce spatial dimensions. The final layers include three fully connected layers, with SoftMax activation for class probabilities. VGG19 [31] is a convolutional neural network with 19 weight layers. It features a simple, uniform design using 3x3 convolutional filters and 2x2 max pooling layers. VGG19's stacked convolutional layers learn complex patterns and hierarchical representations, and the final fully connected layers generate class probabilities. Despite its simplicity, VGG19 performs remarkably well in image classification and is a popular baseline model in computer vision research. ResNet50 [32] is a robust deep convolutional neural network architecture proposed by Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun at Microsoft Research. It revolutionized image classification and feature extraction tasks by using residual connections. With 50 weight layers, including convolutional, bottleneck, and fully connected layers, ResNet50 effectively captures image features. The architecture employs 3x3 filters, bottleneck layers, and residual connections to enable gradient flow, address the vanishing gradient problem, and train deeper models. The final fully connected layers generate class probabilities. ResNet-101 [33] is a deep neural network with 101 layers, including convolutional, pooling, and fully connected layers. It addresses the challenges of training deep networks using residual blocks with skip connections; these blocks facilitate gradient flow and mitigate the vanishing gradient problem. ResNet-101 employs bottleneck layers to reduce computational complexity while preserving representation capacity. It consists of multiple stages with varying numbers of residual blocks, enabling the extraction of hierarchical features. ResNet-152 [34] is a deep convolutional neural network with 152 layers. It employs skip connections in residual blocks to tackle training challenges in deep networks. The architecture includes bottleneck layers for efficiency and multiple stages with varying numbers of residual blocks to extract hierarchical features. The final fully connected layers generate class probabilities for image recognition. With its depth and feature-capturing capabilities, ResNet-152 achieves superior performance, enabling accurate and robust deep neural networks. The InceptionV3 [35] model is a deep convolutional neural network architecture developed by researchers at Google Research. Renowned for its innovative use of inception modules, InceptionV3 employs parallel convolutional layers with different filter sizes (1x1, 3x3, 5x5) and pooling operations to capture information at multiple scales and resolutions. The model incorporates batch normalization layers for efficient training and gradient flow. With a composition of multiple inception modules, fully connected layers, and a SoftMax activation function, InceptionV3 leverages pretraining on datasets like ImageNet to learn hierarchical representations and exhibits strong generalization capabilities. MobileNetV2 [36] is an efficient convolutional neural network for mobile and embedded vision applications.
It achieves a balance between model size and accuracy using depthwise separable convolutions and linear bottlenecks. By employing inverted residual blocks with residual connections, batch normalization layers, and ReLU6 activations, MobileNetV2 captures and propagates information effectively. Pretrained on ImageNet, it learns expressive features and demonstrates state-of-the-art performance on mobile devices. Its lightweight architecture makes it suitable for real-time applications in resource-constrained environments. DenseNet201 [37] is a deep convolutional neural network with 201 layers. It employs dense connectivity, connecting each layer to every other layer for efficient information flow and feature reuse. It includes convolutional, pooling, and fully connected layers, as well as bottleneck layers for reduced computational complexity. DenseNet201 achieves high accuracy and improved gradient flow in computer vision tasks, making it valuable for image recognition and feature extraction applications.

## V Proposed CNN Model
The proposed CNN model consists of 22 layers, divided into eight convolution blocks responsible for feature extraction, followed by fully connected layers (dense layers) for classification purposes. This architecture is specifically designed to process RGB images of dimensions 360 x 363 and enables the classification of ten distinct classes. Each convolution block consists of a convolution layer, a max pooling layer, and a dropout layer. In a deep CNN, the convolutional layers apply filters to the original image or to other feature maps. When an input undergoes a filter, the layer generates a feature map by performing a convolution operation on the input and passing the output to the subsequent layer. A convolution layer thus performs a sliding multiplication between a two-dimensional input array (image) and a two-dimensional weight array (kernel); since the operation performed at each position is a dot product, it yields a single value, and the repeated application of the filter across the input array produces a two-dimensional array called a feature map. The purpose of employing filters is to identify specific features within the input: by systematically scanning the entire input image, the filters can detect those features anywhere in the image. Pooling layers downsample the feature maps by summarizing the presence of features in specific regions of each map, thereby reducing the size of the feature maps. More specifically, after applying a non-linear function like ReLU to the feature maps generated by a convolutional layer, the pooling layer operates independently on each feature map, creating a new set of pooled feature maps. The dropout technique is utilized to prevent overfitting in a model. In a neural network, excessive weights indicate a more complex network that may overfit the training data. Dropout is a simple yet effective regularization approach that randomly removes nodes from the network during training. When applying dropout, it is recommended to employ a larger network with abundant training data and to consider incorporating weight constraints. After evaluating the performance of the eight pre-trained models on the PBC dataset normal DIB, we found that these models did not achieve satisfactory accuracy for all blood cell classifications.
To address this, we propose our own convolutional neural network (CNN) architecture, shown in Figure 2. Our architecture comprises eight convolutional layers, eight pooling layers, five fully connected hidden layers, and an output layer. The eight convolutional blocks in our architecture comprise convolutional (CONV) layers. These layers apply 3x3 convolutions with a stride of 1 and 'same' padding, followed by the Rectified Linear Unit (ReLU) activation function, which introduces non-linearity. Subsequently, MAXPOOL layers perform 2x2 max pooling with a stride of 1. We also incorporate dropout layers with a dropout rate of 0.25 to improve the model's generalization ability.

### Convolutional Layer
As previously mentioned, our architecture consists of eight convolutional layers, which are as follows:
1) First layer: kernel size: 3 x 3, number of filters: 32, activation function: ReLU, stride: 1, input size: 100 x 100 (3 channels).
2) Second layer: kernel size: 3 x 3, number of filters: 64, activation function: ReLU, stride: 1.
3) Third layer: kernel size: 3 x 3, number of filters: 64, activation function: ReLU, stride: 1.
4) Fourth layer: kernel size: 3 x 3, number of filters: 128, activation function: ReLU, stride: 1.
5) Fifth layer: kernel size: 3 x 3, number of filters: 256, activation function: ReLU, stride: 1.
6) Sixth layer: kernel size: 3 x 3, number of filters: 256, activation function: ReLU, stride: 1.
7) Seventh layer: kernel size: 3 x 3, number of filters: 256, activation function: ReLU, stride: 1.
8) Eighth layer: kernel size: 3 x 3, number of filters: 512, activation function: ReLU, stride: 1.

### Pooling Layer
Our architecture utilizes max pooling with the same parameters for all eight pooling layers. The pooling layers are configured as follows: pooling type: maximum, pooling size: 2 x 2, stride: 1, dropout: 0.25.

### Fully Connected Layer
In our CNN model, the final layers are fully connected. Our proposed methodology includes five fully connected hidden layers and one fully connected output layer:
1) Fully connected hidden layer: total nodes: 128, activation: ReLU.
2) Fully connected output layer: total nodes: number of classes, activation: SoftMax.
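The layer specification above translates directly into Keras. The following is a minimal sketch under two assumptions flagged in the comments: the input size of 100 x 100 x 3 is taken from the first-layer specification, and, since only the 128-node ReLU configuration is detailed, all five fully connected hidden layers are assumed to use it.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 10
FILTERS = [32, 64, 64, 128, 256, 256, 256, 512]  # the eight convolutional layers

def build_proposed_cnn(input_shape=(100, 100, 3)):
    model = models.Sequential([tf.keras.Input(shape=input_shape)])
    for f in FILTERS:
        # One convolution block: 3x3 conv (stride 1, 'same' padding) with
        # ReLU, 2x2 max pooling with stride 1, and dropout at rate 0.25,
        # exactly as in the layer breakdown above. Note that stride-1
        # pooling shrinks each spatial dimension by only one pixel, so the
        # flattened feature map stays very large; stride-2 pooling would be
        # the conventional alternative.
        model.add(layers.Conv2D(f, 3, strides=1, padding="same", activation="relu"))
        model.add(layers.MaxPooling2D(pool_size=2, strides=1))
        model.add(layers.Dropout(0.25))
    model.add(layers.Flatten())
    # Assumption: all five fully connected hidden layers use the 128-node
    # ReLU configuration specified for the first one.
    for _ in range(5):
        model.add(layers.Dense(128, activation="relu"))
    model.add(layers.Dense(NUM_CLASSES, activation="softmax"))
    return model

model = build_proposed_cnn()
model.summary()
```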
Figure 2: Architecture of our proposed CNN model.

## VI Experimentation, Results and Discussion
This section presents the experimental results, a comprehensive analysis, and a discussion of our findings.

### Evaluation Parameters
Machine learning models are evaluated based on specific parameters that measure their performance. This study employs four commonly used parameters, namely accuracy, recall, precision, and F-measure, to assess the model's fitness.

### Pre-Trained Model Compilation and Results
In the first phase of the evaluation, several pre-trained architectures, including VGG16, VGG19, ResNet50, ResNet101, ResNet-152, InceptionV3, MobileNetV2, and DenseNet201, were assessed on the normal DIB of the PBC dataset. These architectures were trained using the sparse categorical cross-entropy loss with Adam as the gradient-based optimizer. The pre-trained weights utilized were derived from ImageNet classification. The training focused on the last dense layers and spanned 150 epochs, with a learning rate of 0.001. The performance parameters achieved with the pre-trained architectures on the normal DIB of the PBC dataset are summarized in Table 2. With the VGG-16 architecture, 100 images out of 17,092 were misclassified: 15 images from eosinophils, 11 images from lymphocytes, 13 images from monocytes, 13 images from neutrophils, nine images from basophils, 15 images from immature granulocytes (metamyelocytes, myelocytes, and promyelocytes), 11 images from erythroblasts, and 13 images from platelets (thrombocytes). The overall accuracy achieved with VGG-16 is 92.8%. With the VGG-19 architecture, a total of 88 images were misclassified: 14 images from eosinophils, 17 images from lymphocytes, eight images from monocytes, 12 images from neutrophils, eight images from basophils, ten images from immature granulocytes, eight from erythroblasts, and 11 from platelets. The overall accuracy achieved with VGG-19 is 91.8%. For the ResNet-50 architecture, the misclassifications include 34 images from eosinophils, six images from lymphocytes, 13 images from monocytes, 14 images from neutrophils, 33 images from basophils, 15 images from immature granulocytes, 15 images from erythroblasts, and 16 images from platelets. The overall accuracy achieved with ResNet-50 is 94.72%. Using the ResNet-101 architecture, a total of 83 images were misclassified: 29 images from eosinophils, seven images from lymphocytes, 11 images from monocytes, 13 images from neutrophils, 30 images from basophils, nine images from immature granulocytes, 12 images from erythroblasts, and 11 images from platelets. The overall accuracy achieved with ResNet-101 is 93.7%. With the ResNet-152 architecture, a total of 85 images were misclassified: 15 images from eosinophils, four images from lymphocytes, seven images from monocytes, 18 images from neutrophils, 20 images from basophils, six images from immature granulocytes, eight images from erythroblasts, and ten images from platelets. The overall accuracy achieved with ResNet-152 is 91.375%. Using the InceptionV3 architecture, a total of 126 images were misclassified: 17 images from eosinophils, seven images from lymphocytes, 20 images from monocytes, 14 images from neutrophils, 17 images from basophils, 16 images from immature granulocytes, 11 images from erythroblasts, and 31 images from platelets. The overall accuracy achieved with InceptionV3 is 93.125%. With the MobileNetV2 architecture, a total of 169 images were misclassified: these include 15 images from eosinophils, nine images from lymphocytes, 19 images from monocytes, 20 images from neutrophils, 18 images from neutrophils, 30 images from immature granulocytes, and 16 images from platelets. The overall accuracy achieved with MobileNetV2 is 92.01%. For the DenseNet201 architecture, a total of 80 images were misclassified: 15 images from eosinophils, four images from lymphocytes, 18 images from monocytes, seven images from neutrophils, nine images from basophils, 11 images from immature granulocytes, seven images from erythroblasts, and 16 images from platelets. The overall accuracy achieved with DenseNet201 is 94.262%.

### Proposed CNN Model Results
To improve the performance of our model, we took inspiration from the existing architectures mentioned above and developed our own CNN model. We carefully considered three key parameters for training: the loss function, the optimizer, and the evaluation metrics.
We utilized the sparse categorical cross-entropy loss function for our CNN model in conjunction with the widely used Adam optimizer. The training process involved feeding the training dataset to our model and training it for 150 epochs, with the best weights saved based on the loss function. Subsequently, we evaluated our proposed convolutional neural network model using all the blood cell images from the PBC dataset's normal DIB category.

### Results on PBC dataset normal DIB
The graphs in Figure 3 display the model's loss and accuracy achieved during each epoch for the PBC dataset's normal DIB category. The network's performance is evaluated using the cross-entropy loss function, commonly employed to assess the effectiveness of convolutional neural networks. The cross-entropy value increases when the predicted value differs from the actual value; ideally, it should be zero. In our case, we observed that the cross-entropy value reaches its minimum of 0.026 after 140 epochs, indicating proximity to zero. The maximum error, combining training and validation, is 0.057. Additionally, the training and validation accuracy of our convolutional neural network for the PBC dataset's normal DIB category is depicted in Figure 3. The highest training accuracy recorded is 0.993 after 142 epochs, while the maximum validation accuracy achieved is 0.985. Using our proposed CNN model, we saved the model weights corresponding to the minimum loss and utilized them to predict labels for the testing dataset. The results are presented as a confusion matrix, as shown in Table 3. For the PBC dataset's normal DIB category, misclassifications were observed for one image each in the eosinophils and basophils classes. This can be attributed to the similarity in shape and size between these two cell types, as explained earlier. All images of the other eight classes were correctly classified with 100% accuracy. Table 4 displays each class's accuracy, precision, recall, and F-measure rates. Precision rates of 99%, 100%, 100%, 100%, 99.3%, 100%, 100%, and 100% were achieved for eosinophils, lymphocytes, monocytes, neutrophils, basophils, immature granulocytes (metamyelocytes, myelocytes, and promyelocytes), erythroblasts, and platelets (thrombocytes), respectively. The F-measure rates were 99.3% for eosinophils, 100% for neutrophils, 100% for lymphocytes, 100% for monocytes, 98% for basophils, 100% for immature granulocytes, 100% for erythroblasts, and 100% for platelets. Recall rates of 99.4%, 100%, 100%, 100%, 98.5%, 100%, 100%, and 100% were achieved for eosinophils, lymphocytes, monocytes, neutrophils, basophils, immature granulocytes, erythroblasts, and platelets, respectively. The misclassification between eosinophils and basophils can be attributed to their high similarity in size and shape. All images of lymphocytes, monocytes, neutrophils, immature granulocytes, erythroblasts, and platelets were correctly classified with 100% accuracy. The average accuracy achieved for all classes combined was 99.91%.
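A sketch of this training and evaluation loop follows, assuming the `model` from the architecture sketch above and `train_ds`/`val_ds`/`test_ds` pipelines built as in Section IV; the checkpoint path and the existence of a separate, unshuffled `test_ds` are illustrative assumptions.

```python
import numpy as np
import tensorflow as tf
from sklearn.metrics import classification_report, confusion_matrix

# Loss and optimizer stated above: sparse categorical cross-entropy + Adam.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Keep the weights with the lowest validation loss over the 150 epochs,
# mirroring "the best weights saved based on the loss function".
ckpt = tf.keras.callbacks.ModelCheckpoint(
    "best_weights.h5", monitor="val_loss",      # illustrative path
    save_best_only=True, save_weights_only=True)
model.fit(train_ds, validation_data=val_ds, epochs=150, callbacks=[ckpt])

# Reload the best weights and derive the confusion matrix and the
# per-class precision/recall/F-measure reported in Tables 3 and 4.
model.load_weights("best_weights.h5")
y_true = np.concatenate([y.numpy() for _, y in test_ds])  # requires an unshuffled test_ds
y_pred = np.argmax(model.predict(test_ds), axis=1)
print(confusion_matrix(y_true, y_pred))
print(classification_report(y_true, y_pred, digits=4))
```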
\begin{table}
\begin{tabular}{l c c c c c c}
\hline
Type & Truth & Classified & Accuracy & Precision & Recall & F-measure \\
\hline
Eosinophils & 3116 & 3117 & 0.995 & 0.99 & 0.994 & 0.993 \\
Lymphocytes & 1214 & 1214 & 100 & 100 & 100 & 100 \\
Monocytes & 1420 & 1420 & 100 & 100 & 100 & 100 \\
Neutrophils & 3329 & 3329 & 100 & 100 & 100 & 100 \\
Basophils & 1217 & 1218 & 0.998 & 0.993 & 0.985 & 0.98 \\
\hline
\end{tabular}
\end{table}
Table 4: Test results of the proposed CNN architecture on the PBC dataset normal DIB.

Figure 3: Graphs representing the model loss and accuracy of the proposed CNN model on the PBC dataset.

\begin{table}
\begin{tabular}{l l c l c c c c}
\hline
Author & Dataset & No. of Images & Model & Accuracy (\%) & F1 score (\%) & Recall (\%) & Precision (\%) \\
\hline
Prasenjit et al. [38] & PBC DIB & 17,092 & Watershed algorithm & - & 97.95 & 98.1 & 97.81 \\
Prasenjit et al. [38] & ALL IDB1 & 108 & Watershed algorithm & - & 96.15 & 96.1 & 96.2 \\
Prasenjit et al. [38] & Kaggle & 100 & Watershed algorithm & - & 98.7 & 98.6 & 98.8 \\
Hilal Atici and Hasan Erdinc [39] & PBC DIB & 17,092 & ResNet101 & 99.31 & 96.87 & 97.37 & 97.25 \\
Lubnaa Abdur Rahman and Poolan Marikannan Booma [40] & ABIDE & 2939 & MobileNet & 90 & 90 & 91 & 88 \\
Erdal Basaran [41] & Public dataset & 12,435 & mRMR and LIDME method with SVM (feature size 400+5) & 95.15 & 93.13 & 95.13 & - \\
Tusneem et al. & Acquired from Munich University Hospital & 18,365 & - & 97 & - & - & 97.42 \\
R. Ahmad et al. [10] & Public dataset & 5000 & DenseNet and Darknet & 99.6 & - & - & - \\
Milkisa et al. [14] & NIH Malaria dataset & 19,000 & ResNet and MobileNet & 99.53 & 97.00 & 94.90 & - \\
Proposed Methodology & PBC DIB & 17,092 & CNN & 99.91 & 99.6 & 99.6 & 99.77 \\
\hline
\end{tabular}
\end{table}
Table 5: Comparison of the proposed CNN architecture with related work.

Table 5 shows the outcomes of our suggested method compared to other relevant works from the literature. Prasenjit et al. [38] achieved a precision of 97.81% on the PBC DIB dataset, 96.2% on ALL IDB1, and 98.9% with the Kaggle dataset. Hilal Atici and Hasan Erdinc [39] used the PBC DIB dataset with ResNet101 and achieved an accuracy of 99.31%. Lubnaa Abdur Rahman and Poolan Marikannan Booma [40] used MobileNet on the ABIDE dataset to reach an accuracy of 90%. Erdal Basaran [41] used a WBC dataset with 12,435 images and achieved an accuracy of 95.15%. R. Ahmad et al. [10] employed DenseNet and Darknet to achieve 99.6% accuracy on a public dataset of 5000 images. Tusneem et al. [19] obtained a 97% accuracy on a public dataset. Milkisa et al. [14] attained an accuracy of 99.53% utilizing the NIH Malaria dataset with ResNet and MobileNet. Our model achieves an accuracy of 99.91%, outperforming all previous work.
2303.04665
Jacobian schemes of conic-line arrangements and eigenschemes
The Jacobian scheme of a reduced, singular projective plane curve is the zero-dimensional scheme whose homogeneous ideal is generated by the partials of its defining polynomial. The degree of such a scheme is called the global Tjurina number and, if the curve is not a set of concurrent lines, some upper and lower bounds, depending on the degree of the curve and the minimal degree of a Jacobian syzygy, have been given by A.A. du Plessis and C.T.C. Wall. In this paper we give a complete geometric characterization of conic-line arrangements with global Tjurina number attaining the upper bound. Furthermore, we characterize conic-line arrangements attaining the lower bound for the global Tjurina number among all curves with a linear Jacobian syzygy. As an application, we characterize conic-line arrangements with Jacobian scheme equal to an eigenscheme of some ternary tensor, and we study the geometry of their polar maps.
Valentina Beorchia, Rosa M. Miró-Roig
2023-03-08T15:36:22Z
http://arxiv.org/abs/2303.04665v2
# Jacobian schemes of conic-line arrangements and eigenschemes

###### Abstract
The Jacobian scheme of a reduced, singular projective plane curve is the zero-dimensional scheme whose homogeneous ideal is generated by the partials of its defining polynomial. The degree of such a scheme is called the global Tjurina number and, if the curve is not a set of concurrent lines, some upper and lower bounds, depending on the degree of the curve and the minimal degree of a Jacobian syzygy, have been given by A.A. du Plessis and C.T.C. Wall. In this paper we give a complete geometric characterization of conic-line arrangements with global Tjurina number attaining the upper bound. Furthermore, we characterize conic-line arrangements attaining the lower bound for the global Tjurina number among all curves with a linear Jacobian syzygy. As an application, we characterize conic-line arrangements with Jacobian scheme equal to an eigenscheme of some ternary tensor, and we study the geometry of their polar maps.

The first author is a member of GNSAGA of INdAM and is supported by the fund Universita degli Studi di Trieste - FRA 2023. The second author has been partially supported by the grant PID2019-104844GB-I00.

We give a complete geometric characterization of the conic-line arrangements attaining the two bounds for \(r=1\). Concerning the upper bound, examples are given by line arrangements consisting of the union of \(d-1\) concurrent lines and one general line, as described in [9, Proposition 4.7(5)]. Our first main result is the following (see Theorem 3.5):

_Theorem_ A.: Let \(C=V(f)\) be a conic-line arrangement in \(\mathbb{P}^{2}\) of degree \(d\geq 5\). Then \(\tau(C)=(d-1)(d-2)+1\) if and only if \(C\) is either:
* \(\mathcal{L}\): a line arrangement with \(d-1\) concurrent lines and a general line;
* \(\mathcal{C}_{1}\): a union of conics belonging to a hyperosculating pencil, that is, with base locus supported at a single point;
* \(\mathcal{CL}_{1}\): a union of conics belonging to a hyperosculating pencil and the tangent line at the hyperosculating point;
* \(\mathcal{CL}_{2}\): the union of conics belonging to a bitangent pencil and a tangent line at one of the bitangency points;
* \(\mathcal{CL}_{3}\): the union of conics belonging to a bitangent pencil and the two tangent lines at the bitangency points;
* \(\mathcal{CL}_{4}\): the union of conics belonging to a bitangent pencil, one tangent line at a tangency point, and the line connecting the two tangency points;
* \(\mathcal{CL}_{5}\): the union of conics belonging to a bitangent pencil, the two tangent lines at the bitangency points, and the line connecting the two tangency points.

In the case \(\tau(C)=(d-1)(d-2)\), the characterization of conic-line arrangements is given by the following result (see Theorem 3.6):

_Theorem_ B.: Let \(C=V(f)\) be a reduced conic-line arrangement in \(\mathbb{P}^{2}\) of degree \(d\geq 6\). Then \(\tau(C)=d^{2}-3d+2\) if and only if \(C\) is either:
* \(\mathcal{C}_{2}\): a conic arrangement given by the union of conics belonging to a bitangent pencil;
* \(\mathcal{CL}_{6}\): a conic-line arrangement given by the union of conics belonging to a bitangent pencil and the line passing through the two bitangency points.

As a consequence of our first result, we can determine the degree of the polar map of conic-line arrangements with quasihomogeneous singularities, that is, with total Milnor number \(\mu(C)\) (see Definition 2.3) equal to the total Tjurina number.
Indeed, recall that, when \(C=V(f)\) is not a set of concurrent lines, the polar map \(\nabla f\) associated with \(f\) defines a generically finite rational map \(\nabla f:\mathbb{P}^{2}\dashrightarrow\mathbb{P}^{2}\) of degree \((d-1)^{2}-\mu(C)\). Therefore, when \(\mu(C)=\tau(C)\) and \(\tau(C)\) is maximal, such a degree is minimal and equal to \(d-2\). This occurs in cases \(\mathcal{L}\) and \(\mathcal{CL}_{2}\). Another related problem is a Torelli-type question, posed by Dolgachev and Kapranov (see [15]), which asks whether the rank \(2\) vector bundle of logarithmic vector fields \(\mathcal{T}\langle C\rangle\), given as the kernel of the map \[(\partial_{x}f,\partial_{y}f,\partial_{z}f):\mathcal{O}_{\mathbb{P}^{2}}^{\oplus 3}(1)\rightarrow\mathcal{J}_{f}(d),\] with \(\mathcal{J}_{f}\) the sheafified Jacobian ideal, determines the curve uniquely, see also [12] and [7]. In the cases under consideration this result does not hold. Finally, we observe that in the case of maximal Tjurina number and linear syzygy given by three linearly independent forms, the Jacobian schemes \(\Sigma_{f}\) turn out to be also eigenschemes (for the definition see 2.6) of suitable partially symmetric tensors of order \(d-1\). This happens in the cases \(\mathcal{L}\) and \(\mathcal{CL}_{2}\), and we can apply the results of [20] and [4]. In particular, such schemes arise also as zeroes of a section of the twisted tangent bundle \(\mathcal{T}_{\mathbb{P}^{2}}(d-3)\), and we have that the blow-up \(\mathrm{Bl}_{\Sigma_{f}}\mathbb{P}^{2}\subset\mathbb{P}^{2}\times\mathbb{P}^{2}\) is a complete intersection of the projective bundle \(\mathbb{P}(\mathcal{T}_{\mathbb{P}^{2}})\) and a divisor of bidegree \((1,d-2)\), the only curves contracted by the polar map are lines, and, as a consequence, no subscheme of degree \(k(d-1)\) is contained in a curve of degree \(k\), for any \(2\leq k\leq d-2\). We further specify that in the line arrangement case \(\mathcal{L}\) the contracted lines are precisely the components of \(V(f)\), while in the conic-line arrangement case the only contracted line is the tangent line appearing in the configuration \(V(f)\). The study of singular curves with a minimal Jacobian syzygy of higher degree seems to be very involved and wild. We believe that the approach based on the study of the fibers of the polar map deserves further investigation. The techniques involved in our study rely on a result, given in [10, Theorem 5.1 and Corollary 5.3], relating the syzygy module of a product of polynomials with no common factor to the syzygy modules of its factors, and on the characterization of the Hilbert-Burch matrix in the case of particular linear Jacobian syzygies given in [5, Theorem 3.5]. The proofs of our main results are based on a careful analysis of the possible Jacobian syzygies of any subcurve of the considered curves. The organization of the paper is the following: in the next section we recall definitions and preliminary results regarding Jacobian ideals, Jacobian syzygies and eigenschemes of ternary tensors. In Section 3 we determine the geometric classification of Jacobian schemes corresponding to Jacobian ideals with a linear syzygy. Finally, in Section 4, we characterize conic-line arrangements with a Jacobian scheme which is also an eigenscheme of some ternary tensor. For such curves we describe the degree \(d-2\) generically finite polar map, characterizing its contracted curves.

**Acknowledgement**.
Most of this work was done while the first author was a guest of the Universitat de Barcelona, and she would like to thank the people of the Departament de Matematiques i Informatica for their warm hospitality.

## 2. Preliminaries
This section contains the basic definitions and results on Jacobian ideals associated to reduced singular plane curves, as well as on eigenschemes, and it lays the groundwork for the results in the later sections. From now on, we fix the polynomial ring \(R=\mathbb{C}[x,y,z]\) and we denote by \(C=V(f)\) a reduced curve of degree \(d\) in the complex projective plane \(\mathbb{P}^{2}=\operatorname{Proj}(R)\) defined by a homogeneous polynomial \(f\in R_{d}\).

### Jacobian ideal of a reduced curve
The Jacobian ideal \(J_{f}\) of a reduced singular plane curve \(C=V(f)\) of degree \(d\) is defined as the homogeneous ideal in \(R\) generated by the \(3\) partial derivatives \(\partial_{x}f\), \(\partial_{y}f\) and \(\partial_{z}f\). We denote by \(\operatorname{Syz}(J_{f})\) the graded \(R\)-module of all Jacobian relations for \(f\), i.e.,
\[\operatorname{Syz}(J_{f}):=\{(a,b,c)\in R^{3}\mid a\partial_{x}f+b\partial_{y}f+c\partial_{z}f=0\}.\]
We will denote by \(\operatorname{Syz}(J_{f})_{t}\) the homogeneous part of degree \(t\) of the graded \(R\)-module \(\operatorname{Syz}(J_{f})\); for any \(t\geq 0\), \(\operatorname{Syz}(J_{f})_{t}\) is a \(\mathbb{C}\)-vector space of finite dimension. The minimal degree of a Jacobian syzygy for \(f\) is the integer \(\operatorname{mrd}(f)\) defined to be the smallest integer \(r\) such that there is a nontrivial relation \(a\partial_{x}f+b\partial_{y}f+c\partial_{z}f=0\) among the partial derivatives \(\partial_{x}f\), \(\partial_{y}f\) and \(\partial_{z}f\) of \(f\) with coefficients \(a,b,c\in R_{r}\). More precisely, we have:
\[\operatorname{mrd}(f)=\min\{n\in\mathbb{N}\mid\operatorname{Syz}(J_{f})_{n}\neq 0\}.\]
It is well known that \(\operatorname{mrd}(f)=0\), i.e. the three partials \(\partial_{x}f\), \(\partial_{y}f\) and \(\partial_{z}f\) are linearly dependent, if and only if \(C\) is a union of lines passing through one point \(p\in\mathbb{P}^{2}\). So, we will always assume that \(\operatorname{mrd}(f)>0\), and one of our goals will be to give a geometric classification of conic-line arrangements \(C=V(f)\) of degree \(d\) with \(\operatorname{mrd}(f)=1\), see Theorems 3.5 and 3.6.

**Definition 2.1**.: Let \(C=V(f)\) be a reduced singular plane curve of degree \(d\). We say that \(C\) is _free_ if the graded \(R\)-module \(\operatorname{Syz}(J_{f})\) of all Jacobian relations for \(f\) is a free \(R\)-module, i.e.
\[\operatorname{Syz}(J_{f})=R(-d_{1})\oplus R(-d_{2}) \tag{2.1}\]
with \(d_{1}+d_{2}=d-1\). In this case \((d_{1},d_{2})\) are called the _exponents_ of \(C\). We say that \(C\) is _nearly free_ if the minimal free resolution of \(\operatorname{Syz}(J_{f})\) looks like:
\[0\longrightarrow R(-d-d_{2})\longrightarrow R(1-d-d_{1})\oplus R(1-d-d_{2})^{2}\longrightarrow\operatorname{Syz}(J_{f})\longrightarrow 0 \tag{2.2}\]
with \(d_{1}\leq d_{2}\) and \(d_{1}+d_{2}=d\).

_Example 2.2_.: (1) The rational cuspidal quintic \(C=V(f)=V(y^{4}z+x^{5}+x^{2}y^{3})\subset\mathbb{P}^{2}\) is free.
Indeed, \(J_{f}=(5x^{4}+2xy^{3},3x^{2}y^{2}+4y^{3}z,y^{4})\subset R\) and it has a minimal free \(R\)-resolution of the following type:
\[0\longrightarrow R(-6)^{2}\longrightarrow R(-4)^{3}\longrightarrow J_{f}\longrightarrow 0.\]
We have \(\operatorname{mrd}(f)=2\), \(\deg(J_{f})=12\) and \(C\) is free.
(2) The rational cuspidal quintic \(C=V(f)=V(y^{4}z+x^{5})\subset\mathbb{P}^{2}\) is nearly free. Indeed, \(J_{f}=(5x^{4},4y^{3}z,y^{4})\subset R\) and it has a minimal free \(R\)-resolution of the following type:
\[0\longrightarrow R(-9)\longrightarrow R(-5)\oplus R(-8)^{2}\longrightarrow R(-4)^{3}\longrightarrow J_{f}\longrightarrow 0.\]
We have \(\operatorname{mrd}(f)=1\), \(\deg(J_{f})=12\) and \(C\) is not free, but it is nearly free.
(3) The nodal quintic \(C=V((x^{2}+y^{2}+z^{2})(x^{3}+y^{3}+z^{3}))\subset\mathbb{P}^{2}\) is neither free nor nearly free. Indeed, \(J_{f}=(5x^{4}+3x^{2}(y^{2}+z^{2})+2x(y^{3}+z^{3}),2x^{3}y+3x^{2}y^{2}+5y^{4}+3y^{2}z^{2}+2yz^{3},2x^{3}z+2y^{3}z+3x^{2}z^{2}+3y^{2}z^{2}+5z^{4})\subset R\) and it has a minimal free \(R\)-resolution of the following type:
\[0\longrightarrow R(-9)\oplus R(-10)\longrightarrow R(-7)\oplus R(-8)^{3}\longrightarrow R(-4)^{3}\longrightarrow J_{f}\longrightarrow 0.\]
We have \(\operatorname{mrd}(f)=3\) and \(\deg(J_{f})=6\).

In general, the condition that a reduced singular curve \(C=V(f)\) in \(\mathbb{P}^{2}\) is free is equivalent to the Jacobian ideal \(J_{f}\) of \(f\) being arithmetically Cohen-Macaulay of codimension two; such ideals are completely described by the Hilbert-Burch theorem [16]: if \(I=\langle g_{1},\ldots,g_{m+1}\rangle\subset R\) is a Cohen-Macaulay ideal of codimension two, then \(I\) is defined by the maximal minors of the \((m+1)\times m\) matrix of the first syzygies of the ideal \(I\). Combining this with Euler's formula for a homogeneous polynomial, we get that a free curve \(C=V(f)\) in \(\mathbb{P}^{2}\) has a very constrained structure: \(f=\det(M)\) for a \(3\times 3\) matrix \(M\), with one row consisting of the \(3\) variables, while the remaining \(2\) rows are the minimal first syzygies of \(J_{f}\). Free curves are related with the total Tjurina number. We first recall some notions from singularity theory. Let \(C=V(f)\subset\mathbb{A}^{2}\) be a reduced, not necessarily irreducible, plane curve and fix a singular point \(p\in C\). Let \(\mathbb{C}\{x,y\}\) denote the ring of convergent power series.

**Definition 2.3**.: The _Milnor number_ of a reduced plane curve \(C=V(f)\) at \((0,0)\in C\) is
\[\mu_{(0,0)}(C)=\dim\mathbb{C}\{x,y\}/\langle\partial_{x}f,\partial_{y}f\rangle.\]
The _Tjurina number_ of a reduced plane curve \(C=V(f)\) at \((0,0)\in C\) is
\[\tau_{(0,0)}(C)=\dim\mathbb{C}\{x,y\}/\langle\partial_{x}f,\partial_{y}f,f\rangle.\]
To define \(\mu_{p}(C)\) and \(\tau_{p}(C)\) for an arbitrary point \(p\), translate \(p\) to the origin. We clearly have \(\tau_{(0,0)}(C)\leq\mu_{(0,0)}(C)\). For a projective plane curve \(C=V(f)\subset\mathbb{P}^{2}\), it holds:
\[\tau(C):=\deg J_{f}=\sum_{p\in\operatorname{Sing}(C)}\tau_{p}(C),\]
where \(J_{f}\) is the Jacobian ideal. We call \(\tau(C)\) the _total Tjurina number_ of \(C\).
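For instance, at an ordinary node one has locally \(f=xy\), so that \(\langle\partial_{x}f,\partial_{y}f\rangle=\langle y,x\rangle\) and \(\mu_{(0,0)}(C)=\tau_{(0,0)}(C)=1\); at an ordinary cusp one has locally \(f=y^{2}-x^{3}\), so that \(\langle\partial_{x}f,\partial_{y}f\rangle=\langle x^{2},y\rangle\) and, since \(f\in\langle x^{2},y\rangle\), both invariants equal \(\dim\mathbb{C}\{x,y\}/\langle x^{2},y\rangle=2\). In both cases \(\mu=\tau\), as expected for quasihomogeneous singularities.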
A nice result of du Plessis and Wall gives upper and lower bounds for the total Tjurina number \(\tau(C)\) of a reduced plane curve \(C=V(f)\subset\mathbb{P}^{2}\) in terms of its degree \(d\) and the minimal degree \(\operatorname{mrd}(f)\) of a syzygy of its Jacobian ideal \(J_{f}\), and it relates the freeness of a curve with \(\tau(C)\). More precisely, we have:

**Proposition 2.4**.: _Let \(C=V(f)\) be a reduced singular plane curve of degree \(d\) and let \(r:=\operatorname{mrd}(f)\). It holds:_
\[(d-1)(d-r-1)\leq\tau(C)\leq(d-1)(d-r-1)+r^{2}.\]
_Moreover, if \(\tau(C)=(d-1)(d-r-1)+r^{2}\), then the curve \(C\) is free; such a condition is also sufficient if \(d>2r\)._

Proof.: See [21, Theorem 3.2] and [6, Corollary 1.2].

The next result will play an important role in the next section. It relates the minimal degree \(\operatorname{mrd}(f)\) of a syzygy of the Jacobian ideal of a reducible plane curve \(C=C_{1}\cup C_{2}=V(f_{1}f_{2})\) with the minimal degrees \(\operatorname{mrd}(f_{1})\) and \(\operatorname{mrd}(f_{2})\) of a Jacobian syzygy of \(C_{1}=V(f_{1})\) and \(C_{2}=V(f_{2})\), respectively. Observe that the syzygy module can be identified with the module of derivations _killing_ the polynomial \(g\), that is, the submodule \(D_{0}(g)\) of the free \(R\)-module \(D(g)=\{\delta=a\partial_{x}+b\partial_{y}+c\partial_{z}\ :\ a,b,c\in R\}\) of \(\mathbb{C}\)-derivations of the polynomial ring \(R\), consisting of the derivations annihilating \(g\):
\[D_{0}(g)=\{\delta\in D(g)\ :\ \delta g=0\}.\]
In the case of a smooth curve \(V(g)\), the syzygy module is trivial, so we will indeed consider the module \(D_{0}(g)\) instead of \(\operatorname{Syz}(J_{g})\).

**Theorem 2.5**.: _Let \(C_{i}=V(f_{i})\) for \(i=1,2\) be two reduced curves in \(\mathbb{P}^{2}\) without common irreducible components. Set \(d_{i}=\deg f_{i}\) and \(r_{i}=\operatorname{mrd}(f_{i})\) for \(i=1,2\). Let \(C=V(f_{1}f_{2})\) be the union of \(C_{1}\) and \(C_{2}\), let \(d=d_{1}+d_{2}=\deg f\) and \(r=\operatorname{mrd}(f)\). Then it holds:_
1. _If_ \(\delta_{1}\in D_{0}(f_{1})\)_, then_
\[\delta=f_{2}\delta_{1}-\frac{1}{d}\delta_{1}(f_{2})\ E\in D_{0}(f),\]
_where_ \(E=x\partial_{x}+y\partial_{y}+z\partial_{z}\) _denotes the Euler derivation;_
2. \(D_{0}(f)\subset D_{0}(f_{1})\cap D_{0}(f_{2})\)_; more precisely, for_ \(\delta\neq 0\)_, one has_ \(\delta\in D_{0}(f)\) _if and only if_ \(\delta\) _can be written in a unique way in the form_ \(\delta=h\ E+\delta_{1}=-h\ E+\delta_{2}\)_, where_ \(h\) _is a suitable homogeneous polynomial and_ \(\delta_{j}\in D_{0}(f_{j})\) _are non-zero for_ \(j=1,2\)_._
3. _In particular, we have_
\[\max(r_{1},r_{2})\leq r\leq\min(r_{1}+d_{2},r_{2}+d_{1}),\]
_and_ \(r\) _is the minimal integer_ \(t\) _such that either_ \(D_{0}(f_{1})_{t}\cap D_{0}(f_{2})_{t}\neq 0\) _or_ \(D_{0}(f_{1})_{t}+D_{0}(f_{2})_{t}\) _contains a non-zero multiple of the Euler derivation_ \(E\)_._

Proof.: See [10, Theorem 5.1] and [10, Corollary 5.3].
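Let us point out the short computation behind item (1): since \(\delta_{1}(f_{1})=0\) and \(E(f_{j})=d_{j}f_{j}\) by the Euler formula, applying \(\delta=f_{2}\delta_{1}-\frac{1}{d}\delta_{1}(f_{2})E\) to \(f=f_{1}f_{2}\) gives
\[\delta(f)=f_{2}\,\delta(f_{1})+f_{1}\,\delta(f_{2})=-\frac{d_{1}}{d}\,\delta_{1}(f_{2})\,f_{1}f_{2}+f_{1}f_{2}\,\delta_{1}(f_{2})-\frac{d_{2}}{d}\,\delta_{1}(f_{2})\,f_{1}f_{2}=\Big(1-\frac{d_{1}+d_{2}}{d}\Big)\delta_{1}(f_{2})\,f_{1}f_{2}=0,\]
as \(d=d_{1}+d_{2}\).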
### Eigenschemes in \(\mathbb{P}^{2}\)
Since we shall investigate whether a Jacobian scheme is also an eigenscheme of some tensor, we conclude the preliminary section with the basic definitions and results concerning tensor eigenschemes. There are several notions of eigenvectors and eigenvalues for tensors, introduced independently in [19] and [24]. Here we focus our attention on the algebraic-geometric point of view. We choose a basis for \(\mathbb{C}^{3}\), we identify a partially symmetric tensor \(T\) with a triple of homogeneous polynomials of degree \(d-2\), and we describe the eigenpoints of a tensor \(T\) algebraically by the vanishing of the minors of a homogeneous matrix. More precisely, we have:

**Definition 2.6**.: Let \(T=(g_{1},g_{2},g_{3})\in(Sym^{d-2}\mathbb{C}^{3})^{\oplus 3}\) be a partially symmetric tensor. The _eigenscheme_ of \(T\) is the closed subscheme \(E(T)\subset\mathbb{P}^{2}\) defined by the \(2\times 2\) minors of the homogeneous matrix
\[M=\begin{pmatrix}x&y&z\\ g_{1}&g_{2}&g_{3}\end{pmatrix}. \tag{2.3}\]
If \(T\) is general, then \(E(T)\) is a \(0\)-dimensional scheme (see, for instance, [1]). Moreover, by the Hochster-Eagon Theorem [18], the coordinate ring \(R/I(E(T))\) is a Cohen-Macaulay ring, and as a consequence, the homogeneous ideal \(I(E(T))\) is saturated. Hence \(E(T)\) is a standard determinantal scheme. When the tensor \(T\) is symmetric, i.e., there is a homogeneous polynomial \(f\) with \(g_{1}=\partial_{x}f\), \(g_{2}=\partial_{y}f\) and \(g_{3}=\partial_{z}f\), we denote its eigenscheme by \(E(f)\). It is worthwhile to point out that, in the case of a symmetric tensor corresponding to some homogeneous polynomial \(f\), the eigenpoints are the fixed points of the polar map
\[\nabla f=(\partial_{x}f,\partial_{y}f,\partial_{z}f):\mathbb{P}^{2}\dashrightarrow\mathbb{P}^{2}\]
of \(f\).

_Example 2.7_.: We consider the Fermat cubic \(V(f)\) with \(f=x^{3}+y^{3}+z^{3}\in\mathbb{C}[x,y,z]\). The eigenscheme \(E(f)\) is the \(0\)-dimensional subscheme of \(\mathbb{P}^{2}\) of length \(7\) defined by the maximal minors of
\[M=\begin{pmatrix}x&y&z\\ x^{2}&y^{2}&z^{2}\end{pmatrix}.\]
Therefore, \(E(f)=\{(1,0,0),(0,1,0),(0,0,1),(1,1,0),(1,0,1),(0,1,1),(1,1,1)\}\).

If we fix an integer \(d\geq 2\) and \(T=(g_{1},g_{2},g_{3})\in(Sym^{d-2}\mathbb{C}^{3})^{\oplus 3}\) is a general partially symmetric tensor, then it holds (see [16, Theorem A2.10]):
1. \(E(T)\) is a reduced \(0\)-dimensional scheme of length \(d^{2}-3d+3\).
2. The homogeneous ideal \(I(E(T))\subset R\) has a minimal free \(R\)-resolution
\[0\longrightarrow R(-2d+3)\oplus R(-d)\longrightarrow R(-d+1)^{3}\longrightarrow I(E(T))\longrightarrow 0.\]
The two conditions above are not sufficient for a planar \(0\)-dimensional subscheme to be an eigenscheme, and a characterization is given by the following result (see [2, Proposition 5.2]).

**Proposition 2.8**.: _Let \(Z\) be a \(0\)-dimensional subscheme of \(\mathbb{P}^{2}\) of degree \(d^{2}-3d+3\). Then \(Z\) is the eigenscheme of a tensor if and only if its Hilbert-Burch matrix has the form_
\[\begin{pmatrix}L_{1}&G_{1}\\ L_{2}&G_{2}\\ L_{3}&G_{3}\end{pmatrix},\]
_where \(L_{1},L_{2},L_{3}\) are linearly independent linear forms._

## 3. Conic-line arrangements with a linear Jacobian syzygy
We start this section with two series of examples of reduced conic-line arrangements. All these examples will play an important role since, as we will see, they are the only examples of reduced conic-line arrangements \(C\) in \(\mathbb{P}^{2}\) whose Jacobian ideal has a linear syzygy. In what follows we shall use the result [5, Theorem 3.5], due to R. O. Buchweitz and A. Conca, which determines the Hilbert-Burch matrix of Jacobian schemes admitting a linear syzygy of the type \((ax,by,cz)\) for some coefficients \(a,b,c\in\mathbb{C}\). For completeness, we recall its statement:

**Theorem 3.1** (Buchweitz-Conca).: _Let \(K\) be a field of characteristic zero and \(f\in K[x,y,z]\) a reduced polynomial of degree \(d\) in three variables such that \(f\) is contained in the ideal of its partial derivatives. Assume further that there is a triple \((a,b,c)\) of elements of \(K\) that are not all zero such that \(ax\ \partial_{x}f+by\ \partial_{y}f+cz\ \partial_{z}f=0\)._

_We then have the following possibilities, up to renaming the variables:_
1. _If_ \(abc\neq 0\)_, then_ \(f\) _is a free divisor with Hilbert-Burch matrix_
(3.1) \[\begin{pmatrix}ax&\left(\frac{1}{c}-\frac{1}{b}\right)(d+2)^{-1}\partial_{yz}f\\ by&\left(\frac{1}{a}-\frac{1}{c}\right)(d+2)^{-1}\partial_{xz}f\\ cz&\left(\frac{1}{b}-\frac{1}{a}\right)(d+2)^{-1}\partial_{xy}f\end{pmatrix},\]
_where_ \(\partial_{**}f\) _denotes the corresponding second order derivative of_ \(f\)_._
2. _If_ \(a=0\)_, but_ \(bc\neq 0\)_, then_ \(f\) _is a free divisor if, and only if,_ \(\partial_{x}f\in(y,z)\)_. If that condition is verified and_ \(\partial_{x}f=yg+zh\)_, then_ \(\frac{\partial_{y}f}{cz}=\frac{-\partial_{z}f}{by}\) _is an element of_ \(K[x,y,z]\)_, and a Hilbert-Burch matrix is given by_
(3.2) \[\begin{pmatrix}0&\partial_{y}f/cz\\ by&-h/c\\ cz&g/b\end{pmatrix}.\]
3. _If_ \(a=b=0\)_, then_ \(f\) _is independent of_ \(z\) _and so, being the suspension of a reduced plane curve, is a free divisor._

We focus now on conic-line arrangements. The first series of examples corresponds to reduced conic-line arrangements \(C=V(f)\) of degree \(d\) with \(\operatorname{mrd}(f)=1\) and maximal Tjurina number \(\tau(C)=(d-1)(d-2)+1=d^{2}-3d+3\).

_Example 3.2_.: 1. We fix an integer \(d\geq 3\). Let \(\mathcal{L}\) be a line arrangement with \(d-1\) lines through a point \(p\), and one other line in general position. Without loss of generality we can assume that \(p=(0:0:1)\) and that the general line is \(V(z)\), so that the equation of the line arrangement \(\mathcal{L}\) is given by
\[\mathcal{L}:\ z\prod_{i=1}^{d-1}(a_{i}x+b_{i}y)=0\]
with \((a_{i}:b_{i})\neq(a_{j}:b_{j})\) for \(i\neq j\). It is simple to determine a linear syzygy among the three partials of \(f\), by observing that \(\partial_{z}f=\prod_{i=1}^{d-1}(a_{i}x+b_{i}y)\). So, we have
\[f=z\ \partial_{z}f.\]
On the other hand, by the Euler formula we also have \(f=\frac{1}{d}(x\,\partial_{x}f+y\,\partial_{y}f+z\,\partial_{z}f)\), hence we get the identity
\[x\,\partial_{x}f+y\,\partial_{y}f+(1-d)z\,\partial_{z}f=0.\]
Therefore, according to Theorem 3.1, (1), the Hilbert-Burch matrix of \(J_{f}\) is given by
(3.3) \[\begin{pmatrix}x&\frac{1}{(1-d)}\,\partial_{yz}f\\ y&\frac{1}{(d-1)}\,\partial_{xz}f\\ (1-d)z&0\end{pmatrix},\]
a minimal free \(R\)-resolution of \(J_{f}\) is given by:
\[0\longrightarrow R(-d)\oplus R(-2d+3)\longrightarrow R(-d+1)^{3}\longrightarrow J_{f}\longrightarrow 0,\]
and \(\mathcal{L}\) is free with exponents \((1,d-2)\) and global Tjurina number \(d^{2}-3d+3\). It is worthwhile to point out that the \(3\) entries \((x,y,(1-d)z)\in\operatorname{Syz}(J_{f})_{1}\) of the linear syzygy are linearly independent.
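As a concrete illustration, for \(d=3\) take \(f=x(x+y)z\), the union of the two lines \(V(x)\) and \(V(x+y)\) through \(p=(0:0:1)\) and the general line \(V(z)\). Then \(\partial_{x}f=2xz+yz\), \(\partial_{y}f=xz\), \(\partial_{z}f=x^{2}+xy\), and indeed
\[x\,\partial_{x}f+y\,\partial_{y}f-2z\,\partial_{z}f=2x^{2}z+xyz+xyz-2x^{2}z-2xyz=0.\]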
A linear Jacobian syzygy can be determined by observing that \((0,x,-2y)\) is a linear syzygy of the Jacobian ideal of \(f_{i}:=x^{2}+a_{i}(xz+y^{2})\), for any \(i=1,\ldots,m\), so we deduce that \(\mathrm{Syz}(J_{f})_{1}\) is also generated by \((0,x,-2y)\). Moreover, we claim that \(\mathcal{C}_{1}\) is free with exponents \((1,d-2)\), so that the global Tjurina number is \(d^{2}-3d+3\). To prove the claim, observe that since \(r=1\), the conic arrangement \(\mathcal{C}_{1}\) is either free or nearly free, and by (2.1) and (2.2), it is free if and only if \(J_{f}\) admits a syzygy of degree \(d-2\) which is not proportional to \((0,x,-2y)\). Let us prove the latter fact by induction on the number \(m\) of conics. If \(m=2\), a degree \(2\) syzygy, which is not proportional to \((0,x,-2y)\), is given by \[(-(a_{1}+a_{2})x^{2}-2a_{1}a_{2}(y^{2}+xz),a_{1}a_{2}yz,4x^{2}+2(a_{1}+a_{2})y^{2}+3(a_{1}+a_{2})xz+2a_{1}a_{2}z^{2}).\] Now assume that \(m\geq 3\), and that any conic arrangement of \(m-1\) conics belonging to a hyperosculating pencil admits a syzygy of degree \(2m-4\), not proportional to \((0,x,-2y)\). Let \(f=\prod_{i=1}^{m}(x^{2}+a_{i}(xz+y^{2}))\), and set \[f_{1}=\prod_{i=1}^{m-1}(x^{2}+a_{i}(xz+y^{2})),\qquad f_{2}=x^{2}+a_{m}(xz+y^{2}).\] By the induction hypothesis \(J_{f_{1}}\) admits a syzygy \(\delta_{1}\in\mathrm{Syz}(J_{f_{1}})_{2m-4}\), with \(\delta_{1}\not\in\langle(0,x,-2y)\rangle\), and by Theorem 2.5, (1), we have \[\delta=f_{2}\delta_{1}-\frac{1}{2m}(\delta_{1}\cdot\nabla f_{2})\ E\in\mathrm{Syz}(J_{f})_{2m-2},\] where, writing \(\delta_{1}=(h_{1},h_{2},h_{3})\), we set \(\delta_{1}\cdot\nabla f_{2}=h_{1}\partial_{x}f_{2}+h_{2}\partial_{y}f_{2}+h_{3}\partial_{z}f_{2}\), and \(E=(x,y,z)\) is the Euler relation. As observed in the proof of that theorem (see [10, Theorem 5.1]), since \(\delta_{1}\neq 0\), it is also \(\delta\neq 0\). Finally, we claim that \(\delta\not\in\langle(0,x,-2y)\rangle\). Indeed, if \(\delta_{1}\cdot\nabla f_{2}=0\), we have \(\delta=f_{2}\delta_{1}\) and since \(\delta_{1}\not\in\langle(0,x,-2y)\rangle\) by the induction hypothesis, the claim follows. If \(\delta_{1}\cdot\nabla f_{2}\neq 0\), we see that \[\delta\cdot\nabla f_{1}=f_{2}\delta_{1}\cdot\nabla f_{1}-\frac{1}{2m}(\delta_{1}\cdot\nabla f_{2})\ E\cdot\nabla f_{1}=-\frac{2m-2}{2m}(\delta_{1}\cdot\nabla f_{2})f_{1}\neq 0.\] On the other hand, if we had \(\delta=h\ (0,x,-2y)\) for some polynomial \(h\), we would have \[\delta\cdot\nabla f_{1}=h\ (0,x,-2y)\cdot\nabla f_{1}=0,\] as \((0,x,-2y)\in\mathrm{Syz}(J_{f_{1}})\). It is important to point out that in this case the linear syzygy of \(J_{f}\) has only \(2\) linearly independent entries, and that such an example is not of the type considered in the Buchweitz-Conca Theorem 3.1. 3. We fix an odd integer \(d=2m+1\geq 5\). Let \(\mathcal{CL}_{1}\) be a conic-line arrangement with \(m\) conics \(C_{1},\ldots,C_{m}\) and a line \(\ell\) such that there exists a point \(p\in\mathbb{P}^{2}\) with \(C_{i}\cap C_{j}=\{p\}\) and \(p\) a singularity \(A_{7}\) for \(C_{i}\cup C_{j}\), for all \(i,j\), \(1\leq i<j\leq m\), and \(\ell\) is a common tangent line to all the \(C_{i}\)'s. In other words, the conics belong to a hyperosculating pencil; such a curve is called an _odd Płoski curve_ in [25]. 
Without loss of generality we can assume that \(p=(0:0:1)\), the line \(\ell=V(x)\) and the equation of the conic-line arrangement \(\mathcal{CL}_{1}\) is given by \[\mathcal{CL}_{1}:\ f=x\prod_{i=1}^{m}(x^{2}+a_{i}(xz+y^{2}))=0\] with \(a_{i}\neq 0\) and \(a_{i}\neq a_{j}\) for \(i\neq j\). The reduced plane curve \(\mathcal{CL}_{1}\) has degree \(d=2m+1\); by using the same argument as in the previous example, it is not difficult to see that \(\mathcal{CL}_{1}\) is free with exponents \((1,d-2)\) and global Tjurina number \(d^{2}-3d+3\). The space \(\mathrm{Syz}(J_{f})_{1}\) is generated by \((0,x,-2y)\). So, again the linear syzygy of the Jacobian ideal of \(f\) has only two linearly independent entries, and is not of the type considered in the Buchweitz-Conca Theorem 3.1. 4. We fix an odd integer \(d=2m+1\geq 5\). Let \(\mathcal{CL}_{2}\) be a conic-line arrangement with \(m\) conics \(C_{1},\dots,C_{m}\) and a line \(\ell\) such that there exist two points \(p,q\in\mathbb{P}^{2}\) with \(C_{i}\cap C_{j}=\{p,q\}\), the two intersection points \(p,q\) are tacnodes for \(C_{i}\cup C_{j}\), for all \(i,j\), \(1\leq i<j\leq m\), and \(\ell\) is a common tangent line to all \(C_{i}\)'s at \(p\). Without loss of generality we can assume that \(p=(0:0:1)\), \(q=(1:0:0)\), \(\ell=V(x)\), so that the equation of the conic-line arrangement \(\mathcal{CL}_{2}\) is given by \[\mathcal{CL}_{2}:\ f=x\prod_{i=1}^{m}(xz+a_{i}y^{2})=0\] with \(a_{i}\neq 0\) for all \(i\), \(1\leq i\leq m\), and \(a_{i}\neq a_{j}\) if \(i\neq j\). The reduced singular plane curve \(\mathcal{CL}_{2}\) has degree \(d=2m+1\). We claim that \(\mathrm{Syz}(J_{f})_{1}=\langle((d-1)x,-y,-(d+1)z)\rangle\). Indeed, set \(q_{i}(x,y,z)=xz+a_{i}y^{2}\). We have \[\partial_{x}f=\prod_{i=1}^{m}q_{i}+xz\left(\sum_{j=1}^{m}\prod_{i=1,i\neq j}^{m}q_{i}\right),\ \partial_{y}f=2xy\left(\sum_{j=1}^{m}a_{j}\prod_{i=1,i\neq j}^{m}q_{i}\right),\ \partial_{z}f=x^{2}\left(\sum_{j=1}^{m}\prod_{i=1,i\neq j}^{m}q_{i}\right),\] which, in particular, gives \[f=x\ \partial_{x}f-z\ \partial_{z}f.\] Thus by the Euler identity we get \[(d-1)x\ \partial_{x}f-y\ \partial_{y}f-(d+1)z\ \partial_{z}f=0.\] Hence the linear syzygy of the Jacobian ideal of \(f\) has again three linearly independent entries. Therefore, according to Theorem 3.1, (1), the curve \(\mathcal{CL}_{2}\) is free with global Tjurina number \(d^{2}-3d+3\), and the Hilbert-Burch matrix of \(J_{f}\) is (3.4) \[\begin{pmatrix}(d-1)x&\frac{1}{(d+1)}\ \partial_{yz}f\\ -y&\frac{2}{(d^{2}-1)}\partial_{xz}f\\ -(d+1)z&-\frac{1}{(d-1)}\partial_{xy}f\end{pmatrix}.\] 5. The following three examples are of the type given in [5, Example 3.7], and they all correspond to free curves of exponents \((1,d-2)\) and \(\mathrm{Syz}(J_{f})_{1}\) generated by \((x,0,-z)\). * Let \(d=2m+2\geq 6\) and let \(\mathcal{CL}_{3}\) be a conic-line arrangement of degree \(d\) with \(m\) conics \(C_{1},\dots,C_{m}\) and two lines \(\ell_{1}\) and \(\ell_{2}\) such that there exist two points \(p,q\in\mathbb{P}^{2}\) satisfying \(C_{i}\cap C_{j}=\{p,q\}\) and the two intersection points \(p,q\) are tacnodes for \(C_{i}\cup C_{j}\), for all \(i,j\), \(1\leq i<j\leq m\), \(\ell_{1}\) is a common tangent line to all \(C_{i}\)'s at \(p\), \(\ell_{2}\) is a common tangent line to all \(C_{i}\)'s at \(q\). 
Without loss of generality we can assume that \(p=(0:0:1)\), \(q=(1:0:0)\), \(\ell_{1}=V(x)\), \(\ell_{2}=V(z)\), so that the equation of the conic-line arrangement \(\mathcal{CL}_{3}\) is given by \[\mathcal{CL}_{3}:\ f=xz\prod_{i=1}^{m}(xz+a_{i}y^{2})=0\] with \(a_{i}\neq 0\) for all \(i\), \(1\leq i\leq m\), and \(a_{i}\neq a_{j}\) if \(i\neq j\). * Let \(d=2m+2\geq 6\) and let \(\mathcal{CL}_{4}\) be a conic-line arrangement of degree \(d\) with \(m\) conics \(C_{1},\ldots,C_{m}\) and two lines \(\ell_{1}\) and \(\ell_{2}\) such that there exist two points \(p,q\in\mathbb{P}^{2}\) satisfying \(C_{i}\cap C_{j}=\{p,q\}\) and the two intersection points \(p,q\) are tacnodes for \(C_{i}\cup C_{j}\), for all \(i,j\), \(1\leq i<j\leq m\), \(\ell_{1}\) is a common tangent line to all \(C_{i}\)'s at \(p\), \(\ell_{2}\) is the line joining \(p\) and \(q\). Without loss of generality we can assume that \(p=(0:0:1)\), \(q=(1:0:0)\), \(\ell_{1}=V(x)\), \(\ell_{2}=V(y)\), so that the equation of the conic-line arrangement \(\mathcal{CL}_{4}\) is given by \[\mathcal{CL}_{4}:\ f=xy\prod_{i=1}^{m}(xz+a_{i}y^{2})=0\] with \(a_{i}\neq 0\) for all \(i\), \(1\leq i\leq m\), and \(a_{i}\neq a_{j}\) if \(i\neq j\). * Let \(d=2m+3\geq 7\) and let \(\mathcal{CL}_{5}\) be a conic-line arrangement of degree \(d\) with \(m\) conics \(C_{1},\ldots,C_{m}\) and three lines \(\ell_{1}\), \(\ell_{2}\) and \(\ell_{3}\) such that there exist two points \(p,q\in\mathbb{P}^{2}\) satisfying \(C_{i}\cap C_{j}=\{p,q\}\) and the two intersection points \(p,q\) are tacnodes for \(C_{i}\cup C_{j}\), for all \(i,j\), \(1\leq i<j\leq m\), \(\ell_{1}\) is a common tangent line to all \(C_{i}\)'s at \(p\), \(\ell_{2}\) is a common tangent line to all \(C_{i}\)'s at \(q\), \(\ell_{3}\) is the line joining \(p\) and \(q\). Without loss of generality we can assume that \(p=(0:0:1)\), \(q=(1:0:0)\), \(\ell_{1}=V(x)\), \(\ell_{2}=V(z)\) and \(\ell_{3}=V(y)\), so that the equation of the conic-line arrangement \(\mathcal{CL}_{5}\) is given by \[\mathcal{CL}_{5}:\ f=xyz\prod_{i=1}^{m}(xz+a_{i}y^{2})=0\] with \(a_{i}\neq 0\) for all \(i\), \(1\leq i\leq m\), and \(a_{i}\neq a_{j}\) if \(i\neq j\). The next series of examples corresponds to reduced nearly free plane curves \(C=V(f)\) of degree \(d\) with \(\mathrm{mrd}(f)=1\) and \(\tau(C)=d^{2}-3d+2\). _Example 3.3_.: By [5, Example 3.7], the next examples have the property that \(\mathrm{Syz}(J_{f})_{1}\) is generated by \((x,0,-z)\) and they are not free. Since \(r=1\), we have \(\tau(C)=(d-1)(d-2)\) and they are nearly free by Lemma 3.4. 1. Let \(\mathcal{C}_{2}\) be a conic arrangement with \(m\) conics \(C_{1},\ldots,C_{m}\) such that there exist two points \(p,q\in\mathbb{P}^{2}\) satisfying \(C_{i}\cap C_{j}=\{p,q\}\) and the two intersection points \(p,q\) are tacnodes for \(C_{i}\cup C_{j}\), for all \(i,j\), \(1\leq i<j\leq m\). Without loss of generality we can assume that \(p=(0:0:1)\), \(q=(1:0:0)\), so that the equation of the conic arrangement is \[\mathcal{C}_{2}:\ f=\prod_{i=1}^{m}(xz+a_{i}y^{2})=0\] with \(a_{i}\neq 0\) for all \(i\), \(1\leq i\leq m\), and \(a_{i}\neq a_{j}\) if \(i\neq j\). 2. Let \(\mathcal{CL}_{6}\) be a conic-line arrangement with \(m\) conics \(C_{1},\ldots,C_{m}\) and a line \(\ell\) such that there exist two points \(p,q\in\mathbb{P}^{2}\) with \(C_{i}\cap C_{j}=\{p,q\}\), the two intersection points \(p,q\) are tacnodes for \(C_{i}\cup C_{j}\), for all \(i,j\), \(1\leq i<j\leq m\), and \(\ell\) is the line through \(p\) and \(q\). 
We can assume that \(p=(0:0:1)\), \(q=(1:0:0)\), \(\ell=V(y)\) and the equation of the conic-line arrangement is \[\mathcal{CL}_{6}:\ f=y\prod_{i=1}^{m}(xz+a_{i}y^{2})=0\] with \(a_{i}\neq 0\) for all \(i\), \(1\leq i\leq m\), and \(a_{i}\neq a_{j}\) if \(i\neq j\). Our next goal is to establish a geometric characterization of all reduced conic-line arrangements \(C=V(f)\) in \(\mathbb{P}^{2}\) of degree \(d\geq 3\), whose Jacobian ideal \(J_{f}\) has a linear syzygy, i.e., \(\operatorname{mrd}(f)=1\). **Lemma 3.4**.: _Let \(C=V(f)\) be a reduced singular curve in \(\mathbb{P}^{2}\) of degree \(d\geq 3\) and let \(J_{f}=(f_{x},f_{y},f_{z})\) be its Jacobian ideal. Assume that \(\operatorname{mrd}(f)=1\). Then \(d^{2}-3d+2\leq\tau(C)\leq d^{2}-3d+3\). Moreover, if \(\tau(C)=d^{2}-3d+3\) (resp. \(\tau(C)=d^{2}-3d+2\)) then \(C\) is free (nearly free)._ Proof.: By Proposition 2.4 we have \(d^{2}-3d+2\leq\tau(C)\leq d^{2}-3d+3\). By [6, Theorem 1.2] (see also [17, Proposition 26]), if \(\tau(C)=d^{2}-3d+3\) (respectively \(d^{2}-3d+2\)) then \(C\) is free (nearly free). ### Free conic-line arrangements with a linear Jacobian syzygy Our first goal is to classify free conic-line arrangements \(C=V(f)\) of degree \(d\) in \(\mathbb{P}^{2}\) with \(r=\operatorname{mrd}(f)=1\); such curves have maximum Tjurina number \(\tau(C)=d^{2}-3d+3\). We have **Theorem 3.5**.: _Let \(C=V(f)\) be a conic-line arrangement in \(\mathbb{P}^{2}\) of degree \(d\geq 5\). Then, \(\tau(C)=d^{2}-3d+3\) if and only if \(C\) is either a line arrangement \(\mathcal{L}\) as in example 3.2(1), or a conic arrangement \(\mathcal{C}_{1}\) as in example 3.2(2), or a conic-line arrangement \(\mathcal{CL}_{1}\), \(\mathcal{CL}_{2}\), \(\mathcal{CL}_{3}\), \(\mathcal{CL}_{4}\) or \(\mathcal{CL}_{5}\) as in examples 3.2(3)-(7)._ Proof.: All conic-line arrangements \(C\subset\mathbb{P}^{2}\) described in examples 3.2(1)-(7) have total Tjurina number \(\tau(C)=d^{2}-3d+3\) and \(\operatorname{mrd}(f)=1\). Let us prove the converse. The hypothesis \(\tau(C)=d^{2}-3d+3\), \(d\geq 5\) and Proposition 2.4 imply that \(\operatorname{mrd}(f)=1\). We distinguish several cases: Case 1: \(C\) is a line arrangement. By [9, Proposition 4.7(5)], \(C\) is the union of \(d-1\) lines through a point \(p\), and one other line in general position. Case 2: Let \(C=\cup_{i=1}^{m}C_{i}:\ f=\prod_{i=1}^{m}f_{i}=0\) be a conic arrangement. By Theorem 2.5, (3), whenever we extract the union \(C^{\prime}=C_{1}\cup C_{2}\) of two conics, the relative \(r^{\prime}=\operatorname{mrd}(C^{\prime})=1\), so by the classification given in [10, Proposition 5.5], the only possible cases are: either \(|C_{1}\cap C_{2}|=2\) and the two intersection points are two tacnodes for \(C^{\prime}\), or \(|C_{1}\cap C_{2}|=1\) and the singular point is an \(A_{7}\) singularity (the two conics are hyperosculating). Since in the first case the total Tjurina number is \(\tau=6\), it does not occur. This settles the case \(d=4\). Assume now \(d\geq 6\). **Claim:** Whenever we extract the union \(C_{i_{1}}\cup C_{i_{2}}\cup C_{i_{3}}\) of three conics, they belong to the same pencil, so they are either bitangent, or they are hyperosculating. **Proof of the Claim.** Indeed, assume first that \(C_{i_{1}}\cap C_{i_{2}}=\{p,q\}\) and \(p,q\) are two tacnodes for \(C_{i_{1}}\cup C_{i_{2}}\). Without loss of generality we can assume that \[C_{i_{1}}=V(f_{i_{1}})=V(xz+a_{i_{1}}y^{2}),\qquad C_{i_{2}}=V(f_{i_{2}})=V(xz+ a_{i_{2}}y^{2})\] with \(a_{i_{1}},a_{i_{2}}\in\mathbb{C}\). 
Therefore, \(D_{0}(f_{i_{1}})_{1}=D_{0}(f_{i_{2}})_{1}\cong\mathrm{Syz}(J_{f_{i_{1}}f_{i_{2}}})_{1}=\langle(x,0,-z)\rangle\). By hypothesis \(\mathrm{mrd}(f_{i_{1}}f_{i_{2}}f_{i_{3}})=1\). So, applying Theorem 2.5, we have \[D_{0}(f_{i_{3}})_{1}\cap\langle(x,0,-z)\rangle\neq 0,\;\mathrm{or}\;\alpha(x,0,-z)+\beta(x,y,z)\in D_{0}(f_{i_{3}})_{1}.\] A straightforward computation shows that necessarily \(C_{i_{3}}=V(f_{i_{3}})=V(\lambda xz+\mu y^{2})\). Assume now that \(C_{i_{1}}\cap C_{i_{2}}=\{p\}\) and \(p\) is a singularity \(A_{7}\) for \(C_{i_{1}}\cup C_{i_{2}}\). Without loss of generality we can assume \[C_{i_{1}}=V(f_{i_{1}})=V(x^{2}+a_{i_{1}}(xz+y^{2})),\qquad C_{i_{2}}=V(f_{i_{2}})=V(x^{2}+a_{i_{2}}(xz+y^{2}))\] with \(a_{i_{1}},a_{i_{2}}\in\mathbb{C}\). Therefore, \(D_{0}(f_{i_{1}})_{1}=D_{0}(f_{i_{2}})_{1}\cong\mathrm{Syz}(J_{f_{i_{1}}f_{i_{2}}})_{1}=\langle(0,x,-2y)\rangle\). By hypothesis \(\mathrm{mrd}(f_{i_{1}}f_{i_{2}}f_{i_{3}})=1\), hence by Theorem 2.5, we have in this case \[D_{0}(f_{i_{3}})_{1}\cap\langle(0,x,-2y)\rangle\neq 0,\;\mathrm{or}\;\alpha(0,x,-2y)+\beta(x,y,z)\in D_{0}(f_{i_{3}})_{1}.\] A direct computation shows that in this case necessarily \(C_{i_{3}}=V(f_{i_{3}})=V(\lambda x^{2}+\mu(xz+y^{2}))\). It follows from the claim that the irreducible components of \(C=\cup_{i=1}^{m}C_{i}\) belong either to a bitangent pencil of conics or to a hyperosculating pencil of conics. In the first case we have \(\tau(C)=(2m)^{2}-3(2m)+2\) (see Example 3.3(1)) and in the second case it is \(\tau(C)=(2m)^{2}-3(2m)+3\) (see Example 3.2(2)), which proves what we want. Case 3: Let \(C=\bigcup_{i=1}^{m}C_{i}\cup\bigcup_{j=1}^{s}L_{j}:\;f=\prod_{i=1}^{m}f_{i}\cdot\prod_{j=1}^{s}\ell_{j}=0\) be a conic-line arrangement. By Theorem 2.5, the conic arrangement \(\cup_{i=1}^{m}C_{i}\) satisfies \(\mathrm{mrd}(\prod_{i=1}^{m}f_{i})=1\). By the above discussion, the components of \(\cup_{i=1}^{m}C_{i}\) belong either to a hyperosculating pencil or to a bitangent pencil of conics. We analyze these two cases separately. Let us first assume that \(\cup_{i=1}^{m}C_{i}=V(\prod_{i=1}^{m}(x^{2}+a_{i}(xz+y^{2})))\) belongs to a hyperosculating pencil of conics with hyperosculating point \(p=(0:0:1)\). Therefore, \(\mathrm{Syz}(J_{\prod_{i=1}^{m}f_{i}})_{1}=\langle(0,x,-2y)\rangle\). We look for a line \(L=V(\ell)=V(ax+by+cz)\) such that \(\mathrm{mrd}((ax+by+cz)\prod_{i=1}^{m}(x^{2}+a_{i}(xz+y^{2})))=1\). By Theorem 2.5, (3), if we set \[f_{1}=ax+by+cz,\quad f_{2}=\prod_{i=1}^{m}(x^{2}+a_{i}(xz+y^{2})),\] we have that either \(D_{0}(f_{1})_{1}\cap D_{0}(f_{2})_{1}\neq 0\), or \(\alpha(x\partial_{x}+y\partial_{y}+z\partial_{z})\in D_{0}(f_{1})_{1}+D_{0}(f_{2})_{1}\) for some nonzero constant \(\alpha\). As to the line \(V(f_{1})\), if \(a\neq 0\), we have \[D_{0}(f_{1})_{1}=\{L_{1}(-b\partial_{x}+a\partial_{y})+L_{2}(-c\partial_{x}+a\partial_{z})\;:\;L_{1},L_{2}\in R_{1}\},\] while \(D_{0}(f_{2})_{1}=\langle x\partial_{y}-2y\partial_{z}\rangle\). The condition \(D_{0}(f_{1})_{1}\cap D_{0}(f_{2})_{1}\neq 0\) is never satisfied, while the condition \(\alpha(x\partial_{x}+y\partial_{y}+z\partial_{z})=L_{1}(-b\partial_{x}+a\partial_{y})+L_{2}(-c\partial_{x}+a\partial_{z})+\beta x\partial_{y}-2\beta y\partial_{z}\) for \(\alpha\neq 0\) has \(b=c=0\) as its unique solution. It follows that \[\ell=x.\] The case \(a=0\) can be treated similarly and it never occurs. 
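The linear-syzygy computations in this case can also be checked mechanically for small \(m\); the following sketch (our illustration, not part of the paper, with the arbitrary choices \(a_{1}=1\), \(a_{2}=2\)) verifies that adding the line \(\ell=x\) to two hyperosculating conics leaves a one-dimensional space of linear syzygies spanned by \((0,x,-2y)\), while a line with \(b\neq 0\) admits no linear syzygy at all.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

def linear_syzygies(f):
    # Solve P*f_x + Q*f_y + R*f_z = 0 for linear forms P, Q, R,
    # parametrized by nine unknown coefficients c0..c8.
    fx, fy, fz = (sp.diff(f, v) for v in (x, y, z))
    c = sp.symbols('c0:9')
    P = c[0]*x + c[1]*y + c[2]*z
    Q = c[3]*x + c[4]*y + c[5]*z
    R = c[6]*x + c[7]*y + c[8]*z
    # every coefficient of the resulting polynomial must vanish
    eqs = sp.Poly(sp.expand(P*fx + Q*fy + R*fz), x, y, z).coeffs()
    return sp.linsolve(eqs, c)

# two hyperosculating conics x^2 + a_i(xz + y^2), with a_1 = 1, a_2 = 2
f2 = (x**2 + (x*z + y**2)) * (x**2 + 2*(x*z + y**2))

print(linear_syzygies(x * f2))        # one free parameter: multiples of (0, x, -2y)
print(linear_syzygies((x + y) * f2))  # only the zero solution
```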
Next it is possible to check in a similar way that \(h=x(ax+by+cz)\prod_{i=1}^{m}(x^{2}+a_{i}(xz+y^{2}))\) has a linear Jacobian syzygy if and only if \(b=c=0\), but this would give rise to a non-reduced polynomial. So, \(C\) is as in example 3.2. Let us now assume that \(\cup_{i=1}^{m}C_{i}=V(\prod_{i=1}^{m}(xz+a_{i}y^{2}))\) belongs to a pencil of conics, all of them bitangent at \(\{p=(1:0:0),q=(0:0:1)\}\), so that we have the linear syzygy \((x,0,-z)\). Arguing as above we can determine the lines that we can add to this conic arrangement in such a way that the new conic-line arrangement has a Jacobian ideal with a linear syzygy. It turns out that we have only six possibilities: \[x\prod_{i=1}^{m}(xz+a_{i}y^{2}),\quad y\prod_{i=1}^{m}(xz+a_{i}y^{2}),\quad z\prod_{i=1}^{m}(xz+a_{i}y^{2}),\] \[xz\prod_{i=1}^{m}(xz+a_{i}y^{2}),\quad yz\prod_{i=1}^{m}(xz+a_{i}y^{2}),\quad xyz\prod_{i=1}^{m}(xz+a_{i}y^{2}).\] The case \(y\prod_{i=1}^{m}(xz+a_{i}y^{2})=0\) is not free by Theorem 3.1 (2). Observing that the first and the third cases are projectively equivalent concludes the proof. ### Nearly free conic-line arrangements with a linear Jacobian syzygy In this subsection, we classify conic-line arrangements \(C=V(f)\) of degree \(d\) in \(\mathbb{P}^{2}\) with \(r=\mathrm{mrd}(f)=1\) and minimal Tjurina number \(\tau(C)=d^{2}-3d+2\). **Theorem 3.6**.: _Let \(C=V(f)\) be a conic-line arrangement in \(\mathbb{P}^{2}\) of degree \(d\geq 6\). Then, \(\tau(C)=d^{2}-3d+2\) if and only if \(C\) is either a conic arrangement \(\mathcal{C}_{2}\) as in example 3.3(1), or a conic-line arrangement \(\mathcal{CL}_{6}\) as in example 3.3(2)._ Proof.: All conic-line arrangements \(C\subset\mathbb{P}^{2}\) described in examples 3.3(1)-(2) have total Tjurina number \(\tau(C)=d^{2}-3d+2\). Let us prove the converse. The hypothesis \(\tau(C)=d^{2}-3d+2\), \(d\geq 6\) and Proposition 2.4 imply that \(\mathrm{mrd}(f)=1\). By [14, Proposition 4.3], there are no line arrangements with \(\mathrm{mrd}(f)=1\) and \(\tau(C)=d^{2}-3d+2\). Therefore, we only have two possibilities: either \(C\) is a conic arrangement, or \(C\) is a conic-line arrangement. Arguing as in the proof of Theorem 3.5, it is possible to conclude. ## 4. Applications As an application of the previous results we obtain the main result of this paper, namely, we determine when the Jacobian ideal of a conic-line arrangement is the ideal of an eigenscheme. More precisely, we have: **Theorem 4.1**.: _Let \(C=V(f)\) be a conic-line arrangement in \(\mathbb{P}^{2}\) of degree \(d\geq 4\). The Jacobian ideal \(J_{f}\) of \(f\) is the ideal of an eigenscheme \(E(T)\) if and only if \(C\) is either a line arrangement \(\mathcal{L}\), or a conic-line arrangement \(\mathcal{CL}_{2}\)._ Proof.: Let \(C=V(f)\) be a line arrangement (resp. conic-line arrangement) as described in the statement of the theorem. We have seen in example 3.2(1) (resp. example 3.2(4)) that the Jacobian ideal \(J_{f}\) of \(C\) is defined by the maximal minors of the matrix \[\begin{pmatrix}x&h_{0}\\ y&h_{1}\\ (1-d)z&h_{2}\end{pmatrix},\quad\text{resp. }\begin{pmatrix}(d-1)x&g_{0}\\ -y&g_{1}\\ -(d+1)z&g_{2}\end{pmatrix}.\] Equivalently, the Jacobian ideal is generated by the minors of the matrix \[\begin{pmatrix}x&h_{0}\\ y&h_{1}\\ z&\frac{1}{1-d}h_{2}\end{pmatrix},\quad\text{resp. 
}\begin{pmatrix}x&\frac{1}{d-1}g_{0}\\ y&-g_{1}\\ z&-\frac{1}{d+1}g_{2}\end{pmatrix}.\] By definition we have \(J_{f}=I(E(T))\), where \(T=(h_{0},h_{1},\frac{1}{1-d}h_{2})\in(Sym^{d-2}\mathbb{C}^{3})^{\oplus 3}\), resp. \(T=(\frac{1}{d-1}g_{0},-g_{1},-\frac{1}{d+1}g_{2})\in(Sym^{d-2}\mathbb{C}^{3})^{\oplus 3}\), are partially symmetric tensors. Let us prove the converse. Assume that there is a partially symmetric tensor \(T=(g_{1},g_{2},g_{3})\in(Sym^{d-2}\mathbb{C}^{3})^{\oplus 3}\) such that \(I(E(T))=J_{f}\). This implies that \(C\) is free, \(\tau(C)=d^{2}-3d+3\), \(\operatorname{mrd}(f)=1\) and that \(\operatorname{Syz}(J_{f})_{1}\) is generated by a syzygy whose three entries are linearly independent linear forms. Example 3.2 together with Theorem 3.5 proves what we want. _Remark 4.2_.: There are examples of reduced plane curves \(C=V(f)\subset\mathbb{P}^{2}\) whose Jacobian ideal \(J_{f}\) is the ideal of an eigenscheme \(E(T)\) and which are not conic-line arrangements. For instance, \(f=y(x^{3}-y^{2}z)\). _Remark 4.3_.: In particular, the geometry of the Jacobian scheme of a line arrangement of type \(\mathcal{L}\) or a conic-line arrangement \(\mathcal{CL}_{2}\) is completely described by [4, Theorem 5.5 and Remark 5.8]. We observe that the cited result concerns only reduced eigenschemes, but it is not difficult to see that it can be extended to all non-reduced zero-dimensional eigenschemes. Specifically, we have that if \(k\in\{2,\ldots,d-1\}\) then no subscheme of degree \(kd\) of \(\Sigma_{f}\) lies on a curve of degree \(k\). Moreover, the class of \(S=\operatorname{Bl}_{\Sigma_{f}}\mathbb{P}^{2}\) in the Chow ring \(A(\mathbb{P}^{2}\times\mathbb{P}^{2})\) can be determined. By choosing \(L_{1}\) and \(L_{2}\) as generators of the Picard groups of the two factors, and by setting \(p_{i}:\mathbb{P}^{2}\times\mathbb{P}^{2}\to\mathbb{P}^{2}\) to be the two projections, we have that the two divisors \(h_{1}=p_{1}^{*}L_{1}\) and \(h_{2}=p_{2}^{*}L_{2}\) are generators for \(A(\mathbb{P}^{2}\times\mathbb{P}^{2})\). Then it is simple to check that the class of \(S\) in \(A(\mathbb{P}^{2}\times\mathbb{P}^{2})\) is given by \[[S]=(d-2)h_{1}^{2}+(d-1)h_{1}h_{2}+h_{2}^{2},\] and, by taking into account the Hilbert-Burch matrices given in (3.3) and (3.4), the surface \(S\) turns out to be the complete intersection of the two divisors \(T\sim h_{1}+h_{2}\) and \(D\sim(d-2)h_{1}+h_{2}\) given by \[T=V(p_{0}x+p_{1}y+(1-d)p_{2}z),\quad D=V(p_{0}\partial_{yz}f-p_{1}\partial_{xz}f)\] in case \(\mathcal{L}\), respectively \[T=V((d-1)p_{0}x-p_{1}y-(d+1)p_{2}z),\quad D=V((d-1)\ p_{0}\partial_{yz}f+2p_{1}\partial_{xz}f-(d+1)p_{2}\partial_{xy}f),\] in case \(\mathcal{CL}_{2}\), where \(((x:y:z),(p_{0}:p_{1}:p_{2}))\in\mathbb{P}^{2}\times\mathbb{P}^{2}\). Finally, we observe that by [20] or [1, Lemma 5.6], every planar eigenscheme is the zero locus of a section \(s\in H^{0}(\mathcal{T}_{\mathbb{P}^{2}}(d-2))\), where \(\mathcal{T}_{\mathbb{P}^{2}}\) denotes the tangent bundle of \(\mathbb{P}^{2}\). Next we shall study the polar map associated with \(\mathcal{L}\) and \(\mathcal{CL}_{2}\) arrangements. Observe that since we are concerned with curves of maximal total Tjurina number and quasihomogeneous singularities, the degree of the generically finite polar map is \((d-1)^{2}-\mu(C)=d-2\). 
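The class computation in Remark 4.3 is easy to verify symbolically; the sketch below (our addition, written in sympy) expands the product of the two divisor classes in case \(\mathcal{L}\) and confirms the degree count of the polar map just stated.

```python
import sympy as sp

d, h1, h2 = sp.symbols('d h1 h2')

# Remark 4.3, case L: T ~ h1 + h2 (bidegree (1,1)) and D ~ (d-2)h1 + h2
# (bidegree (d-2,1)); S is their complete intersection, so [S] = [T][D].
S = sp.expand((h1 + h2) * ((d - 2)*h1 + h2))
print(S.coeff(h1, 2))                          # d - 2
print(sp.factor(S.coeff(h1, 1).coeff(h2, 1)))  # d - 1
print(S.coeff(h1, 0).coeff(h2, 2))             # 1

# The h1^2 coefficient is the number of points in a general fiber of the
# second projection, i.e. the degree of the polar map; it agrees with
# (d-1)^2 - mu(C) for mu(C) = tau(C) = d^2 - 3d + 3:
print(sp.factor((d - 1)**2 - (d**2 - 3*d + 3)))  # d - 2
```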
_Remark 4.4_.: We can apply the argument used in the proof of [4, Theorem 5.5] and we see that the only curves that can be contracted by the polar map associated with line arrangements of type \(\mathcal{L}\) or conic-line arrangements \(\mathcal{CL}_{2}\) are lines. Indeed, we observe that for any \(p=(p_{0}:p_{1}:p_{2})\notin\Sigma_{f}\), the point \(\nabla f(p)\) is the intersection point of the two distinct lines \[\nabla f(p):\ \left\{\begin{array}{l}p_{0}x+p_{1}y+(1-d)p_{2}z=0\\ \partial_{yz}f(p)x-\partial_{xz}f(p)y=0.\end{array}\right.\] As a consequence, the fiber of \(\nabla f\) over any point \(q=(q_{0}:q_{1}:q_{2})\in\mathbb{P}^{2}\) is given by the zero locus of \[\left\{\begin{array}{l}q_{0}x+q_{1}y+(1-d)q_{2}z=0\\ q_{0}\partial_{yz}f-q_{1}\partial_{xz}f=0.\end{array}\right.\] Since the first equation represents a line for any choice of \(q\in\mathbb{P}^{2}\), the claim follows. We shall see in the next result that the presence of contracted lines is indeed always confirmed for \(\mathcal{L}\) and \(\mathcal{CL}_{2}\) arrangements. Recall that the critical locus of the polar map is given by the hessian curve, and it consists of the contracted curves and the ramification points for the polar map. **Proposition 4.5**.: _Let \(C=V(f)\) be a conic-line arrangement in \(\mathbb{P}^{2}\) of degree \(d\) such that the Jacobian ideal \(J_{f}\) of \(f\) is the ideal of an eigenscheme \(E(T)\)._ _Then, in case \(\mathcal{L}\), the critical locus of \(\nabla f\) is given by an arrangement of \(3(d-2)\) lines of the same type as \(\mathcal{L}\), it contains \(\mathcal{L}\) and the contracted lines by \(\nabla f\) are precisely the lines of \(\mathcal{L}\)._ _In case \(\mathcal{CL}_{2}\), the critical locus contains the tangent line \(\ell\), and it is the only contracted line._ Proof.: It is classically known that all the lines of a line arrangement are contained in the hessian curve. Now we verify that the residual curve to \(\mathcal{L}\) in \(\operatorname{Hess}(f)\) consists of \(3(d-2)-d=2d-6\) concurrent lines through \(O=(0:0:1)\) and that such residual lines are not contracted by \(\nabla f\). The first claim follows by writing the hessian matrix explicitly: \[\operatorname{Hess}(f)=\begin{pmatrix}\partial_{xx}f&\partial_{xy}f&\partial_{xz}f\\ \partial_{xy}f&\partial_{yy}f&\partial_{yz}f\\ \partial_{xz}f&\partial_{yz}f&0\end{pmatrix}.\] Since both \(\partial_{xz}f\) and \(\partial_{yz}f\) are polynomials in \(x\) and \(y\) only, by developing the determinant \(h(f)=\det\operatorname{Hess}(f)\) with respect to the last row we see that \(\frac{h(f)}{f}\) is a polynomial in \(x\) and \(y\). Moreover, as the polar map is given by \[\nabla f=\left(z\left(\sum_{i=1}^{d-1}a_{i}\prod_{j\neq i,j=1}^{d-1}(a_{j}x+b_{j}y)\right),z\left(\sum_{i=1}^{d-1}b_{i}\prod_{j\neq i,j=1}^{d-1}(a_{j}x+b_{j}y)\right),\prod_{i=1}^{d-1}(a_{i}x+b_{i}y)\right),\] we see that the line \(z=0\) is contracted to the point \((0:0:1)\) and the lines \(a_{i}x+b_{i}y=0\) to the points \((a_{i}:b_{i}:0)\). Finally, to prove that there are no other contracted lines, we recall that the Hilbert-Burch matrix of \(J_{f}\) is given by (3.3), and that \(\nabla f\) is given by its \(2\times 2\) minors. 
It follows that for any \(p=(p_{0}:p_{1}:p_{2})\notin\Sigma_{f}\), the point \(\nabla f(p)\) is the intersection point of the two distinct lines \[\nabla f(p):\ \left\{\begin{array}{l}p_{0}x+p_{1}y+(1-d)p_{2}z=0\\ \partial_{yz}f(p)x-\partial_{xz}f(p)y=0.\end{array}\right.\] As a consequence, the fiber of \(\nabla f\) over a point \(q=(q_{0}:q_{1}:q_{2})\in\mathbb{P}^{2}\) is given by the zero locus of \[\left\{\begin{array}{l}q_{0}x+q_{1}y+(1-d)q_{2}z=0\\ q_{0}\partial_{yz}f-q_{1}\partial_{xz}f=0.\end{array}\right.\] In particular, a ramification point appears in a fiber if and only if the set of \(d-2\) concurrent lines through \((0:0:1)\) given by the equation \(q_{0}\partial_{yz}f-q_{1}\partial_{xz}f=0\) contains a (non-reduced) double line, so the question is to determine the non-reduced elements of the pencil \(q_{0}\partial_{yz}f-q_{1}\partial_{xz}f\). But the latter can be seen as a pencil of divisors in \(\mathbb{P}^{1}\), and precisely the Jacobian pencil of the polynomial \(\partial_{z}f\). If the factors of \(f\) are general, the polynomial \(\partial_{z}f\in\mathbb{C}[x,y]_{d-1}\) is general too. Therefore, the ramification points of the polar map \(\nabla\partial_{z}f\) are given by its hessian. We finally treat the case of a \(\mathcal{CL}_{2}\) arrangement. It is well known that any linear component of a plane curve is contained in the Hessian curve. Moreover, if \(f=x\prod_{i=1}^{m}(xz+a_{i}y^{2})\), the polar map is given by \[\nabla f=\left(\prod_{i=1}^{m}q_{i}+xz\left(\sum_{j=1}^{m}\prod_{i=1,i\neq j}^{m}q_{i}\right),2xy\left(\sum_{j=1}^{m}a_{j}\prod_{i=1,i\neq j}^{m}q_{i}\right),x^{2}\left(\sum_{j=1}^{m}\prod_{i=1,i\neq j}^{m}q_{i}\right)\right),\] where we set \(q_{i}=xz+a_{i}y^{2}\); we see that the line \(V(x)\) is contracted to a point. To see that there are no other contracted lines, we observe that such a line should contain a subscheme of degree at least \(d-1\) in the Jacobian scheme; the only possible candidates are the common tangent line at the second tangency point of the conics, that is the line \(V(z)\), or the line \(V(y)\) joining the two singular points; but we can directly check that these cases do not occur. We conclude by observing that the geometry of the polar map seems to encode some information concerning the topological type of the singularities of a given curve, so we believe that it deserves further investigation.
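As a final illustration (ours, not the paper's), the smallest instance of Proposition 4.5 for a line arrangement can be verified directly: take \(d=4\), with the three lines \(x\), \(y\), \(x+y\) through \(p=(0:0:1)\) and the general line \(z=0\); the sketch below checks the linear syzygy of Example 3.2(1) and that each of the four lines is contracted by \(\nabla f\) to the predicted point.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

# Example 3.2(1) with d = 4: lines x, y, x+y through p = (0:0:1), plus z = 0.
d = 4
f = x * y * z * (x + y)
fx, fy, fz = (sp.diff(f, v) for v in (x, y, z))

# The linear syzygy x f_x + y f_y + (1-d) z f_z = 0:
print(sp.expand(x*fx + y*fy + (1 - d)*z*fz))   # 0

# Proposition 4.5: every line of L is contracted by the polar map.
grad = sp.Matrix([fx, fy, fz]).T
print(grad.subs(z, 0))    # multiple of (0, 0, 1): z = 0     -> (0:0:1)
print(grad.subs(x, 0))    # multiple of (1, 0, 0): x = 0     -> (1:0:0)
print(grad.subs(y, 0))    # multiple of (0, 1, 0): y = 0     -> (0:1:0)
print(grad.subs(y, -x))   # multiple of (1, 1, 0): x + y = 0 -> (1:1:0)
```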
2307.07843
Transformers are Universal Predictors
We find limits to the Transformer architecture for language modeling and show it has a universal prediction property in an information-theoretic sense. We further analyze performance in non-asymptotic data regimes to understand the role of various components of the Transformer architecture, especially in the context of data-efficient training. We validate our theoretical analysis with experiments on both synthetic and real datasets.
Sourya Basu, Moulik Choraria, Lav R. Varshney
2023-07-15T16:19:37Z
http://arxiv.org/abs/2307.07843v1
# Transformers are Universal Predictors ###### Abstract We find limits to the Transformer architecture for language modeling and show it has a universal prediction property in an information-theoretic sense. We further analyze performance in non-asymptotic data regimes to understand the role of various components of the Transformer architecture, especially in the context of data-efficient training. We validate our theoretical analysis with experiments on both synthetic and real datasets. Machine Learning, Transformer architecture ## 1 Introduction Language models that aim to predict the next token or word to continue/complete a prompt have their origins in the work of Shannon (1948, 1950, 1951). In recent years, neural language models have taken the world by storm, especially the Transformer architecture (Vaswani et al., 2017). Following work on other neural network architectures (Cybenko, 1989), one can show that the Transformer architecture has a _universal approximation_ property for sequence-to-sequence functions (Yun et al., 2020). Transformer architectures have excellent performance and parallelization capability on natural language processing (NLP) tasks, becoming central to several state-of-the-art models including GPT-4 (OpenAI, 2023) and PaLM 2 (Anil et al., 2023). Such large language models (LLMs) are not only very good at the statistical problem of predicting the next token, but also have emergent capabilities in tasks that seemingly require higher-level semantic ability (Wei et al., 2022). Moreover, transformer-based architectures have attracted tremendous attention in domains beyond NLP, such as images (Dosovitskiy et al., 2020), audio (Li et al., 2019), reinforcement learning (Chen et al., 2021), and even multi-modal tasks (Jaegle et al., 2021). Transformers also show cross-domain transfer learning capabilities, i.e., models trained on NLP tasks show good performance when fine-tuned for non-NLP tasks such as image processing. In this sense, the Transformer architecture is said to have a _universal computation_ property (Lu et al., 2021), reminiscent of predictive coding hypotheses of the brain that posit one basic operation in neurobiological information processing (Golkar et al., 2022). The basic predictive workings of Transformers and previous findings of universal approximation and computation properties motivate us to ask whether they also have a _universal prediction_ property in the information-theoretic sense (Feder et al., 1992; Weissman and Merhav, 2001), which itself is well-known to be intimately related to universal data compression (Merhav and Feder, 1998). As far as we know, the predictive capability of Transformers has not been studied in an information-theoretic sense, cf. Gurevych et al. (2022). We investigate not only the underlying mathematical principles that govern the performance of Transformers, but also aim to find limitations to their learning capabilities. We show that Transformers are indeed universal predictors, i.e. they can achieve information-theoretic limits asymptotically in the amount of data available. We also analyze their performance in the finite-data regime by understanding the role of various components of the Transformer architecture, providing theoretical explanations wherever applicable. To summarize our main results, we find the limits to the performance of Transformers and show they are optimal predictors. 
Our limits only assume the Markov nature of data and are otherwise universal. Moreover, we analyze the role of the major components of a Transformer and provide better understanding and directions for their data-efficient training. Finally, we validate our theoretical analysis by performing experiments on both synthetic and real datasets. ## 2 Definitions and Preliminaries ### Finite-State Markov Processes (FSMPs) Let \(x=\{x_{1},\ldots,x_{n}\}\in\mathcal{X}^{n}\) be sequential data, where \(\mathcal{X}\) is some finite set. The state sequence of an FSMP, \(s=\{s_{1},\ldots,s_{n-1}\}\), is generated recursively according to \(s_{i}=g(x_{i},\ldots,x_{(i-k+1)_{+}})\), where \(g(\cdot)\) is the state function for this FSMP, \(s_{i}\in\mathcal{S}\), and \((i)_{+}=\max\left\{i,1\right\}\). An FSMP has a predictor function \(f(\cdot)\) that outputs a probability distribution over possible values for \(x_{i+1}\), i.e. \(\hat{x}_{i+1}=f(s_{i})\in\mathbb{R}^{|\mathcal{X}|}\), \(\sum_{j}f(s_{i})_{j}=1\). Hence, an FSMP is given by the pair \((g,f)\). ### Transformers A Transformer architecture consists of three components placed in a series: an input embedding layer, multiple attention layers, and an output projection matrix. Let us describe each of the subcomponents of a Transformer. **Input embedding layer** Let the input be sequential data \(x=\{x_{1},\dots,x_{n-1}\}\in\mathcal{X}^{n-1}\). The embedding layer \(E\) processes each input symbol individually to give a sequence \(z=\{z_{1},\dots,z_{n-1}\}\in\mathbb{R}^{(n-1)\times d_{in}}\). **Attention layer** The attention layer further consists of two subcomponents: the self-attention mechanism and the position-wise feedforward network. The masked self-attention layer takes input \(X=[x_{1},x_{2},\dots,x_{n-1}]^{\mathsf{T}}\in\mathbb{R}^{(n-1)\times d_{in}}\). Queries (\(Q\)), keys (\(K\)), and values (\(V\)) are computed from \(X\) by multiplying with three corresponding matrices as \(Q=XW_{Q},K=XW_{K}\), and \(V=XW_{V}\), where each matrix \(W_{Q},W_{K}\), and \(W_{V}\) is of dimension \(d_{in}\times d_{model}\). The output of the self-attention layer, \(H^{(0)}=[h_{1}^{(0)},h_{2}^{(0)},\dots,h_{n-1}^{(0)}]\), is given by \[H^{(0)}=\text{Attention}(Q,K,V)=\text{softmax}\left(\text{mask}_{-\infty}\left(\frac{QK^{\mathsf{T}}}{\sqrt{d_{model}}}\right)\right)V,\] where \(\text{mask}_{-\infty}\) is the causal binary mask used to preserve the auto-regressive property of language modeling by ensuring \(h_{i}^{(0)}\) depends only on \(x_{j\leq i}\) by setting all the entries in the matrix \(QK^{\mathsf{T}}\) corresponding to connections to \(x_{j>i}\) to \(-\infty\). This attention mechanism is extended to multi-head attention with \(m\) heads by simply dividing the inputs of dimension \(d_{model}\) into \(m\) sub-parts, computing attention separately, and then concatenating the results. Masked self-attention followed by a position-wise feedforward network, \(FFN\), gives a Transformer decoder layer \(H^{(1)}=[h_{1}^{(1)},h_{2}^{(1)},\dots,h_{n-1}^{(1)}]\). **Output projection matrix** The output projection matrix \(W_{p}\) takes as input a sequence of dimension \(\mathbb{R}^{(n-1)\times d_{model}}\) and outputs probabilities on the output space of dimension \(\mathbb{R}^{(n-1)\times|\mathcal{V}|}\), where \(\mathcal{V}\) is the vocabulary space. An \(L\)-layered Transformer decoder consists of an embedding layer, followed by \(L\) attention layers, followed by an output projection matrix. A single-layered Transformer decoder is shown in Fig. 1. 
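The masked self-attention computation above is compact enough to spell out in code; the following is a minimal single-head numpy sketch for illustration (not the authors' implementation), with multi-head attention obtained by splitting the \(d_{model}\) dimensions into \(m\) sub-parts as described.

```python
import numpy as np

def masked_self_attention(X, W_Q, W_K, W_V):
    """Single-head masked self-attention; X: (n, d_in), W_*: (d_in, d_model)."""
    Q, K, V = X @ W_Q, X @ W_K, X @ W_V
    d_model = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_model)               # (n, n) attention scores
    # causal mask: position i may only attend to positions j <= i
    n = X.shape[0]
    causal = np.tril(np.ones((n, n), dtype=bool))
    scores = np.where(causal, scores, -np.inf)
    # row-wise softmax (the -inf entries get zero weight)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                                # (n, d_model)

rng = np.random.default_rng(0)
n, d_in, d_model = 5, 8, 16
X = rng.normal(size=(n, d_in))
Ws = [rng.normal(size=(d_in, d_model)) for _ in range(3)]
print(masked_self_attention(X, *Ws).shape)  # (5, 16)
```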
## 3 Performance Limits of Transformers Here we provide theoretical limits to the performance of the Transformer architecture. First, we show that the Transformer architecture can be viewed as an approximate FSMP. ### Transformers as approximate FSMPs An FSMP as defined in Sec. 2.1 is given by a function pair \((g,f)\), where \(g\) is a state function that first _aggregates_ certain past observations \((x_{j},\dots,x_{i-1})\), where \(j<i-1\) is a choice for the \(g\) function, and \(f\) is a probability function from the states given by \(g\) to \(\mathbb{R}^{|\mathcal{V}|}\). In an \(L\)-layered Transformer, we model the \(g\) function by the embedding layer \(E\) followed by a sequence of \(L\) attention layers. The \(l\)th attention layer computes the weighted sum of past observations \((h_{j}^{(2l)},\dots,h_{i-1}^{(2l)})\) followed by an \(FFN\). Note that the weighted sum in the attention mechanism is performed in the higher dimension \(d_{model}\), which can retain as much information as concatenation in a lower dimension if \(d_{model}\) is large enough. The output of the embedding layer followed by \(L\) attention layers can be seen as approximating the \(g\) function; call it \(\bar{g}\). This is followed by the output projection matrix \(W_{P}\), which can be seen as approximating the output probability function \(f\); call it \(\bar{f}\). Fig. 2 shows a single-layer Transformer, comparing its components with those of an FSMP. ### Theoretical Limits Here, we use the similarity between Transformers and FSMPs to find the limits of the Transformer architecture. First we provide the training setup and loss criterion for our results. Note that the only assumption we use in the following data generation process is Markovity, and hence the results obtained are otherwise general. We first describe the dataset and the loss criterion. Figure 1: Single-layered Transformer decoder with attention span \(k=3\).
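Before specifying the dataset and loss, the FSMP pair \((g,f)\) that the Transformer approximates can be made concrete; in the sketch below (an illustration under our own choices, not the paper's code), \(g\) is the identity on the last \(k\) symbols and \(f\) is a stored lookup table of conditional distributions.

```python
class FSMP:
    """Minimal finite-state Markov predictor (g, f) over a finite vocabulary."""

    def __init__(self, k, vocab=(0, 1)):
        self.k = k
        self.vocab = vocab
        self.table = {}  # state -> probability vector over vocab

    def g(self, xs):
        # State function: here simply the identity on the last k symbols.
        return tuple(xs[-self.k:])

    def f(self, state):
        # Predictor function: a stored distribution, uniform if unseen.
        uniform = [1.0 / len(self.vocab)] * len(self.vocab)
        return self.table.get(state, uniform)

    def predict(self, xs):
        return self.f(self.g(xs))

fsmp = FSMP(k=2)
fsmp.table[(0, 1)] = [0.2, 0.8]
print(fsmp.predict([1, 0, 0, 1]))  # [0.2, 0.8]
```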
Now, taking \(m\rightarrow\infty\), and using stationarity of \(p(\cdot)\), we define \(\mathcal{L}^{(l,k,n)}_{Test}(\mathcal{D}_{Test})=\lim_{m\rightarrow\infty} \mathcal{L}^{(l,k,m,n)}_{Test}(\mathcal{D}_{Test})\) to get \[\mathcal{L}^{(l,k,n)}_{Test}(\mathcal{D}_{Test})=\] \[-\sum_{(y_{k},x_{k},\ldots,x_{1})\in\{0,1\}^{k}}p(y_{k},x_{k} \ldots,x_{1})\log{(p^{n}_{\theta_{k}}(y_{k}|\bar{g}(x_{k},\ldots,x_{1})))}. \tag{1}\] For fixed data Markov order \(l\) with known state function \(g\) and estimator Markov order \(k\), suppose the minimum loss is obtained by an estimator \(p^{n,*}_{\theta_{k}}\). Then, this minimum loss is \[\mathcal{L}^{(l,k,n),*}_{Test}(\mathcal{D}_{Test})=\] \[-\sum_{(y_{k},x_{k},\ldots,x_{1})\in\{0,1\}^{k+1}}p(y_{k},x_{k}, \ldots,x_{1})\log{(p^{n}_{\theta_{k}}(y_{k}|g(x_{k},\ldots,x_{1})))}. \tag{2}\] Bayesian estimatorAn order-\(k\) Bayesian estimator with a given state function \(g\) is of the form \[p^{n}_{\theta_{k}}(Y_{k+i}=y_{k}|g(X_{k+i},\ldots,X_{1+i})=g(x_{ k},\ldots,x_{1}))\] \[=\begin{cases}\frac{N(y_{k};g(x_{k},\ldots,x_{1}))}{N(g(x_{k}, \ldots,x_{1}))}&\text{if }k>0,\\ \frac{N(y_{0})}{n}&\text{if }k=0,\end{cases} \tag{3}\] where \(N(y_{k};g(x_{k},\ldots,x_{1}))\) denotes the number of counts of \(Y_{k+i}=y_{k}\) and \(g(X_{k+i},\ldots,X_{1+i})=g(x_{k},\ldots,x_{1})\) for \(0\leq i\leq(n-k)\) in the train set, \(\mathcal{D}_{Train}\). Similarly, \(N(g(x_{k},\ldots,x_{1}))\) is the number of occurrences of the subsequence \(g(X_{k+i},\ldots,X_{1+i})=g(x_{k},\ldots,x_{1})\), \(N(y_{0})\) is the number of occurrences of \(Y_{i}=y_{0}\) for \(0\leq i\leq(n-k)\), and \(n\) is the size of the train dataset. Note that the order \(l\) and the state function \(g\) of the source is unknown to the estimator, hence it is not obvious what \(k\) and \(g\) should be chosen for best test performance. Now we state and prove the theorem giving limits to the test loss obtained by any Transformer architecture with attention span \(k\). Thereafter we show the obtained limits are indeed optimal. Proofs are in Appendix A. **Theorem 3.1**.: _Let \(\{\mathcal{D}_{Train},\mathcal{D}_{Test}\}\) be an order-\(l\) Markov dataset with some state function \(g\). Let \(\bar{g}\) be the state function of a Transformer such that \(H(Y_{m}|g(X_{m},\ldots,X_{1}))=H(Y_{m}|\bar{g}(X_{m},\ldots,X_{1}))\) for \(m=k,l\), e.g. choose \(\bar{g}\) as the identity function and let \(p^{n}_{\theta_{k}}\) be the order-\(k\) Bayesian estimator in (3) with state function \(\bar{g}\). Let \(\mathcal{L}^{(l,k,n)}_{Test}(\mathcal{D}_{Test})\) be the corresponding test loss of the Transformer._ _Then, if \(l\leq k\),_ \[\lim_{n\rightarrow\infty}\mathcal{L}^{(l,k,n),*}_{Test}(\mathcal{D}_{Test})=H(Y _{l}|X_{l},\ldots,X_{1}), \tag{4}\] _whereas, if \(l>k\),_ \[\lim_{n\rightarrow\infty}\mathcal{L}^{(l,k,n),*}_{Test}(\mathcal{D}_{Test})=H(Y _{k}|X_{k},\ldots,X_{1}). \tag{5}\] _Here \(H(Y_{m}|X_{m},\ldots,X_{1})\) is the conditional entropy of the distribution \(p(y_{m}|x_{m},\ldots,x_{1})\) for \(m=l,k\)._ _Moreover, \(\mathcal{L}^{(l,k,n)}_{Test}(\mathcal{D}_{Test})\rightarrow\mathcal{L}^{(l,k,n ),*}_{Test}(\mathcal{D}_{Test})\) with convergence error \(\mathcal{O}(\frac{|\mathcal{S}|}{n})\) in expectation in both the cases, (4), (5), where \(\mathcal{S}\) is the range of the state function \(\bar{g}\)._ Now, we show that the limits obtained in Thm. 3.1 are optimal. 
Figure 2: A single Transformer decoder layer with attention span equal to three (\(k=3\)) and its comparison to a finite-state Markov predictor (FSMP). The output of the Transformer decoder, \(h^{l}_{i-1}\), can be interpreted as the output of the state function, \(g(x_{i},x_{i-1},\ldots,x_{i-k+1})\), of an FSMP. The matrix \(W_{P}\) can be interpreted as the predictor function \(f(\cdot)\) of an FSMP that takes the state as input and outputs a probability distribution over the vocabulary space. The embedding layer is ignored for brevity. **Theorem 3.2**.: _Let \(\{\mathcal{D}_{Train},\mathcal{D}_{Test}\}\) be an order-\(l\) Markov dataset with some state function \(g\). Let \(h\) be an arbitrary function that takes as input \((X_{i},\ldots,X_{i+k})\) and outputs a probability \(h(Y_{i+k}|X_{i},\ldots,X_{i+k})\) over the space of \(Y_{i+k}\). Then, the optimal cross-entropy loss obtained by \(h\), \(-\sum_{(y_{k},x_{k},\ldots,x_{1})\in\{0,1\}^{k+1}}p(y_{k},x_{k},\ldots,x_{1})\log\left(h(y_{k}|x_{k},\ldots,x_{1})\right)\), is greater than or equal to \(H(Y_{l}|X_{l},\ldots,X_{1})\) if \(l\leq k\), and greater than or equal to \(H(Y_{k}|X_{k},\ldots,X_{1})\) otherwise._ ### Understanding the limits From the limits in Thm. 3.1, we know that for \(k<l\), increasing the value of \(k\) improves the obtained limit. This is because conditioning reduces entropy, and hence, using the stationarity property, \(H(Y_{k}|X_{k},\ldots,X_{1})\leq H(Y_{k-1}|X_{k-1},\ldots,X_{1})\) (Cover & Thomas, 1999). But for \(k>l\), \(H(Y_{l}|X_{l},\ldots,X_{1})=H(Y_{k}|X_{k},\ldots,X_{1})\). To better understand the limits in Thm. 3.1, we show the performance of an FSMP with the aggregation function \(g\) as the identity function with input span \(k\), and the output probability function \(f\) set as the Bayesian predictor in (3). We test the performance of the FSMP on a synthetic Boolean dataset, where the input is randomly generated binary values and the output at position \(i\) is 1 if the sum of the past \(l\) observations is greater than a threshold, else 0. The main observation to note from Fig. 3 is that for smaller values of \(k\), the convergence is faster compared to larger values, but the value the losses converge to is much worse than for larger values of \(k\). The limits in Thm. 3.1 give us two important design choices for efficiently designing and training Transformers. First, the choice of the state function \(\bar{g}\) and its attention span \(k\). Second, the amount of data, \(n\), to train on. The choice of \(\bar{g}\) and \(k\) must be such that it does not lose any information important to the output, i.e. \(H(Y_{m}|g(X_{m},\ldots,X_{1}))=H(Y_{m}|\bar{g}(X_{m},\ldots,X_{1}))\). One such \(\bar{g}\) can be simply the identity function. The corresponding \(k\) must be chosen large enough, i.e. \(k\geq l\). Note that for all such \(\bar{g}\) and \(k\) the optimal value remains the same as \(n\rightarrow\infty\), but in the finite-data regime, one needs to choose appropriate \(\bar{g}\) and \(k\) to get the best performance. In particular, one needs to choose the \(\bar{g}\) and \(k\) that have the smallest range \(\mathcal{S}\) while also satisfying \(H(Y_{m}|g(X_{m},\ldots,X_{1}))=H(Y_{m}|\bar{g}(X_{m},\ldots,X_{1}))\). Thus, it indicates the need for sparsity in the design. 
For the second choice, \(n\), while it may seem obvious that training on more data would give better performance, we show that with an increase in data, the improvement in performance is significant, even when training for the same number of training steps. Moreover, this also motivates an augmentation technique that helps statistically explain the benefit gained from using relative positional encodings (Shaw et al., 2018). In Appendix B, we further aim to understand Transformers and data-efficient training. In Appendix C, we provide further experiments on real-world data. ## 4 Conclusion Here, we prove that the Transformer architecture yields universal predictors in the information-theoretic sense, which may help explain why this architecture seems to be a kind of state-of-the-art universal computation system. We further analyzed the different components of the Transformer architecture, such as attention weights and positional encodings. Going forward, it may be of interest to draw on information-theoretic work in universal prediction (Merhav & Feder, 1998) to inspire novel neural architectures. Figure 3: Performance of a Finite State Markov Predictor on a Boolean synthetic dataset, where the output is 1 if the sum of the previous \(l\) observations is greater than a threshold, else 0. The plot illustrates the nature of the limit obtained in Thm. 3.1. Smaller values of \(k\) converge faster but to a higher value of test loss compared to larger values.
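For readers who want to reproduce the flavor of Fig. 3, the following sketch implements the Boolean synthetic setup with a count-based estimator in the spirit of (3); the parameter choices \(l=5\), threshold \(2\), and the smoothing constant are our own assumptions, not the paper's exact configuration.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)
l, thresh, n_train, n_test = 5, 2, 20_000, 20_000

def make_data(n):
    # y_i = 1 iff the sum of the last l inputs (ending at x_i) exceeds thresh
    x = rng.integers(0, 2, size=n + l - 1)
    y = np.array([int(x[i - l + 1:i + 1].sum() > thresh)
                  for i in range(l - 1, n + l - 1)])
    return x[l - 1:], y

def test_loss(k, eps=1e-6):
    (xtr, ytr), (xte, yte) = make_data(n_train), make_data(n_test)
    num, den = Counter(), Counter()
    for i in range(k - 1, len(xtr)):
        s = tuple(xtr[i - k + 1:i + 1])          # state: last k inputs
        num[s + (ytr[i],)] += 1
        den[s] += 1
    losses = []
    for i in range(k - 1, len(xte)):
        s = tuple(xte[i - k + 1:i + 1])
        p1 = (num[s + (1,)] + eps) / (den[s] + 2 * eps)  # smoothed count ratio
        losses.append(-np.log(p1 if yte[i] else 1.0 - p1))
    return float(np.mean(losses))

for k in (1, 3, 5, 7):
    print(k, round(test_loss(k), 4))  # loss keeps dropping until k reaches l
```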
2308.09033
Uni-NLX: Unifying Textual Explanations for Vision and Vision-Language Tasks
Natural Language Explanations (NLE) aim at supplementing the prediction of a model with human-friendly natural text. Existing NLE approaches involve training separate models for each downstream task. In this work, we propose Uni-NLX, a unified framework that consolidates all NLE tasks into a single and compact multi-task model using a unified training objective of text generation. Additionally, we introduce two new NLE datasets: 1) ImageNetX, a dataset of 144K samples for explaining ImageNet categories, and 2) VQA-ParaX, a dataset of 123K samples for explaining the task of Visual Question Answering (VQA). Both datasets are derived leveraging large language models (LLMs). By training on the 1M combined NLE samples, our single unified framework is capable of simultaneously performing seven NLE tasks including VQA, visual recognition and visual reasoning tasks with 7X fewer parameters, demonstrating comparable performance to the independent task-specific models in previous approaches, and in certain tasks even outperforming them. Code is at https://github.com/fawazsammani/uni-nlx
Fawaz Sammani, Nikos Deligiannis
2023-08-17T15:15:55Z
http://arxiv.org/abs/2308.09033v2
# Uni-NLX: Unifying Textual Explanations for Vision and Vision-Language Tasks ###### Abstract Natural Language Explanations (NLE) aim at supplementing the prediction of a model with human-friendly natural text. Existing NLE approaches involve training separate models for each downstream task. In this work, we propose Uni-NLX, a unified framework that consolidates all NLE tasks into a single and compact multi-task model using a unified training objective of text generation. Additionally, we introduce two new NLE datasets: 1) ImageNetX, a dataset of 144K samples for explaining ImageNet categories, and 2) VQA-ParaX, a dataset of 123K samples for explaining the task of Visual Question Answering (VQA). Both datasets are derived leveraging large language models (LLMs). By training on the 1M combined NLE samples, our single unified framework is capable of simultaneously performing seven NLE tasks including VQA, visual recognition and visual reasoning tasks with 7\(\times\) fewer parameters, demonstrating comparable performance to the independent task-specific models in previous approaches, and in certain tasks even outperforming them.1 Footnote 1: [https://github.com/fawazsammani/uni-nlx](https://github.com/fawazsammani/uni-nlx) ## 1 Introduction Moving away from general and high-level explanations such as heatmaps [30, 33, 3, 31], Natural Language Explanations (NLE)2 [6, 18] offer a detailed, human-friendly explanation in textual format. Recently, NLE has been extended to encompass vision and vision-language (VL) tasks [21, 39, 16, 11]. The general pipeline comprises a vision model to encode the image, a task model \(M_{T}\) to generate a prediction for the task at hand (_e.g.,_ answer for VQA, class for image classification) and an explainer model \(M_{E}\) which takes the form of a language model to produce an explanation for the prediction via natural text. A subsequent study [27] unifies \(M_{T}\) and \(M_{E}\) into a single compact model that performs both tasks simultaneously by converting all tasks into generative tasks with a single causal language modeling training objective (Figure 1(a)). This greatly reduces the number of parameters and inference time and associates the reasoning process of \(M_{E}\) with the answer prediction process of \(M_{T}\). It also ensures that explainability techniques are applied to the _same_ model responsible for generating the prediction. However, both these approaches require separate finetuning on each NLE task. This results in \(N\) separately-parameterized models for \(N\) tasks of NLE. Moreover, it requires a separate specialized model to perform each task. In this work, we build upon the work of [27] and consolidate all NLE tasks into a single compact model, dubbed Uni-NLX (Figure 1(b)). This unification offers several advantages that previous approaches lack: Firstly, it offers a single model to simultaneously perform all \(N\) NLE tasks, thereby requiring \(N\times\) fewer parameters. Secondly, the integration enables mutual learning among all NLE tasks, as they possess similar reasoning capabilities. Lastly, the shared information across diverse tasks enables greater flexibility in answers and explanations (_e.g.,_ free-form text generation). Figure 1: The current SoTA model (a) [27] unifies the answering and explainer models into a single compact model, training separate models for each of the \(N\) tasks. Our proposed approach (b) takes a further step by unifying all tasks into a single compact model, resulting in \(N\times\) fewer parameters. Our single unified model is capable of simultaneously handling diverse tasks ranging from Visual Question Answering, Visual Recognition and Visual Reasoning. Furthermore, we propose to leverage knowledge from Large Language Models (LLMs) to obtain two additional NLE datasets: _VQA-ParaX_ and _ImageNetX_. VQA-ParaX is 
a re-formulation of long-text captioning datasets (_e.g.,_ Image Paragraph Captioning [12] or Local Narratives [22]) into question-answer-explanation formats using LLMs in a scalable manner. Moreover, LLMs possess vast knowledge about the world, and can be leveraged to obtain fine-grained, distinctive features and descriptions about different objects. ImageNetX is a dataset encompassing such textual data, which are regarded as explanations for ImageNet [13] categories. The integration of these two additional datasets with the existing NLE datasets results in a total of 7 NLE datasets, containing approximately 1M (image, text) pairs. The textual component of these pairs comprises the question, answer, and explanation. By training on these pairs, Uni-NLX achieves performance levels comparable to state-of-the-art task-specific NLE models on 4 tasks, while surpassing them on 3 tasks. Footnote 2: it is worth noting that “explanations” in this context do not refer to explanations of the underlying decision-making process of a model as typical in post-hoc explainability methods, but rather to supplementary information concerning the predicted outcome, incorporated through training. ## 2 Related Work Early works in NLE for vision and vision-language tasks include [10, 21, 14, 39, 16, 11]. They rely on a task model (_e.g.,_ UNITER [7]) for multimodal feature extraction and answer prediction, and an explainer model (_e.g.,_ GPT-2 [25]) to generate an explanation for the prediction. Most recently, NLX-GPT [27] proposed to unify both these models into a single, compact-sized model (_e.g.,_ Distilled-GPT-2) that simultaneously generates and explains an answer using a single causal language modelling objective, while also eliminating the computationally-expensive object-level feature extraction stage [2]. This generative formulation has also proven to be effective in vision-language pretraining methods such as VL-T5 [8], OFA [37] and GIT [36]. Multimodal-CoT [41] builds upon the Chain of Thought Prompting [38] technique and instead generates a rationale (explanation) prior to generating an answer, which serves as a reasoning step for inferring the answer. However, the aforementioned methods require training or finetuning for each task individually, which consequently leads to separately-parameterized models specialized to each task. Different from these methods, our work unifies all tasks into a single compact-sized model, greatly reducing parameters and computational cost. The authors of [17] perform zero-shot visual classification by measuring the similarity between an image and various distinctive textual features that describe the object in the image. These descriptors are obtained from LLMs. However, this approach relies on a strong retrieval model (_e.g.,_ CLIP [24]) and does not have the ability to generate text. Additionally, it is primarily aimed at vision-only tasks. 
In contrast, our method generates flexible free-form answers and explanations for both vision and vision-language tasks. ## 3 Method Following NLX-GPT [27], we formulate the discriminative answer prediction task as a generative text prediction task, along with the explanation. Both the answer and explanation tasks are unified into the model, which outputs a single sequence containing the answer and explanation in a textual form. We first describe how we construct additional NLE datasets, and then elaborate on our multi-task unified model. ### Data Synthesis Strategies We propose to harness the powerful reasoning capabilities of LLMs to formulate two additional NLE datasets: _VQA-ParaX_ and _ImageNetX_, in a scalable manner. We utilize GPT-3 [5] with instructional finetuning [19] (ChatGPT) as our LLM. **VQA-ParaX**: LLMs possess a remarkable ability to read and re-formulate passages, for tasks such as summarization and information extraction. The image paragraph captioning dataset [12] contains 19,561 samples and provides detailed descriptions of images, which allows the LLM to gain a complete understanding of the image solely through the textual description. Using an LLM, we re-formulate the image paragraph captioning dataset into question-answer-explanation formats. We prompt the LLM with \(<\)I, \(S^{i}\)\(>\), where \(S^{i}\) represents the paragraph sample, and I represents the instruction given to the LLM. For each sample \(i\), we formulate \(6\) question-answer-explanation triplets, resulting in approximately 123K triplet samples. The instruction I we use is provided in the supplementary material. **ImageNetX**: ImageNet-1K [13] is a dataset used for image classification containing 1K categories. LLMs possess a wealth of knowledge about the world, which can be harnessed to obtain distinctive features and descriptions of various objects at a granular level. We propose to obtain such textual descriptions from the LLM for the ImageNet-1K categories, which are then regarded as explanations for the class category (answer). We prompt the LLM with \(<\)I, \(c\)\(>\), where I represents the instruction and \(c\in C\) represents the class category for each of the 1K categories \(C\). We generate 50 descriptions for each class \(c\). In order to account for variations in visual representations of the same textual description within a given class, we assign three distinct training images per description for each class. Consequently, this approach yields a dataset of approximately 141K training samples. The remaining 3K textual descriptions are associated with a single image from the ImageNet validation set, and are divided into validation and test sets. The instruction I we use is provided in the supplementary material. We provide further analysis, quality assessment and qualitative samples of these two new datasets in the supplementary material. ### Unifying Explanations To achieve a unified NLE framework across diverse tasks, it is necessary to establish a standardized format of question-answer-explanation. However, certain tasks (_e.g.,_ visual recognition) lack inherent questions. To address this, we introduce a consistent question relevant to each task, such as _"What category is this?"_ for image recognition, _"What action is this?"_ for action recognition, or _"is the following hypothesis true or false?"_ for visual entailment. By employing this unified format, all tasks can be formulated using the sequence \(S\): <question> the answer is <answer> because <explanation>. 
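To make the format concrete, a training sequence \(S\) and its segment ids could be assembled as in the sketch below; this is our own illustration in which whitespace tokenization stands in for the model's actual subword tokenizer, and the assignment of the connective words to the answer and explanation segments is an assumption.

```python
Q_SEG, A_SEG, E_SEG = 0, 1, 2  # one segment embedding per component

def build_sequence(question, answer, explanation):
    # Assemble "<question> the answer is <answer> because <explanation>"
    q_tokens = question.split()
    a_tokens = ["the", "answer", "is"] + answer.split()
    e_tokens = ["because"] + explanation.split()
    tokens = q_tokens + a_tokens + e_tokens
    segments = ([Q_SEG] * len(q_tokens) + [A_SEG] * len(a_tokens)
                + [E_SEG] * len(e_tokens))
    return tokens, segments

# Tasks without an inherent question get a fixed one:
tokens, segments = build_sequence(
    "what category is this?", "golden retriever",
    "it is a medium-sized dog with a dense golden coat")
print(list(zip(tokens, segments))[:6])
```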
The compilation of all available datasets yields a collective corpus of approximately 1M samples. During training, we provide \(S\) as input to the model and predict the answer and explanation components of \(S\) in an autoregressive manner, utilizing a single causal language modeling training objective with cross-entropy loss. During inference, only the question is fed into the model, which subsequently predicts the answer and explanation using greedy decoding. It is worth noting that the answer can also be provided during inference, in which case the model solely generates the explanation. To allow the model to distinguish between the question, answer and explanation components of \(S\), we utilize three different segment embeddings, one for each component.

## 4 Experiments

Our unified dataset comprises seven NLE datasets encompassing visual question answering (VQA), visual recognition and visual reasoning tasks. The VQA tasks consist of VQA-X [21] (33K samples), A-OKVQA [29] (25K samples) and VQA-ParaX (123K samples). The visual recognition tasks include ACT-X [21] (18K samples) for action recognition and ImageNetX (144K samples) for image classification. The visual reasoning tasks comprise e-SNLI-VE [11] (430K samples) for visual entailment and Visual Commonsense Reasoning (VCR) [40] (192K samples). To establish a fair comparison, our model follows NLX-GPT [27], which uses a distilled version [28] of the GPT-2 transformer language model [5] as the answering and explanation model, and the visual encoder of CLIP [24] as the visual backbone. Our model is trained for a maximum of 20 epochs with a batch size of 64 and a learning rate of 2e-5 which decays linearly to 0.

### Quantitative Results

We evaluate our model quantitatively using automatic natural language generation (NLG) metrics (BLEU [20], METEOR [4], ROUGE-L [15], CIDEr [35] and SPICE [1]); all scores are computed with the publicly available code3. Following previous works, the evaluation is carried out in two settings: _filtered_ and _unfiltered_. In the filtered setting, we only consider the explanations for which the predicted answer is correct. In the unfiltered setting, all explanations are considered, irrespective of whether the predicted answer associated with each explanation is true or false.
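As an illustration of these two protocols, a minimal sketch follows; `nlg_score` is a hypothetical stand-in for the public metric code referenced above, and the record field names are assumptions about how predictions might be stored.

```python
# Sketch of the filtered vs. unfiltered evaluation protocols.
# `nlg_score` stands in for the public BLEU/METEOR/ROUGE/CIDEr/SPICE code;
# each record is assumed to carry predicted and ground-truth answers
# alongside the generated and reference explanations.

def evaluate(records, nlg_score, filtered=False):
    if filtered:
        # filtered setting: keep only explanations whose answer is correct
        records = [r for r in records if r["pred_answer"] == r["gt_answer"]]
    hyps = [r["pred_explanation"] for r in records]
    refs = [r["gt_explanations"] for r in records]  # possibly several per sample
    return nlg_score(hyps, refs)
```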
\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{10}{|c|}{VQA-X} \\ \hline & B-1 & B-2 & B-3 & B-4 & M & R & C & S \\ \hline NLX-GPT & **59.1** & **43.8** & **32.2** & **23.8** & **20.3** & **47.2** & **89.2** & **18.3** \\ Uni-NLX & 57.9 & 42.1 & 30.2 & 21.7 & 19.3 & 45.9 & 81.1 & 17.8 \\ \hline \multicolumn{10}{|c|}{ACT-X} \\ \hline NLX-GPT & 64.4 & 47.5 & 34.7 & 25.6 & 21.4 & 48.0 & 63.5 & 15.4 \\ Uni-NLX & **65.4** & **49.1** & **36.0** & **26.5** & **22.0** & **48.5** & **67.7** & **16.7** \\ \hline \multicolumn{10}{|c|}{e-SNLI-VE} \\ \hline NLX-GPT & 34.3 & 22.7 & 15.6 & 10.9 & 17.5 & 31.7 & **106.6** & **31.5** \\ Uni-NLX & **35.3** & **23.6** & **16.5** & **11.8** & **17.8** & **32.2** & 106.5 & 31.3 \\ \hline \multicolumn{10}{|c|}{VQA-ParaX} \\ \hline NLX-GPT & **37.1** & **27.0** & **20.4** & **15.5** & **18.5** & **40.9** & **142.6** & 31.4 \\ Uni-NLX & 35.1 & 25.7 & 19.4 & 14.8 & 18.2 & 40.8 & 139.9 & **31.6** \\ \hline \multicolumn{10}{|c|}{A-OKVQA} \\ \hline NLX-GPT & 55.0 & **39.9** & **29.3** & **20.2** & 16.4 & **46.2** & **64.4** & 15.2 \\ Uni-NLX & **58.2** & 39.6 & 27.6 & 18.5 & **17.1** & 44.0 & 58.1 & **16.0** \\ \hline \multicolumn{10}{|c|}{ImageNetX} \\ \hline NLX-GPT & **64.5** & **48.1** & **36.9** & **28.9** & **22.0** & **39.4** & **87.5** & **22.4** \\ Uni-NLX & 62.9 & 46.3 & 35.2 & 27.4 & 21.4 & 38.7 & 82.8 & 21.3 \\ \hline \multicolumn{10}{|c|}{VCR} \\ \hline NLX-GPT & 18.5 & 9.7 & 5.4 & **3.3** & **9.0** & **19.9** & 24.2 & 12.4 \\ Uni-NLX & **18.7** & **9.9** & **5.7** & **3.5** & **9.0** & **19.9** & **24.7** & **12.5** \\ \hline \end{tabular} \end{table} Table 1: Unfiltered scores for Uni-NLX compared to NLX-GPT [27] on the 7 downstream tasks. Both models are w/o pretraining. B-N, M, R, C, S are short for BLEU-N, METEOR, ROUGE-L, CIDEr and SPICE.
\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{10}{|c|}{VQA-X} \\ \hline & B-1 & B-2 & B-3 & B-4 & M & R & C & S \\ \hline NLX-GPT & **64.2** & **49.5** & **37.6** & **28.5** & **23.1** & **51.5** & **110.6** & **22.1** \\ Uni-NLX & 62.1 & 46.8 & 34.9 & 26.0 & 21.8 & 48.8 & 97.8 & 20.8 \\ \hline \multicolumn{10}{|c|}{ACT-X} \\ \hline & B-1 & B-2 & B-3 & B-4 & M & R & C & S \\ \hline NLX-GPT & **71.6** & 56.2 & 43.2 & **33.5** & **25.7** & **53.7** & **111.8** & **23.3** \\ Uni-NLX & 71.5 & **56.7** & **43.6** & **33.5** & **25.7** & 53.5 & 109.4 & 22.8 \\ \hline \multicolumn{10}{|c|}{e-SNLI-VE} \\ \hline & B-1 & B-2 & B-3 & B-4 & M & R & C & S \\ \hline NLX-GPT & **35.7** & 24.0 & 16.8 & 11.9 & 18.1 & 33.4 & 114.7 & **32.1** \\ Uni-NLX & 35.3 & **24.1** & **17.0** & **12.3** & **18.2** & **33.7** & **115.4** & **32.1** \\ \hline \multicolumn{10}{|c|}{VQA-ParaX} \\ \hline & B-1 & B-2 & B-3 & B-4 & M & R & C & S \\ \hline NLX-GPT & **41.9** & **31.5** & **24.7** & **19.9** & **22.3** & **47.2** & **203.7** & 41.9 \\ Uni-NLX & 41.3 & 31.2 & 24.5 & 19.7 & 22.0 & **47.2** & 203.6 & **42.1** \\ \hline \multicolumn{10}{|c|}{A-OKVQA} \\ \hline & B-1 & B-2 & B-3 & B-4 & M & R & C & S \\ \hline NLX-GPT & **62.3** & **46.8** & **36.1** & **27.7** & **20.5** & **51.5** & **93.0** & 19.3 \\ Uni-NLX & 62.1 & 43.3 & 30.8 & 20.8 & 19.6 & 48.1 & 78.1 & **19.7** \\ \hline \multicolumn{10}{|c|}{ImageNetX} \\ \hline & B-1 & B-2 & B-3 & B-4 & M & R & C & S \\ \hline NLX-GPT & 69.7 & 54.1 & 42.5 & 33.8 & 24.7 & 43.1 & 107.4 & 26.1 \\ Uni-NLX & **71.9** & **56.5** & **45.0** & **36.1** & **25.8** & **44.8** & **117.2** & **27.3** \\ \hline \multicolumn{10}{|c|}{VCR} \\ \hline NLX-GPT & - & - & - & - & - & - & - & - \\ Uni-NLX & 29.7 & 23.4 & 19.9 & 17.4 & 17.1 & 33.6 & 85.7 & 23.5 \\ \hline \end{tabular} \end{table} Table 2: Filtered scores for Uni-NLX compared to NLX-GPT [27] on the 7 downstream tasks. Both models are w/ pretraining.

We utilize the recent state-of-the-art NLX-GPT [27] model as our baseline for evaluating our approach. NLX-GPT also presents results on NLE tasks by finetuning a model pretrained on image captioning. In our study, we consider this setting and utilize the pretrained model provided by the official code4. Table 1 presents the unfiltered results of Uni-NLX without pretraining, while Table 2 reports the filtered results obtained with the pretrained model. Additional results (unfiltered with pretraining and filtered without pretraining) can be found in the supplementary material. In Table 1, Uni-NLX demonstrates superior performance compared to NLX-GPT on ACT-X, e-SNLI-VE, and VCR. Additionally, Uni-NLX achieves performance that is comparable to NLX-GPT across all VQA tasks (VQA-X, VQA-ParaX, and A-OKVQA) and ImageNetX, and surpasses NLX-GPT on certain metrics. Table 2 shows that Uni-NLX outperforms NLX-GPT on e-SNLI-VE and ImageNetX and demonstrates comparable performance on the other tasks, even outperforming NLX-GPT on certain metrics. It is worth noting that NLX-GPT does not present filtered results on VCR.

Footnote 4: [https://github.com/fawazsammani/nlxgpt](https://github.com/fawazsammani/nlxgpt)

### Qualitative Results

Figure 2 shows qualitative results for each of the seven NLE tasks. As observed, our model generates an answer to the given question and image, supported by a detailed explanation.
We discuss limitations such as collapse cases in the supplementary material. For ImageNetX, we additionally show a heatmap visualization obtained from ResNet-18 [9] using Grad-CAM [30], which only displays high-level and general features influencing the prediction. On the other hand, Uni-NLX provides distinctive and fine-grained concepts that influenced the prediction (_e.g.,_ red breast, grayish-brown back, black with white spots) in the form of human-friendly text. Furthermore, the attribution maps associated with these distinctive textual attributes have the potential to represent automatically-extracted prototypes. In Table 3, the generated ImageNetX explanations yield a noteworthy improvement of 1.7% in ImageNet CLIP zero-shot transfer performance with ViT-B/16, without requiring an LLM as in previous works [17, 23]. More information is provided in the supplementary material.

## 5 Conclusion

We proposed Uni-NLX, a unified model which simultaneously performs seven NLE tasks. Leveraging an LLM, we also introduced two additional NLE datasets: VQA-ParaX for the VQA task, and ImageNetX for the ImageNet recognition task. Experiments demonstrate that Uni-NLX achieves performance comparable to task-specific models on certain tasks, while surpassing them on others.

Figure 2: Qualitative Examples of Uni-NLX on the 7 NLE tasks. We show the _question_, **answer** and explanation under each image.
2302.05652
On Structural and Spectral Properties of Distance Magic Graphs
A graph $G=(V,E)$ is said to be distance magic if there is a bijection $f$ from a vertex set of $G$ to the first $|V(G)|$ natural numbers such that for each vertex $v$, its weight given by $\sum_{u \in N(v)}f(u)$ is constant, where $N(v)$ is an open neighborhood of a vertex $v$. In this paper, we introduce the concept of $p$-distance magic labeling and establish the necessary and sufficient condition for a graph to be distance magic. Additionally, we introduce necessary and sufficient conditions for a connected regular graph to exhibit distance magic properties in terms of the eigenvalues of its adjacency and Laplacian matrices. Furthermore, we study the spectra of distance magic graphs, focusing on singular distance magic graphs. Also, we show that the number of distance magic labelings of a graph is, at most, the size of its automorphism group.
Himadri Mukherjee, Ravindra Pawar, Tarkeshwar Singh
2023-02-11T11:12:14Z
http://arxiv.org/abs/2302.05652v2
# Congruence distance magic graphs

###### Abstract

In this paper, we introduce a series of necessary conditions, namely the concept of \(p\)-distance magic labeling, and give a relation between labeling and graph-theoretic properties of a graph. We also prove that a graph which is \(p\)-distance magic for all \(p\) is also distance magic. As a result, we obtain a new necessary and sufficient condition for a graph to be distance magic. We also obtain conditions under which the group distance magic constant is unique.

**Keywords:** Distance magic labeling, Group distance magic labeling, \(p\)-distance magic labeling

**AMS Subject Classification 2021: 05C78.**

## 1 Introduction

Let \(G=(V,E)\) denote a connected, finite, simple graph on \(n\) vertices. Let \(N(u)=\{v\in V:uv\in E\}\) be the neighborhood of a vertex \(u\). We denote the degree of a vertex \(v\) by \(deg(v)\), given by \(deg(v)=|N(v)|\). Let \(f:V\rightarrow\{1,2,\ldots,n\}\) be a bijective function. If the sum \(\sum_{u\in N(v)}f(u)\) is a constant \(k\) for all \(v\in V\), then \(f\) is called a _distance magic labeling_, \(k\) is called a _magic constant_, and \(G\) is called a _distance magic graph_. Arumugam et al. [1] proved that the magic constant of a graph \(G\), if it exists, is unique, i.e., independent of the distance magic labeling. Researchers have studied the existence of distance magic labelings for many families of graphs, but a general characterisation of distance magic graphs is not known yet. For a detailed survey on graph labeling, one can refer to [3]. Dalibor [2] introduced the concept of group distance magic labeling. A _group distance magic labeling_ or a \(\Gamma\)_-distance magic labeling_ of a graph \(G=(V,E)\) is an injection from \(V\) to an abelian group \(\Gamma\) of order \(n\) such that the weight of every vertex \(x\in V\) is equal to the same element \(\mu\in\Gamma\). Group distance magic labeling has some interesting properties. A group distance magic labeling need not admit the uniqueness of the magic constant seen in distance magic labeling. Every distance magic graph is \(\mathbb{Z}_{n}\)-distance magic, but the converse is not necessarily true. Several articles are available on the existence of group distance magic labelings for various families of graphs.

In this paper, we introduce the concept of \(p\)-distance magic labeling as a generalization of distance magic labeling of graphs. These labelings also constitute an infinite class of necessary conditions which, when true for all \(p\), become sufficient. It is well known that there cannot be a subgraph avoidance criterion for the graphs which admit magic labeling (due to the fact that any connected graph can be embedded in a distance magic graph [4, 5]); as a result, the problem of magic labeling does not allow constructing a magic labeling from a labeling on a subgraph. One can say the problem of distance magic labeling is a difficult problem with little room for "construction" in the underlying graph, in the sense that it does not allow "cutting and pasting" type techniques. The \(p\)-distance magic labeling, however, does allow such cutting and pasting techniques, as evident from the application of the Chinese remainder theorem (see Theorem 3.5). The simplification afforded by the existence of a \(p\)-magic labeling, together with the patching of two different such labelings through the Chinese remainder theorem, gives us a constructive approach.
Such possibilities give us hope that, in tackling the problem of distance magic labeling, this "construction" idea through \(p\)-distance magic labeling will prove very important. For sufficiently large values of \(p\), distance magic labeling becomes a particular case of \(p\)-distance magic labeling, and with \(p=n\), \(p\)-distance magic labeling becomes \(\mathbb{Z}_{n}\)-distance magic labeling. In some cases, we provide ways to construct different magic constants in a \(\mathbb{Z}_{n}\)-distance magic labeling. Further, in some cases, we prove the uniqueness of the magic constant in \(\mathbb{Z}_{n}\)-distance magic labeling. Recall that a graph \(G\) is _Eulerian_ if and only if it has at most one nontrivial component and its vertices all have even degrees, and a _matching_ in a graph is an independent subset of a set of edges. Throughout the paper, the ring \(\mathbb{Z}_{n}\) is considered with the usual addition and multiplication modulo \(n\). For graph theoretic terminology and notation, we refer to West [6].

## 2 Known Results

**Theorem 2.1**.: _[_1_]_ _If a graph \(G\) is distance magic, then its magic constant is \(k=\frac{n(n+1)}{2\gamma_{ft}}\)._

## 3 Main Results

A multiset is a set in which repetition of elements is allowed. We formally define the multiset of our interest.

**Definition 3.1**.: Given an integer \(p\geq 1\), by \(\{1,2,\ldots,n\}_{p}\) we mean the _multiset_ obtained by reducing the numbers \(1,2,\ldots,n\) modulo \(p\) and replacing any \(0\)'s by \(p\). Note that for \(p>n\), \(\{1,2,\ldots,n\}_{p}=\{1,2,\ldots,n\}\). For example, with \(p=4\) and \(n=9\), we have the multiset \(\{1,2,3,4,1,2,3,4,1\}_{4}\), obtained by reducing the elements \(1,2,3,4,5,6,7,8,9\) modulo \(4\) and replacing \(0\) by \(4\).

**Definition 3.2**.: Let \(G\) be a graph and let \(p\geq 1\) be an integer. Consider the multiset \(L=\{1,2,\ldots,n\}_{p}\). We call the graph \(G\)\(p\)-_distance magic_ if there is a bijective map \(f\) from the vertex set \(V\) to the multiset \(L\), called a \(p\)_-distance magic labeling_, such that the weight of each vertex \(x\in V\), denoted by \(w(x)\) and given by \(w(x)=\sum_{y\in N(x)}f(y)\), is equal to the same element \(\mu\ (\bmod\ p)\).

For example, consider the graph \(G\) in Figure 1 on 11 vertices. With \(p=2\), we have the multiset \(\{1,2,1,2,1,2,1,2,1,2,1\}_{2}\). Then, with the labeling shown in the figure, \(G\) is 2-magic with magic constant \(0\).

**Theorem 3.1**.: _A graph \(G\) is distance magic if and only if it is \(p\)-distance magic for all \(p\geq 1\)._

Proof.: Suppose that \(G\) is a distance magic graph of order \(n\) with magic constant \(k\) and distance magic labeling \(f\). Let \(p\geq 1\). Define \(f^{*}:V\rightarrow\{1,2,\ldots,n\}_{p}\) by \(f^{*}(v)=f(v)\ (\bmod\ p)\). Therefore, the weight \(w_{f^{*}}(x)\) of every vertex \(x\) in \(V\) is equal to the same element \(k^{*}=k\ (\bmod\ p)\) of \(\mathbb{Z}_{p}\). This proves the necessary part. Conversely, suppose that \(G\) is \(p\)-distance magic for all \(p\geq 1\). In particular, \(G\) admits a \(p\)-distance magic labeling for \(p=\frac{n(n+1)}{2}\). Then \(w(x)=k^{\prime}\) for some \(k^{\prime}\in\mathbb{Z}_{\frac{n(n+1)}{2}}\). But every vertex weight is a sum of distinct labels from \(\{1,2,\ldots,n\}\), hence strictly less than \(\frac{n(n+1)}{2}\), so the weights agree as integers and not merely modulo \(\frac{n(n+1)}{2}\). Hence the \(\frac{n(n+1)}{2}\)-distance magic labeling is the required distance magic labeling.

It follows from Theorem 3.1 that \(p\)-distance magic labeling coincides with distance magic labeling whenever \(p\) is sufficiently large.
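As a small computational companion to Definition 3.2 and Theorem 3.1, the following sketch (plain Python; adjacency given as a dictionary of neighbor lists) checks whether a proposed labeling is a \(p\)-distance magic labeling. It merely illustrates the definitions and is not part of the original exposition.

```python
# Check Definition 3.2: f must be a bijection onto the multiset {1,...,n}_p
# and all open-neighborhood weights must agree modulo p.
from collections import Counter

def is_p_distance_magic(adj, f, p):
    """adj: dict vertex -> iterable of neighbors; f: dict vertex -> label."""
    n = len(adj)
    # labels must realize the multiset {1,...,n} reduced mod p (0 -> p)
    target = Counter(((i - 1) % p) + 1 for i in range(1, n + 1))
    if Counter(f.values()) != target:
        return False
    weights = {sum(f[u] for u in nbrs) % p for _, nbrs in adj.items()}
    return len(weights) == 1

# The 4-cycle v1-v2-v3-v4 with the 2-labeling used later in the paper:
adj = {1: [2, 4], 2: [1, 3], 3: [2, 4], 4: [1, 3]}
print(is_p_distance_magic(adj, {1: 1, 2: 2, 3: 2, 4: 1}, 2))  # True, constant 1
```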
Theorem 3.1 thus shows that \(p\)-distance magic labeling is a generalisation of distance magic labeling. Moreover, some structural characterisations are possible with \(p=2\). Since \(p=2\) is the least possible value, we focus on 2-distance magic labelings. Let \(G\) be a graph whose vertices are labeled under a labeling \(f\) using the numbers in the multiset \(L=\{1,2,\ldots,n\}_{p}\) for some \(p\geq 1\). Let \(V_{i}=\{v\in V:f(v)\equiv i\ (\bmod\ p)\}\), for all \(i=1,2,\ldots,p\). By \(G_{i}\), we mean the subgraph of \(G\) induced by \(V_{i}\), i.e., \(G_{i}=\langle V_{i}\rangle\).

**Theorem 3.2**.: _Let \(G\) admit a \(2\)-distance magic labeling. If_

1. _the magic constant is_ \(0\)_, then the subgraph_ \(G_{1}\) _is an Eulerian graph;_
2. _the magic constant is_ \(1\)_, then the subgraph_ \(G_{1}\) _contains a matching._

Figure 1: 2-magic labeling

Proof.: Suppose a graph \(G\) is 2-distance magic. We prove the theorem in the following two cases.

_Case (i)._ Let the magic constant be 0. If \(G\) itself is Eulerian, there is nothing to prove. Suppose \(G\) is not Eulerian. For the magic constant to be 0, each vertex in \(G_{1}\) must have an even number of neighbors in \(V_{1}\) and may have any number of neighbors in \(V_{2}\). Therefore, each vertex in \(G_{1}\) is of even degree. Hence \(G_{1}\) is an Eulerian graph.

_Case (ii)._ Let the magic constant be 1. Let \(v\in V_{1}\). For \(w(v)=1\), \(v\) must be adjacent to an odd number of vertices in \(V_{1}\). Now we construct a matching, say \(M\). Let \(v_{1}\in V_{1}\) be such that \(vv_{1}\in E(G_{1})\). We add this edge \(vv_{1}\) to the matching \(M\). We repeat the process, which must terminate after a finite number of steps, as \(G_{1}\) is a finite graph. Hence, we obtain a maximal matching \(M\). This proves the theorem.

Figure 2 shows the 2-distance magic labelings of two graphs. The first graph in Figure 2 has magic constant 0, and the subgraph \(G_{1}\) induced by \(V_{1}\), depicted with dotted edges and hollow vertices, is an Eulerian subgraph. The second graph in Figure 2 has magic constant 1, and its \(G_{1}\) contains a matching depicted with dotted edges. Note that in Figure 2 the matching in \(G_{1}\) is not unique, but it is a perfect matching. However, it is not the case that we always obtain a perfect matching. Also, in the second graph of Figure 2, none of the vertices in \(G_{1}\) are of even degree; hence obtaining an Eulerian circuit is not possible.

**Definition 3.3**.: We call a graph \(G\)\(r\)_-modulo \(p\) regular_ if \(deg(u)\equiv r\ (\bmod\ p)\) for every vertex \(u\), and we write \(G\) is \(r\ (\bmod\ p)\)_-regular_.

**Theorem 3.3**.: _If \(G\) is an \(r\ (\bmod\ p)\)-regular, \(p\)-distance magic graph with magic constant \(k\) and \(p\)-distance magic labeling \(f\), then for each positive integer \(i\), the map \(f^{\prime}=f+i\) is also a \(p\)-distance magic labeling with magic constant \(k^{\prime}=(k+ir)\ (\bmod\ p)\)._

Figure 2: 2-magic graphs with an Eulerian subgraph or a matching.

Proof.: Let \(G\) be an \(r\ (\bmod\ p)\)-regular, \(p\)-distance magic graph with magic constant \(k\) and labeling \(f\), and let \(i\in\mathbb{Z}^{+}\). Define the labeling \(g\) by \(g(u)=i+f(u)\). Then, for \(u\in V\),

\[w(u)=\sum_{v\in N(u)}g(v)=\sum_{v\in N(u)}(i+f(v))=i\ deg(u)+\sum_{v\in N(u)}f(v)\equiv(ir+k)\ (\bmod\ p).\]

This completes the proof.

Theorem 3.3 shows that a \(p\)-magic constant need not be unique. Nevertheless, in some cases, one can obtain uniqueness, as described in the following theorem.

**Theorem 3.4**.: _Let \(G\) be a graph on \(n\) vertices.
If \(G\) is \(p\)-distance magic with magic constant \(k\), and if \(\frac{n(n+1)}{2}\) is a unit in the ring \(\mathbb{Z}_{p}\), then \(k\) is unique._

Proof.: Let \(G\) be a graph on \(n\) vertices \(\{x_{1},x_{2},\ldots,x_{n}\}\) having two \(p\)-distance magic labelings \(f\) and \(g\) with respective magic constants \(k\) and \(l\). Let \(\bar{u}\) be the vector with all \(n\) entries equal to \(1\). Let \(A\) be the adjacency matrix of \(G\), and put \(X=(f(x_{1}),\ldots,f(x_{n}))^{\top}\) and \(Y=(g(x_{1}),\ldots,g(x_{n}))^{\top}\). Since \(f\) and \(g\) are \(p\)-distance magic labelings with magic constants \(k\) and \(l\), it follows that \(AX=k\bar{u}\) and \(AY=l\bar{u}\) over \(\mathbb{Z}_{p}\). Since \(X^{\top}AY\) is a \(1\times 1\) matrix, we have \(X^{\top}AY=(X^{\top}AY)^{\top}=Y^{\top}AX\). This gives

\[lX^{\top}\bar{u}=kY^{\top}\bar{u}\implies l(1+2+\cdots+n)=k(1+2+\cdots+n).\]

Since \(1+2+\cdots+n=\frac{n(n+1)}{2}\) is a unit in \(\mathbb{Z}_{p}\), by the cancellation laws we have \(l=k\).

The following corollary of the theorem is evident in the case \(p=2\).

**Corollary 3.4.1**.: _If \(G\) is a \(2\)-distance magic graph of order \(n\geq 3\), where \(n\equiv 1\) or \(2\ (\bmod\ 4)\), with magic constant \(k\), then \(k\) is unique._

**Theorem 3.5**.: _If \(G\) is a \(p\)-distance magic as well as a \(q\)-distance magic graph on \(n\) vertices for some relatively prime integers \(p\) and \(q\) such that \(pq\leq n\), then \(G\) is \(pq\)-distance magic._

Proof.: Let \(G\) be a graph with vertex set \(\{x_{1},x_{2},\ldots,x_{n}\}\) which is both \(p\)-distance magic and \(q\)-distance magic for some relatively prime integers \(p\) and \(q\) such that \(pq\leq n\). Let \(f_{p},f_{q}\) be the corresponding labelings, and \(k_{p},k_{q}\) the corresponding magic constants, respectively. Let \(f_{p}(x_{i})\equiv a_{i}\ (\bmod\ p)\) and \(f_{q}(x_{i})\equiv b_{i}\ (\bmod\ q)\) for each \(1\leq i\leq n\). Since \(p\) and \(q\) are coprime, by the Chinese remainder theorem, the system of congruences

\[y\equiv a_{i}\ (\bmod\ p),\qquad y\equiv b_{i}\ (\bmod\ q)\]

has a unique solution, say \(y_{i}\ (\bmod\ pq)\), for each \(i\) \((1\leq i\leq n)\). Now we define a new labeling \(f_{pq}\) by \(f_{pq}(x_{i})=y_{i}\), for each \(i\) \((1\leq i\leq n)\). The uniqueness of the labels \(y_{i}\) guarantees that \(f_{pq}\) is a bijective map. Now we calculate the weight of a vertex \(x_{i}\in V\) under \(f_{pq}\):

\[w(x_{i})=\sum_{x_{j}\in N(x_{i})}f_{pq}(x_{j})=\sum_{x_{j}\in N(x_{i})}y_{j}\equiv\begin{cases}k_{p}\ (\text{mod }p)\\ k_{q}\ (\text{mod }q)\end{cases}.\]

For each \(i\) \((1\leq i\leq n)\), we again consider the system of congruences

\[w(x_{i})\equiv k_{p}\ (\text{mod }p),\qquad w(x_{i})\equiv k_{q}\ (\text{mod }q), \tag{1}\]

and by the Chinese remainder theorem we conclude that the system (1) has a unique solution, say \(k_{pq}\ (\text{mod }pq)\). This proves that \(f_{pq}\) is the required labeling and \(G\) is a \(pq\)-distance magic graph.
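The constructive step in the proof of Theorem 3.5 is a vertex-wise application of the Chinese remainder theorem; the following minimal sketch (plain Python, brute-force CRT for two coprime moduli, with residue \(0\) represented by \(pq\)) illustrates it. Whether the patched labels actually form the multiset \(\{1,\ldots,n\}_{pq}\) must still be verified, as the example that follows shows.

```python
# Vertex-wise Chinese-remainder patching of a p-labeling and a q-labeling
# (the construction in the proof of Theorem 3.5); p and q must be coprime.

def crt2(a, p, b, q):
    """Smallest x in {1, ..., p*q} with x = a (mod p) and x = b (mod q)."""
    for x in range(1, p * q + 1):
        if x % p == a % p and x % q == b % q:
            return x
    raise ValueError("p and q must be coprime")

def patch_labelings(f_p, p, f_q, q):
    """f_p, f_q: dicts vertex -> label; returns the combined mod-pq labels."""
    return {v: crt2(f_p[v], p, f_q[v], q) for v in f_p}

# The C4 example discussed next: a 2-labeling and a 3-labeling of v1..v4.
f2 = {1: 1, 2: 2, 3: 2, 4: 1}
f3 = {1: 2, 2: 1, 3: 3, 4: 1}
print(patch_labelings(f2, 2, f3, 3))
# {1: 5, 2: 4, 3: 6, 4: 1} -- not contained in {1, 2, 3, 4}, matching the
# discussion below (here pq = 6 > 4 = n, so Theorem 3.5 does not apply).
```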
Given a graph \(G\) on \(n\) vertices which is both \(p\)-distance magic and \(q\)-distance magic for some relatively prime integers \(p\) and \(q\) with \(pq\leq n\), Theorem 3.5 thus yields a \(pq\)-distance magic labeling of \(G\). Note that when \(pq\) is sufficiently large, we obtain a distance magic labeling. However, when \(pq>n\), we need not always obtain a \(pq\)-distance magic graph. For example, consider the cycle \(C_{4}\) with vertex set \(\{v_{1},v_{2},v_{3},v_{4}\}\), taken in the clockwise sense. Define a 2-distance magic labeling of \(C_{4}\) by \(f_{2}(v_{1})=1,f_{2}(v_{2})=2,f_{2}(v_{3})=2,f_{2}(v_{4})=1\) and a 3-distance magic labeling by \(f_{3}(v_{1})=2,f_{3}(v_{2})=1,f_{3}(v_{3})=3,f_{3}(v_{4})=1\), as shown in Figure 3. In Figure 3, the graph \(G_{1}\) shows the 2-distance magic labeling with magic constant 1, and the graph \(G_{2}\) shows the 3-distance magic labeling with magic constant 2 of the same graph \(C_{4}\). For each \(i\) \((1\leq i\leq 4)\), we solve the system

\[x\equiv f_{2}(v_{i})\ (\text{mod }2),\qquad x\equiv f_{3}(v_{i})\ (\text{mod }3) \tag{2}\]

by the Chinese remainder theorem. For \(i=1,2,3,4\), the unique solutions of the system (2) are \(5,4,6,1\), respectively. We label the vertices of \(C_{4}\) using these solutions: \(f_{6}(v_{1})=5,f_{6}(v_{2})=4,f_{6}(v_{3})=6,f_{6}(v_{4})=1\), as shown in the graph \(G_{3}\) of Figure 3. Observe that \(f_{6}\) is not a map from \(V(C_{4})\) to \(\{1,2,3,4\}_{6}=\{1,2,3,4\}\). Thus, in this case, we cannot obtain a 6-distance magic labeling from the 2-distance magic and 3-distance magic labelings. This does not contradict Theorem 3.5, because \((p=2)\times(q=3)=6>4=n\). Hence, the condition \(pq\leq n\) cannot be dropped from the statement of Theorem 3.5. However, it may happen that in some cases each label \(y\ (\text{mod }pq)\) obtained by the procedure described in the above theorem satisfies \(1\leq y\leq n\). We call such a labeling a _consistent_ labeling. In such cases, the labeling is indeed a \(pq\)-distance magic labeling. We state the following proposition without proof, as its proof is quite similar to that of Theorem 3.5.

**Proposition 1**.: _If \(G\) is a \(p\)-distance magic as well as a \(q\)-distance magic graph on \(n\) vertices for some relatively prime integers \(p\) and \(q\) such that \(pq>n\), and if the new labeling obtained as described in the proof of Theorem 3.5 is consistent, then the graph \(G\) is \(pq\)-distance magic._

### Group Distance Magic

The magic constant is not unique in group distance magic labeling. We give a few constructions to obtain multiple magic constants in group distance magic labeling. We omit the proof of the following theorem, as it is similar to the proof of Theorem 3.3.

**Theorem 3.6**.: _If \(G\) is an \(r\ (\operatorname{mod}\ n)\)-regular, \(\mathbb{Z}_{n}\)-distance magic graph with magic constant \(k\ (\operatorname{mod}\ n)\), then for any \(i\), \((ir+k)\ (\operatorname{mod}\ n)\) is also a magic constant._

Thus a magic constant in group distance magic labeling is not unique in general. For a graph \(G\) of order \(n\), however, the uniqueness of the \(\mathbb{Z}_{n}\)-magic constant follows under the hypothesis of Theorem 3.7.

**Theorem 3.7**.: _If a graph \(G\) of order \(n\) is \(\mathbb{Z}_{n}\)-distance magic with magic constant \(k\), and if \(\sum_{x\in\mathbb{Z}_{n}}x\in\mathbb{Z}_{n}^{*}\), then \(k\) is unique._

We now present observations on \(\mathbb{Z}_{n}\)-distance magic labelings. Let \(G\) be a graph on \(n\) vertices. Define \(\mathscr{G}_{n}:=\{f:f\text{ is a }\mathbb{Z}_{n}\text{-distance magic labeling}\}\cup\{\mathbf{0}\}\), where \(\mathbf{0}\) is the map from the vertex set of \(G\) to \(\mathbb{Z}_{n}\) which sends every vertex of \(G\) to the additive identity of \(\mathbb{Z}_{n}\).
**Theorem 3.8**.: _The set \(\mathscr{G}_{n}\) forms a \(\mathbb{Z}_{n}\)-module under the usual addition and scalar multiplication of functions._

Proof.: Since the collection \(M\) of all mappings from the vertex set of \(G\) to \(\mathbb{Z}_{n}\) is a \(\mathbb{Z}_{n}\)-module, it suffices to prove that \(\mathscr{G}_{n}\) is a submodule of \(M\). Let \(f_{1},f_{2}\in\mathscr{G}_{n}\) with magic constants \(k_{1}\) and \(k_{2}\), respectively, and let \(\alpha\in\mathbb{Z}_{n}\) be arbitrary. Set \(f=\alpha f_{1}+f_{2}\). Then for \(u\in V\), the weight of \(u\),

\[w_{f}(u)=\sum_{v\in N(u)}f(v)=\sum_{v\in N(u)}(\alpha f_{1}+f_{2})(v)=\alpha\sum_{v\in N(u)}f_{1}(v)+\sum_{v\in N(u)}f_{2}(v)=\alpha k_{1}+k_{2},\]

is independent of \(u\). Therefore \(\alpha f_{1}+f_{2}\in\mathscr{G}_{n}\). This proves the theorem.

**Observation**.: _The group \(Aut(G)\) of automorphisms of a graph \(G\) acts on the module \(\mathscr{G}_{n}\) under the action \(g\cdot f=f\circ g\), for all \(f\in\mathscr{G}_{n}\) and all \(g\in Aut(G)\). Therefore, \(\mathscr{G}_{n}\) is an \(Aut(G)\)-representation._

Figure 3: The 2-, 3-, and combined 6-labelings \(G_{1}\), \(G_{2}\), \(G_{3}\) of the cycle \(C_{4}\).

## 4 Further directions in research

We raise the following two problems:

_Problem 1_.: Given a graph \(G\) which is not distance magic, find a prime number \(q\) such that \(G\) is not \(q\)-distance magic.

_Problem 2_.: Given a distance magic graph \(G\), characterise all primes \(q\leq\frac{n(n+1)}{2}\) such that \(G\) is a \(q\)-distance magic graph.

Affirmative solutions to Problems 1 and 2 would provide a complete characterisation of distance magic graphs in terms of \(q\)-distance magic labeling.
2303.15768
RobustSwap: A Simple yet Robust Face Swapping Model against Attribute Leakage
Face swapping aims at injecting a source image's identity (i.e., facial features) into a target image, while strictly preserving the target's attributes, which are irrelevant to identity. However, we observed that previous approaches still suffer from source attribute leakage, where the source image's attributes interfere with the target image's. In this paper, we analyze the latent space of StyleGAN and find the adequate combination of the latents geared for face swapping task. Based on the findings, we develop a simple yet robust face swapping model, RobustSwap, which is resistant to the potential source attribute leakage. Moreover, we exploit the coordination of 3DMM's implicit and explicit information as a guidance to incorporate the structure of the source image and the precise pose of the target image. Despite our method solely utilizing an image dataset without identity labels for training, our model has the capability to generate high-fidelity and temporally consistent videos. Through extensive qualitative and quantitative evaluations, we demonstrate that our method shows significant improvements compared with the previous face swapping models in synthesizing both images and videos. Project page is available at https://robustswap.github.io/
Jaeseong Lee, Taewoo Kim, Sunghyun Park, Younggun Lee, Jaegul Choo
2023-03-28T07:03:31Z
http://arxiv.org/abs/2303.15768v1
# RobustSwap: A Simple yet Robust Face Swapping Model

###### Abstract

Face swapping aims at injecting a source image's identity (i.e., facial features) into a target image, while strictly preserving the target's attributes, which are irrelevant to identity. However, we observed that previous approaches still suffer from source attribute leakage, where the source image's attributes interfere with the target image's. In this paper, we analyze the latent space of StyleGAN and find the adequate combination of latents geared toward the face swapping task. Based on the findings, we develop a simple yet robust face swapping model, **RobustSwap**, which is resistant to potential source attribute leakage. Moreover, we exploit the coordination of 3DMM's implicit and explicit information as guidance to incorporate the structure of the source image and the precise pose of the target image. Despite our method solely utilizing an image dataset without identity labels for training, our model has the capability to generate high-fidelity and temporally consistent videos. Through extensive qualitative and quantitative evaluations, we demonstrate that our method shows significant improvements compared with previous face swapping models in synthesizing both images and videos.

## 1 Introduction

Face swapping has become a prominent task with various applications such as digital resurrection, virtual human avatars, and films. The goal of face swapping is to inject a source's identity (_e.g_., eyes, nose, lips, and eyebrows) into a target, while strictly preserving the target's attributes (_e.g_., hair, background, light condition, expression, head pose, and eye gazing), which are irrelevant to identity. Due to the notorious intractability of protecting the target person's attributes against potential interference by the source person's attributes, previous research has endeavored to overcome this challenge. Two primary categories of face swapping approaches exist. In one approach to face swapping, a reconstruction loss between the swapped and target images is employed when the source and target images share the same identity [5, 32, 20, 35, 34, 33]. However, applying the reconstruction loss in certain scenarios necessitates the use of identity-labeled image datasets [24, 4] or video datasets [23, 6]. Unfortunately, it is challenging to obtain high-quality images with identity labels, which limits the applicability of these methods. Moreover, these methods require careful hyperparameter tuning to determine the appropriate ratio between same- and cross-identity images. To synthesize high-resolution images, the other approaches utilize a pre-trained StyleGAN model as a strong prior with layer-wise information injection [41, 22, 34]. Despite the power of the pre-trained StyleGAN, MegaFS [41] and FSLSD [34] often fail to preserve the target person's attributes. This issue stems from utilizing solely the \(\mathcal{W}+\) space for assembling the latent codes in StyleGAN from the source and target images. To preserve the target person's attributes, MFIM [22] replaces the spatial noise maps of StyleGAN with the spatially-dimensioned feature maps of the target image. However, we found that their empirically designed architecture still induces low-fidelity results that are affected by the source person's attributes, such as the source person's hair and eyeglasses.
Although previous studies struggle to balance the information between the source and target images, they are still vulnerable to the **source attribute leakage** problem, defined as _the source person's identity-irrelevant information leaking into the target person's image_. For example, as shown in the first row of Fig. 2, existing face swapping methods often bring the source image's appearance into the target image, such as hair and skin color, which we define as _appearance leakage_. In the second row of Fig. 2, the source's pose (_e.g_., head pose, expression, and eye gazing) interferes with the target's pose, which we define as _pose leakage_. To solve these **source attribute leakages**, we thoughtfully design a simple yet robust face swapping model called **RobustSwap**, which employs a pre-trained StyleGAN [18]. Behind our model, we explore StyleGAN's latent space \(\mathcal{F}/\mathcal{W}+\) to find a promising combination of latents in the subspaces for preventing **source attribute leakage**. Specifically, we investigate the suitable latents by assessing the extent to which the target's pose can be changed at each combination of latents in the subspaces. Armed with this investigation, we elaborately design a face swapping model which is robust in preserving the target image's attributes, while effectively reflecting the source image's identity. To impose the detailed face shape information of the source image, our model takes the source's shape parameter of a 3D Morphable Model (3DMM) [21, 13, 3] as input. In addition to injecting the shape parameters into the model, we introduce a novel partial landmark loss, which is effective in retaining the head pose and expression of the target image while injecting the inner facial geometry of the source image. Thanks to our well-designed simple architecture and the coordination of the 3DMM information, **RobustSwap** is secured from **source attribute leakage** and injects more abundant identity information. Moreover, **RobustSwap** operates at megapixel resolution (_e.g_., 1024 \(\times\) 1024), which is practical and applicable in various applications. In summary, our contributions are three-fold.

* Based on the analysis of StyleGAN latent space, we introduce **RobustSwap**, preserving target attributes while preventing **source attribute leakage**.
* To cast detailed source identity information and the target's precise pose, we propose a shape-guided identity condition and a partial landmark loss with 3DMM.
* Extensive experiments demonstrate that **RobustSwap** outperforms previous approaches quantitatively and qualitatively. Moreover, **RobustSwap** can produce high-quality videos without training on video datasets.

Figure 2: **Examples of source attribute leakage** and our improved results; in the first row, FSLSD [34] often fails to preserve the skin color and lighting condition of the target image, and MFIM [22] brings the hairstyle from the source; in the second row, FSLSD [34] and MFIM [22] hardly preserve the target image's pose, such as eye gazing and expression. Our result has no such artifacts. Yellow boxes indicate _appearance leakage_. Red boxes indicate _pose leakage_.

## 2 Related Work

**Face Swapping.** There are numerous face swapping methods employing identity-labeled datasets. FaceShifter [20] designs its occlusion-aware architecture with two stages. SimSwap [5] devises a robust method via a weak feature-matching loss.
InfoSwap [11] utilizes the information-bottleneck principle for disentangling identity-attribute information. HifiFace [32] first exploits 3DMM's semantic information in face swapping. StyleSwap [35] uses a simple modification of StyleGAN with identity-labeled datasets for training. However, the usability of these methods is restricted due to the challenge of obtaining high-quality images with identity labels or video datasets. Moreover, they necessitate careful hyperparameter tuning to determine the appropriate ratio between same- and cross-identity images. In contrast, **RobustSwap** is trained on a high-quality image dataset [17], eliminating the need to search for the appropriate ratio. To generate high-resolution images, recent face swapping approaches, such as MegaFS [41], FSLSD [34], and MFIM [22], employ a pre-trained StyleGAN [18] as a strong prior. However, we discover that those methods based on the pre-trained StyleGAN fail to prevent the **source attribute leakage** problem. Different from previous studies, we conduct an in-depth experiment to seek the latent space of StyleGAN adapted to face swapping and an appropriate architecture.

**StyleGAN's Latent Space.** StyleGANs [17, 18, 16] have shown remarkable success in generating realistic images. Following this success, the latent space of StyleGAN has been the subject of recent studies exploring various aspects of its properties and dynamics. In previous StyleGAN inversion studies [1, 2, 25, 29], the \(\mathcal{W}\) space is expanded to \(\mathcal{W}+\) to amplify StyleGAN's representation capacity. Moreover, a previous study [14] proposes a method that maps images to an alternative latent space \(\mathcal{F}/\mathcal{W}+\) in StyleGAN, which allows for more accurate reconstruction and semantic editing of out-of-range images with geometric transformations and local variations. Also, numerous recent works [40, 19, 31, 36] utilize the latent feature map space \(\mathcal{F}\), which is spatially aware, to keep spatial information intact while manipulating other traits. They demonstrate the potential of the latent feature map space \(\mathcal{F}\) in StyleGAN for a variety of image manipulation tasks. Inspired by these findings and applications, we investigate the suitability of \(\mathcal{F}/\mathcal{W}+\) for the face swapping task and find which combination of the subspaces is proper from the perspective of face swapping. To achieve this goal, we conduct a detailed experiment to explore the \(\mathcal{F}/\mathcal{W}+\) space of StyleGAN and analyze the subspaces to design a robust face swapping model.

**3D Morphable Models.** A 3D morphable face model (3DMM) [13, 21, 3] is a strong representation for modeling human faces, including head pose, shape, and expression. The 3DMM's shape is transformed into a PCA-based vector space, into which human faces can be fitted. Consequently, corresponding encoders [8, 9, 28] have emerged to alleviate the time-consuming optimization. We utilize the 3DMM's shape parameter from a state-of-the-art 3DMM encoder [9], and the corresponding decoder [21] for our partial landmark loss.

## 3 Method

Given a source identity image \(I_{src}\in\mathbb{R}^{H\times W\times 3}\) and a target attribute image \(I_{tgt}\in\mathbb{R}^{H\times W\times 3}\), our goal is to inject the identity of \(I_{src}\) into \(I_{tgt}\), while preserving the attributes of \(I_{tgt}\), to synthesize the swapped image \(\hat{I}\).
\(H\) and \(W\) indicate the height and width of the image, respectively. We explore the latent subspaces \(\mathcal{F}/\mathcal{W}+\) of StyleGAN [18] to analyze the degree of variation in terms of identity and attributes (Section 3.1). Through this analysis, we find the appropriate combination of latents which can preserve the attributes of \(I_{tgt}\) while reflecting the identity of \(I_{src}\). We then introduce our face swapping model, **RobustSwap**, which is robust to **source attribute leakage** (Section 3.2). Last but not least, we describe the objective functions for our method, including a novel partial landmark loss, which coordinates with 3DMM's implicit shape information (Section 3.3).

### Exploring StyleGAN for Face Swapping.

In this section, we analyze the latent space of StyleGAN [18] from the perspective of developing a face swapping model. Then, we justify the proper combination of latents for a **source attribute leakage**-free model.

Figure 3: **Analysis process of \(\mathcal{F}/\mathcal{W}+\) with pre-trained StyleGAN; we generate _random sampled_ images with a fixed feature map \(\mathbf{F}_{h\times w}^{*}\) and \(\mathbf{w}_{m+}\), and an _anchor_ image is obtained from \(\mathbf{w}_{1+}\).**

**Revisiting Latent Space of StyleGAN.** StyleGAN is a generative model that produces a high-resolution image \(\hat{I}\in\mathbb{R}^{H\times W\times 3}\) from \(n\) identical copies of a vector \(\textbf{w}\in\mathcal{W}\subsetneq\mathbb{R}^{1\times 512}\). Recent work on GAN inversion [14, 31] splits the latent space of StyleGAN into two subspaces: the latent vector space \(\mathcal{W}+\) and the latent feature map space \(\mathcal{F}\). The extended latent vectors \(\{w_{1},w_{2},\cdots,w_{n}\}\in\mathcal{W}+\subsetneq\mathbb{R}^{n\times 512}\), used for the different StyleGAN layers, allow StyleGAN to represent diverse images and give fine-grained control over the generated images. However, due to the deficient spatial information in \(\mathcal{W}+\), it is difficult to reconstruct the structural details of images. To address this problem, the latent spatial feature map \(\mathbf{F}_{h\times w}\in\mathbb{R}^{h\times w\times c}\), which lies in \(\mathcal{F}\), is used to represent the details of spatial information, where \(h\), \(w\), and \(c\) are the height, width, and channel dimensions of the feature map, respectively. With \(\mathcal{W}+\) and \(\mathcal{F}\), StyleGAN is reformulated as:

\[\hat{I}=G(\mathbf{F}_{h\times w},\mathbf{w}_{m+}), \tag{1}\]

where \(\textbf{w}_{m+}=\{w_{m},w_{m+1},\cdots,w_{n}\}\subset\mathcal{W}+\). Note that \(\mathbf{F}_{h\times w}\) and \(\textbf{w}_{m+}\) are complementary to each other.1

Footnote 1: The \(h\times w\) block maps (\(\{w_{m-2},w_{m-1}\}\), \(\mathbf{F}_{h/2\times w/2}\)) to \(\mathbf{F}_{h\times w}\). Please refer to Fig. 3 and Fig. 6 (B).

Motivated by the advantages of \(\mathcal{F}/\mathcal{W}+\), the following question arises: Is it appropriate to map the target spatial attributes to \(\mathcal{F}\) while injecting the source identity via \(\mathcal{W}+\)? However, there is a lack of studies analyzing the suitability of the StyleGAN latent space for face swapping. Thus, we explore the latent space \(\mathcal{F}/\mathcal{W}+\), i.e., the combination of \((\mathbf{F}_{h\times w},\textbf{w}_{m+})\), for building a face swapping model.
**Analysis on \(\mathcal{F}/\mathcal{W}+\) for Face Swapping.** We study the beneficial combination of \(\mathbf{F}_{h\times w}\), containing the spatial attributes of the target, and \(\textbf{w}_{m+}\), embedding the source identity, from the perspective of the face swapping task. To achieve this goal, we conduct the following experiment. As shown in Fig. 3, we fix \(\mathbf{F}_{h\times w}\) (corresponding to the target attributes) at a certain spatial resolution, denoted as \(\mathbf{F}_{h\times w}^{*}\), and generate images with randomly initialized \(\textbf{w}_{m+}\) (corresponding to the identity of the source). Formally, this is denoted as:

\[\hat{I}_{\textbf{w}_{m+}}=G(\mathbf{F}_{h\times w}^{*},\textbf{w}_{m+}), \tag{2}\]

where \(\mathbf{F}_{h\times w}^{*}\) is generated from the fixed vectors \(\{w_{1},\cdots,w_{m-1}\}\). Then, we examine the generated images while gradually increasing the resolution of \(\mathbf{F}_{h\times w}^{*}\) from \(4\times 4\) to \(512\times 512\). Next, we analyze the quantitative factors to be considered in the face swapping task to find a suitable combination. As shown in Fig. 4, as the resolution of the feature map increases, the identity similarity increases, and the head pose, expression, and eye gazing discrepancies decrease between the _anchor_ and _random sampled_ images. We assume that the most adequate combination of \(\mathbf{F}_{h\times w}\) and \(\textbf{w}_{m+}\) for robust face swapping should show low identity similarity together with low head pose, expression, and eye gazing discrepancies between the _anchor_ and _random sampled_ images, since such a combination can change the identity while preserving the pose information. To inspect the three \(\mathbf{F}_{h\times w}^{*}\) with the highest overall scores, from 16 to 64, we visualize the _anchor_ and _random sampled_ images in Fig. 5. (A) varies many attributes, such as expression and eye gazing, and (C) does not vary except for lighting conditions, skin, and background colors. However, (B) varies the inner facial parts, while the pose, eye gazing, and expression remain similar to those of the _anchor_ image. Therefore, we select the combination \((\mathbf{F}_{32\times 32},\textbf{w}_{8+})\), since it effectively preserves the pose, eye gazing, and expression while changing identity-relevant features such as the eyes, nose, lips, and eyebrows. This implies that as long as we utilize \(\mathbf{F}_{32\times 32}\) and \(\mathbf{w}_{8+}\), the source identity is well-reflected while minimizing damage to the target attributes.

Figure 4: **Quantitative analysis** between _anchor_ and _random sampled_. A larger \(\mathbf{F}_{h\times w}^{*}\) results in improved preservation of expression, head pose, and eye gazing, while the identity undergoes less change. The overall score is calculated by (ID sim)\({}^{3}\) \(*\) (HP dis) \(*\) (Exp dis) \(*\) (EG dis), which is standardized. Details for each score metric are described in the supplementary materials.

Figure 5: **Qualitative analysis.** Examples from the analysis of the latent space for face swapping. An _anchor_ image is obtained from the inverted vectors \(\textbf{w}_{1+}\) using a GAN inversion method [25]. _Random sampled_ images of (A) are generated from the fixed feature map \(\mathbf{F}_{16\times 16}^{*}\) and randomly initialized \(\textbf{w}_{6+}\). (B)'s _random sampled_ images are produced by \(\mathbf{F}_{32\times 32}^{*}\) and \(\textbf{w}_{8+}\). _Random sampled_ images of (C) are obtained from \(\mathbf{F}_{64\times 64}^{*}\) and \(\textbf{w}_{10+}\).
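To make the probing procedure of Eq. (2) concrete, a PyTorch-style sketch is given below. Note that `G.mapping`, `G.features_up_to`, and `G.synthesis_from` are hypothetical handles into a StyleGAN2 generator exposing per-layer styles and intermediate feature maps; the official StyleGAN2 code does not provide this interface under these names, so this is an assumption-laden illustration rather than reference code.

```python
# Sketch of the Eq. (2) probing experiment: fix F*_{h x w}, then resample
# only the remaining styles w_{m+}. `mapping`, `features_up_to`, and
# `synthesis_from` are hypothetical stand-ins for a StyleGAN2 wrapper.
import torch

@torch.no_grad()
def probe_combination(G, m, num_samples=8, device="cuda"):
    z = torch.randn(1, 512, device=device)
    w_anchor = G.mapping(z)                        # (1, n, 512) styles w_1+
    feat = G.features_up_to(w_anchor, layer=m)     # F*_{h x w} from w_1..w_{m-1}
    anchor = G.synthesis_from(feat, w_anchor[:, m - 1:])   # anchor image

    samples = []
    for _ in range(num_samples):
        w_rand = G.mapping(torch.randn(1, 512, device=device))
        # keep F* fixed; only the styles from layer m onward are resampled
        samples.append(G.synthesis_from(feat, w_rand[:, m - 1:]))
    return anchor, samples
```

Identity similarity and the pose, expression, and eye gazing discrepancies between the anchor and the resampled images would then be measured with external estimators, as summarized in Fig. 4.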
### RobustSwap: Simple yet Robust Architecture for Face Swapping.

As shown in Fig. 6, we utilize the pre-trained StyleGAN without any architectural modification, since \((\mathbf{F}_{32\times 32},\mathbf{w}_{8+})\) preserves the target attributes while switching the identity.

**Target Attributes Encoder.** Our generation pipeline starts from \(\mathbf{F}_{32\times 32}\), which is encoded from \(I_{tgt}\). To directly map the spatial information of \(I_{tgt}\) to \(\mathbf{F}_{32\times 32}\), we design a simple convolutional target encoder \(E_{t}\). The \(4\times\) down-sampled \(I_{tgt}^{\downarrow}\) is fed to \(E_{t}\), and the encoded features \(\mathbf{F}_{32\times 32}=E_{t}(I_{tgt}^{\downarrow})\) are conveyed to the StyleGAN \(G\).

**Source Identity Encoder.** Since \(\mathbf{w}_{8+}\) has the potential of injecting the source's identity information into the target without damaging the target's attributes, we map \(I_{src}\) to the source identity embedding \(\mathbf{w}_{id}^{+}=E_{i}(I_{src}^{\downarrow})\) using the source identity encoder \(E_{i}\). Here, we utilize the pSp encoder [25] as the source identity encoder \(E_{i}\) to map the overall identity attributes of the source to the \(\mathcal{W}+\) space.

**Shape-Guided Identity Injection.** Additionally, we exploit the 3DMM parameter space to focus on the source's structural information. To be specific, we leverage the 3DMM's shape parameter extracted by the shape encoder \(E_{s}\), a state-of-the-art 3DMM encoder [9]. Here, we only utilize the shape parameter, since \(G\) already has the capability of preserving the target image's pose by employing \(\mathbf{F}_{32\times 32}\). Then, a mapping network \(M:\mathcal{A}\rightarrow\mathcal{W}+\) produces \(\mathbf{w}_{shape}^{+}\) from the 3DMM's shape parameter \(\alpha\in\mathcal{A}\):

\[\mathbf{w}_{shape}^{+}=M(\alpha)=M(E_{s}(I_{src})). \tag{3}\]

Finally, \(\mathbf{w}_{8+}\) is constructed as the summation of the shape embedding \(\mathbf{w}_{shape}^{+}\) and the identity embedding \(\mathbf{w}_{id}^{+}\). Formally,

\[\mathbf{w}_{8+}=\mathbf{w}_{shape}^{+}+\mathbf{w}_{id}^{+}, \tag{4}\]

where \(\mathbf{w}_{shape}^{+}\) is broadcast to the same size as \(\mathbf{w}_{id}^{+}\). To summarize, our pipeline is described as

\[\hat{I}=G(\mathbf{F}_{32\times 32},\mathbf{w}_{8+}). \tag{5}\]

### Objective Functions

**Partial Landmark Loss.** To encourage cooperation between the 3DMM's implicit and explicit information, we propose a partial landmark loss, focusing only on 51 designated landmarks out of 68, which supervise the source's inner facial shape. Moreover, such supervision also guides the model toward more precise expression and head pose. To construct the ground truth of the partial landmarks, we mix the target image's head pose and expression parameters with the source image's shape parameter, and then feed the mixed parameters to the 3DMM decoder (_i.e._, FLAME [21]), reconstructing the mesh \(Mesh^{mix}\), which is composed of 5023 vertices. More details are described in the supplementary materials.

\[\mathcal{L}_{pl}=\sum_{(i,j)\in Lmk}\left\|Mesh_{i}^{mix}-Mesh_{j}^{swap}\right\|, \tag{6}\]

where \(Lmk\) is the set of inner face landmark pairs, and \(Mesh^{swap}\) represents the swapped image's mesh extracted via \(E_{s}\) and the 3DMM decoder.

Figure 6: (A) Our **RobustSwap** architecture; the blurred trapezoidal box is the area of the discarded blocks of StyleGAN. The target encoder \(E_{t}\) encodes \(I_{tgt}^{\downarrow}\) to \(\mathbf{F}_{32\times 32}\). The encoded \(\mathbf{w}_{8+}\) from the two source encoders \(E_{i}\) and \(E_{s}\) is injected into StyleGAN \(G\). (B) illustrates the details of the 64 \(\times\) 64 Block. It produces \(\mathbf{F}_{64\times 64}\) from \(\{\mathbf{w}_{8},\mathbf{w}_{9}\}\) and \(\mathbf{F}_{32\times 32}\). (C) is the construction pipeline of the ground truth for the partial landmark loss. More details are described in our supplementary materials.
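A minimal sketch of Eq. (6) is given below; `encode_3dmm` and `flame_decode` are hypothetical wrappers around the 3DMM encoder [9] and the FLAME decoder [21], and `LMK_IDX` is an assumed list of the mesh-vertex indices of the 51 inner-face landmarks — none of these names come from the released code.

```python
# Sketch of the partial landmark loss (Eq. 6). `encode_3dmm` and
# `flame_decode` are hypothetical stand-ins for the 3DMM encoder [9] and
# FLAME decoder [21]; LMK_IDX is the assumed index list of the 51
# inner-face landmark vertices (out of 68 landmarks).
import torch

def partial_landmark_loss(I_src, I_tgt, I_swap,
                          encode_3dmm, flame_decode, LMK_IDX):
    shape_s = encode_3dmm(I_src)["shape"]          # source face shape
    tgt = encode_3dmm(I_tgt)                       # target pose / expression
    # ground truth: source shape combined with target pose and expression
    mesh_mix = flame_decode(shape=shape_s, pose=tgt["pose"], expr=tgt["expr"])
    sw = encode_3dmm(I_swap)                       # parameters of swapped image
    mesh_swap = flame_decode(shape=sw["shape"], pose=sw["pose"], expr=sw["expr"])
    # distance over the 51 inner-face landmark vertices only
    diff = mesh_mix[:, LMK_IDX] - mesh_swap[:, LMK_IDX]   # (B, 51, 3)
    return torch.linalg.norm(diff, dim=-1).mean()
```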
**Reconstruction Loss.** We adopt a reconstruction loss for regularizing the attributes of \(\hat{I}\) with \(I_{tgt}\). This part is composed of two losses, an \(L_{2}\) loss and the LPIPS [38] loss:

\[\mathcal{L}_{recon}=\|I_{tgt}-\hat{I}\|_{2}+LPIPS(I_{tgt},\hat{I}). \tag{7}\]

**Identity Loss.** The identity loss ensures that \(\hat{I}\) has the same identity as \(I_{src}\):

\[\mathcal{L}_{id}=1-\text{cossim}(R(I_{src}),R(\hat{I})), \tag{8}\]

where \(R\) is the pretrained face recognition model ArcFace [7]. The notation \(\text{cossim}(\cdot,\cdot)\) represents the cosine similarity between the ArcFace embeddings.

**Adversarial Loss.** The adversarial loss makes the model generate a realistic \(\hat{I}\). We directly use StyleGAN's [18] non-saturating adversarial loss \(\mathcal{L}_{adv}\). The detailed description is in the supplementary materials.

**Total Objective. RobustSwap** is trained with the following total objective function:

\[\mathcal{L}_{total}=\lambda_{pl}\mathcal{L}_{pl}+\lambda_{recon}\mathcal{L}_{recon}+\lambda_{id}\mathcal{L}_{id}+\lambda_{adv}\mathcal{L}_{adv}, \tag{9}\]

where \(\lambda_{pl}\), \(\lambda_{recon}\), \(\lambda_{id}\) and \(\lambda_{adv}\) are hyper-parameters.

## 4 Experiments

**Datasets.** We train our model only on FFHQ [17], without any identity-labeled or video datasets, different from previous methods [11, 35, 5, 32]. We evaluate our method on the CelebA-HQ [15] validation set and the FaceForensics++ (FF++) [26] dataset. For CelebA-HQ, we randomly sample 10,000 pairs of source and target images. For FF++, we randomly select video pairs for qualitative evaluation. Note that the quantitative evaluations are conducted only on CelebA-HQ, a high-resolution image dataset.

**Baselines.** We compare our method with the following face swapping baselines: SimSwap [5], InfoSwap [11], HifiFace [32], MegaFS [41], FSLSD [34] and MFIM [22]. We utilize an unofficial implementation for HifiFace and reimplement MFIM, strictly following the original paper.

**Implementation Details.** Our model is trained with a batch size of 8 on an NVIDIA A100 GPU at megapixel resolution for about 5 days. We use the ADAM optimizer with a learning rate of \(1\times 10^{-4}\). \(\lambda_{id}\) and \(\lambda_{recon}\) are set to 1, \(\lambda_{pl}\) is set to 100, and \(\lambda_{adv}\) is set to \(10^{-2}\).

### Quantitative Evaluation

**Evaluation Metrics.** For the quantitative evaluation, source and target image pairs are randomly sampled without duplication from CelebA-HQ [15]. We measure five metrics widely used for evaluating face swapping methods: Identity, Expression, Head Pose, Head Pose-HN, and Frechet Inception Distance (FID) [12]. In addition to these five metrics, we employ two new metrics: **Masked-L1** and **Eye Gazing**. The Identity score is the cosine similarity between the embedding vectors of \(I_{src}\) and \(\hat{I}\) extracted by a pre-trained face recognition model [30], where we utilize a model different from the one used for the identity objective function. The Expression and Head Pose scores are calculated by measuring the \(L1\) distance between the expression and head pose blendshape parameters of \(I_{tgt}\) and \(\hat{I}\) extracted by another pre-trained 3DMM encoder [28]. We measure the Head Pose-HopeNet (HN) score by computing the \(L1\) distance between \(I_{tgt}\) and \(\hat{I}\) using a pre-trained head pose estimator [27]. We also measure FID between the 10,000 \(\hat{I}\) and real images of CelebA-HQ. **Masked-L1** measures the difference in the skin and hair area, excluding identity attributes, between \(\hat{I}\) and \(I_{tgt}\). Specifically, we utilize a pre-trained face parsing map predictor [37] for **Masked-L1** to extract only the skin and hair area of \(I_{tgt}\) and \(\hat{I}\). Then, we measure the \(L1\) distance over the pixels of the designated area. We employ **Masked-L1** to measure the **source attribute leakage** of appearances such as hair, glasses, and skin color. Moreover, we utilize a pre-trained eye gazing estimator [10] to evaluate the eye gazing of the swapped image \(\hat{I}\). Specifically, we compute the \(L1\) distance between the eye gazing angles (_i.e._, yaw and pitch) of \(I_{tgt}\) and \(\hat{I}\). The details of these two metrics are described in the supplementary materials.
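The Masked-L1 metric can be sketched as follows; the `parse` call is a hypothetical wrapper around the face parser [37], and the skin/hair label ids are assumptions rather than the parser's actual values.

```python
# Sketch of the Masked-L1 metric: L1 distance over skin and hair pixels only.
# `parse` is a hypothetical wrapper around the face parser [37] returning a
# per-pixel label map; the SKIN/HAIR label ids are assumptions.
import torch

def masked_l1(I_swap, I_tgt, parse, skin_id=1, hair_id=13):
    labels = parse(I_tgt)                              # (H, W) label map
    mask = (labels == skin_id) | (labels == hair_id)   # identity-irrelevant area
    mask = mask[None].expand_as(I_tgt)                 # broadcast over channels
    return (I_swap[mask] - I_tgt[mask]).abs().mean()
```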
**Comparison with Baselines.** Table 1 reports the quantitative comparison between the baselines and **RobustSwap**. **RobustSwap** achieves state-of-the-art performance compared to the other face swapping baselines, except for the Identity score. Although MFIM shows the best Identity score, the synthesized images of MFIM show severe **source attribute leakage**, such as vanished hair and incorrect eye gazing and expression, as shown in Fig. 2.

\begin{table} \begin{tabular}{l|c|c|c|c|c|c|c} \hline \hline Methods & Identity\(\uparrow\) & Expression\(\downarrow\) & Head Pose\(\downarrow\) & Head Pose-HN\(\downarrow\) & FID\(\downarrow\) & Masked-L1\(\downarrow\) & Eye Gazing\(\downarrow\) \\ \hline SimSwap & 0.502 & 0.168 & 0.016 & 2.345 & 34.84 & 0.046 & 0.065 \\ InfoSwap & 0.557 & 0.196 & 0.021 & 3.533 & 15.75 & 0.067 & 0.068 \\ HifiFace & 0.515 & 0.210 & 0.021 & 3.486 & 30.91 & 0.040 & 0.070 \\ MegaFS & 0.386 & 0.200 & 0.036 & 9.559 & 24.20 & 0.076 & 0.076 \\ FSLSD & 0.339 & 0.207 & 0.025 & 4.318 & 12.41 & 0.046 & 0.081 \\ MFIM & **0.715** & **0.160** & 0.029 & 5.660 & 15.61 & 0.072 & 0.075 \\ \hline Ours & 0.649 & **0.160** & **0.014** & **1.935** & **10.37** & **0.038** & **0.062** \\ \hline \hline \end{tabular} \end{table} Table 1: **Quantitative results** for comparison with baselines. **Bold** indicates the best score. Underline indicates the second-best score. MFIM achieves the best Identity score, but its handcrafted architecture cannot prevent **source attribute leakage** with respect to **Head Pose**, **Masked-L1** and **Eye Gazing**.

**Ablation Studies.** As shown in Table 3, we compare the performance of our model across different resolutions of \(\mathbf{F}_{h\times w}\) (_i.e._, from \(8\times 8\) to \(64\times 64\)). Considering the _1st_ to _4th_ rows of our methods, we observe the same tendency as in Sec. 3.1: the larger the resolution of \(\mathbf{F}_{h\times w}\), the lower the expression and head pose errors, and the lower the identity score. Since the shape-guided identity injection and partial landmark loss boost the identity injection of the source image, Ours full achieves a higher identity score than Ours \(16\times 16\). The lower expression and head pose scores demonstrate that the proposed techniques are also effective in preserving the target attributes.

**User Studies.** We further evaluate our model and three recent baselines [32, 34, 22] via a user study on synthesizing images and videos.
**Comparison with Baselines.** Table 1 reports the quantitative comparison between the baselines and **RobustSwap**. **RobustSwap** achieves state-of-the-art performance compared with the other face swapping baselines, except for the Identity score. Although MFIM shows the best Identity score, the synthesized images of MFIM show severe **source attribute leakage**, such as vanished hair and incorrect eye gazing and expression, as shown in Fig. 2.

\begin{table} \begin{tabular}{l|c|c|c|c|c|c|c} \hline \hline Methods & Identity\(\uparrow\) & Expression\(\downarrow\) & Head Pose\(\downarrow\) & Head Pose-HN\(\downarrow\) & FID\(\downarrow\) & Masked-L1\(\downarrow\) & Eye Gazing\(\downarrow\) \\ \hline SimSwap & 0.502 & 0.168 & 0.016 & 2.345 & 34.84 & 0.046 & 0.065 \\ InfoSwap & 0.557 & 0.196 & 0.021 & 3.533 & 15.75 & 0.067 & 0.068 \\ HifiFace & 0.515 & 0.210 & 0.021 & 3.486 & 30.91 & 0.040 & 0.070 \\ MegaFS & 0.386 & 0.200 & 0.036 & 9.559 & 24.20 & 0.076 & 0.076 \\ FSLSD & 0.339 & 0.207 & 0.025 & 4.318 & 12.41 & 0.046 & 0.081 \\ MFIM & **0.715** & **0.160** & 0.029 & 5.660 & 15.61 & 0.072 & 0.075 \\ \hline Ours & 0.649 & **0.160** & **0.014** & **1.935** & **10.37** & **0.038** & **0.062** \\ \hline \hline \end{tabular} \end{table} Table 1: **Quantitative results** for comparison with baselines. **Bold** indicates the best score. Underline indicates the second-best score. MFIM achieves the best Identity score, but its handcrafted architecture could not prevent **source attribute leakage** in terms of **Head Pose**, **Masked-L1** and **Eye Gazing**.

**Ablation Studies.** As shown in Table 3, we compare the performance of our model across different resolutions of \(\mathbf{F}_{h\times w}\) (_i.e._, from \(8\times 8\) to \(64\times 64\)). Considering the _1st_ to _4th_ rows of our methods, we observe the same tendency as in Sec. 3.1: the larger the resolution of \(\mathbf{F}_{h\times w}\), the lower the expression and head pose errors, and the lower the identity score. Since shape-guided identity injection and the partial landmark loss boost the identity injection of the source image, Ours full achieves a higher identity score than Ours \(16\times 16\). The lower expression and head pose scores demonstrate that the proposed techniques are also effective in preserving the target attributes.

\begin{table} \begin{tabular}{l|c|c|c} \hline Methods & Identity\({}^{\uparrow}\) & Expression\({}^{\downarrow}\) & Head Pose\({}^{\downarrow}\) \\ \hline Ours \(8\times 8\) & **0.684** & 0.223 & 0.026 \\ Ours \(16\times 16\) & 0.640 & 0.205 & 0.022 \\ Ours \(32\times 32\) & 0.620 & 0.184 & 0.018 \\ Ours \(64\times 64\) & 0.595 & 0.166 & 0.016 \\ \hline Ours full (\(32\times 32\)) & 0.649 & **0.160** & **0.014** \\ \hline \end{tabular} \end{table} Table 3: **Quantitative ablation studies**. Note that Ours full denotes the version with shape-guided identity injection and partial landmark loss added.

**User Studies.** We further evaluate our model and three recent baselines [32, 34, 22] via a user study on synthesized images and videos. The participants evaluated 11 swapped image samples from CelebA-HQ and 6 video samples from CelebV-HQ [39]. The users are asked to score the quality of the swapped images and videos according to the following criteria: 1) Identity similarity and Attribute preservation (ID sim & Att pre); 2) Naturalness; and 3) Quality. We designate the highest score to be 4 and the lowest score to be 1 for each criterion.

\begin{table} \begin{tabular}{|l|c|c|c|c|c|c|} \hline \multirow{2}{*}{Method} & \multicolumn{2}{c|}{ID sim \& Att pre\({}^{\uparrow}\)} & \multicolumn{2}{c|}{Naturalness\({}^{\uparrow}\)} & \multicolumn{2}{c|}{Quality\({}^{\uparrow}\)} \\ & Image & Video & Image & Video & Image & Video \\ \hline HifiFace & 1.545 & 1.928 & 1.259 & 1.785 & 1.298 & 1.904 \\ FSLSD & 2.194 & 1.746 & 2.311 & 1.642 & 2.370 & 1.666 \\ MFIM & 2.168 & 2.500 & 1.857 & 2.095 & 1.935 & 2.190 \\ \hline **RobustSwap** & **2.857** & **2.904** & **2.987** & **3.293** & **2.974** & **3.273** \\ \hline \end{tabular} \end{table} Table 2: **User studies**. A larger score indicates better performance, and the range of each criterion's score is from 1 to 4.

Figure 7: **Qualitative results** on \(256\times 256\) resolution FF++; (A) Source, (B) Target, **(C) RobustSwap**, (D) MFIM, (E) FSLSD, (F) MegaFS, (G) HifiFace, (H) InfoSwap, and (I) SimSwap. More results are in our supplementary materials.

Figure 8: **Qualitative results** on \(1024\times 1024\) resolution CelebA-HQ with megapixel baselines.

Table 2 shows that our method achieves the best score on every criterion, demonstrating that our results are the most plausible under human perceptual evaluation. Notably, our video scores are higher than those of the other baselines by a large margin, even though we do not train on any video datasets. These results indicate that preventing **source attribute leakage** is also crucial for synthesizing temporally consistent videos.

### Qualitative Evaluation

**Comparison with Baselines.** We compare our **RobustSwap** and the baselines on the CelebA-HQ and FF++ datasets. As shown in Fig. 7, SimSwap sometimes shows **source attribute leakage**, such as bringing in the source's hairlines, with low-quality results. InfoSwap and HifiFace often fail to retain the target image's expression and eye gazing. In contrast, our method generates more perceptually convincing swapped images without **source attribute leakage**. In Fig. 8, we compare our **RobustSwap** with the megapixel models [22, 34, 41]. Although MFIM and FSLSD reflect the source identity well, they often produce visual artifacts like _appearance leakage_ (_e.g._, hairstyle and eyeglasses) and _pose leakage_ (_e.g._, incorrect eye gazing and expression). MegaFS generates images with inaccurate skin color.
On the other hand, our method robustly changes the target face into the source's, almost without **source attribute leakage**, following the target image's eye gazing and expression and showing no texture leakage from the source image. Moreover, as shown in Fig. 10, we compare the three recent baselines with our method on video generation. While the baselines show the _pose_ and _appearance leakages_, **RobustSwap** is robust to _source attribute leakage_ even in videos. Notably, these results show that suppressing _source attribute leakage_ is also crucial for synthesizing temporally consistent videos in the face swapping task.

**Ablation Studies.** As shown in Fig. 9, Ours \(8\times 8\) and Ours \(16\times 16\) hardly preserve the hat, hairstyle, skin color, eye gazing, and expression of the target image. In contrast, Ours \(64\times 64\) faithfully follows the appearance and pose of the target, but the identity of the result is quite heterogeneous with the source. Ours \(32\times 32\) balances the target's attributes and the source's identity, which demonstrates that our analysis in Sec. 3.1 is effective for finding the proper combination of latent spaces. While the performance of Ours \(32\times 32\) is commendable, there is room for improvement in accurately preserving the pose of the inner face region. Therefore, by leveraging the shape-guided identity injection and the partial landmark loss, Ours full preserves more detailed expression and head pose than Ours \(32\times 32\) while simultaneously better reflecting the source's inner face shape.

Figure 10: **Qualitative results** on \(512\times 512\) resolution CelebV-HQ; (A) Target frames, (B) HifiFace, (C) FSLSD, (D) MFIM, **(E) RobustSwap**. Please note the yellow and red arrows. Baselines suffer from **source attribute leakage**.

Figure 9: **Qualitative ablation studies of \(\mathbf{F}_{h\times w}\)** and Ours full. Ours \(8\times 8\) indicates that we embed target attributes into \(\mathbf{F}_{8\times 8}\) and use \(\mathbf{w}_{4+}\) for injecting the identity attributes of the source. Please note the yellow and red boxes. Ours full improves on Ours \(32\times 32\) in preserving the target image's pose, such as eye gazing and expression, and in reflecting the source image's shape.

## 5 Conclusion

In this paper, we propose a robust face swapping model, **RobustSwap**, which solves the **source attribute leakage** problem. We analyze the latent space of StyleGAN for face swapping and ultimately develop a simple yet robust face swapping model without any architectural modification of StyleGAN, which is easy to train and implement. We believe that our model can be extended to other combinations of subspaces, not limited to the face swapping task. We further utilize the explicit and implicit information of the 3DMM to provide more detailed source identity information and a more precise target pose. Our experiments show that **RobustSwap** compares favorably with previous face swapping models. Additionally, **RobustSwap** shows high-quality results in video face swapping without any video training data. We believe that our analysis of StyleGAN for face swapping will inspire future researchers to analyze the latent spaces of generative models from the perspective of the face swapping task and to utilize them as a strong prior for face swapping.
2309.00458
Aerodynamic Characterization of a Fan Array Wind Generator
Experimental assessment of safe and precise flight control algorithms for unmanned aerial vehicles (UAVs) under gusty wind conditions requires the capability to generate a large range of velocity profiles. In this study, we employ a small fan array wind generator which can generate flows with large spatial and temporal variability. We perform a thorough aerodynamic characterization, operating the fans uniformly from a low to the maximum level. PIV and hot-wire measurements indicate a jet-like flow with a nearly uniform core which monotonically contracts in the streamwise direction, surrounded by growing unsteady shear layers. These complex dynamics result in a limited region with the desired flow profile and turbulence level. The experimental results shed light on the flow generated by a full-scale fan array wind generator, and indicate the need for further improvements via properly designed add-ons and dedicated control algorithms.
Songqi Li, Yutong Liu, Zhutao Jiang, Gang Hu, Bernd R. Noack, Franz Raps
2023-09-01T13:43:09Z
http://arxiv.org/abs/2309.00458v1
# Aerodynamic Characterization of a Fan Array Wind Generator

###### Abstract

Experimental assessment of safe and precise flight control algorithms for unmanned aerial vehicles (UAVs) under gusty wind conditions requires the capability to generate a large range of velocity profiles. In this study, we employ a small fan array wind generator which can generate flows with large spatial and temporal variability. We perform a thorough aerodynamic characterization, operating the fans uniformly from a low to the maximum level. PIV and hot-wire measurements indicate a jet-like flow with a nearly uniform core which monotonically contracts in the streamwise direction, surrounded by growing unsteady shear layers. These complex dynamics result in a limited region with the desired flow profile and turbulence level. The experimental results shed light on the flow generated by a full-scale fan array wind generator, and indicate the need for further improvements via properly designed add-ons and dedicated control algorithms.

## I. Introduction

The rapid development and increased application of unmanned aerial vehicles (UAVs) and electric Vertical Take-Off and Landing (eVTOL) aircraft demand high levels of agility and precise control [1, 2, 3, 4, 5]. For instance, air taxis must safely land on rooftops and vertiports [6], cargo delivery drones must accurately follow the planned trajectories [7], and rescue drones require precise control in extreme weather conditions [8, 9]. However, the atmospheric boundary layer (ABL) may pose a considerable challenge to the safe and precise control of UAVs due to its turbulent, nonlinear, and unpredictable nature [10, 11]. The development and application of more advanced flight control algorithms become critical for UAVs to counteract environmental disturbances. To test the performance of flight control algorithms, it is necessary to generate artificial winds with temporal and spatial variability during flight tests [12, 13]. Multiple experimental techniques have been developed to generate artificial winds with spatial and temporal variability. Roughness elements are the most commonly used devices to simulate ABL profiles in wind tunnels [14, 15, 16]. To generate flow unsteadiness, deploying louvers with multiple variable blades or vanes [17, 18, 19] is the most popular strategy. Other artificial disturbances in the flow, such as wavy walls [20], rotating cylinders [21], and pitching/plunging airfoils [22], are also capable of generating unsteady incoming flow. An alternative strategy employs arrays of randomly actuated jets to generate desired turbulent profiles [23, 24, 25, 26, 27, 28]. Pioneered by Makita [29], active grids represent another type of technique to generate flow with spatial and temporal variations. Active grids consist of a series of small wings that are mounted on horizontal and vertical shafts. The rotation of the shafts alters the local blockage and gives rise to different incoming flow properties. Although the idea of active grids was first proposed to generate turbulence with desired characteristics [30, 31, 32, 33, 34], the capability of such devices to generate different flow profiles has recently been exploited and reported in [35, 36, 37, 13], among many others. The recent emergence of Fan Array Wind Tunnels (FAWTs) and Fan Array Wind Generators (FAWGs) represents a new opportunity to design artificial wind with specified spatial patterns and temporal profiles.
By placing an array of fans (or propellers) at the test section inlet (FAWT, [38, 39, 40, 41, 42]), or in an open environment (FAWG, [43, 12, 44]), fan arrays are capable of generating customizable wind conditions by controlling the rotation speeds of the fan elements. As the fans can be individually controlled, FAWTs and FAWGs can generate wind profiles with rich spatial and temporal variability. Since the operation of FAWGs does not rely on existing wind tunnels, wind generators have quickly become a popular choice for UAV flight tests. Recently, several drone flight experiments have been conducted in which FAWGs are employed to generate environmental disturbances [45, 46, 47, 48]. These encouraging results suggest the potential of FAWGs to help test and improve the design of UAV control systems. However, from the aerodynamic point of view, there is still a lack of a complete characterization of the flow generated by FAWGs. Although flow measurements from pointwise sensors have been reported in [49], flow field measurements with satisfactory spatial resolution are still necessary in order to fully understand the flow properties. However, given the typical scale (\(\mathcal{O}(1\,\mathrm{m})\)) of FAWGs, it is difficult to directly apply Particle Image Velocimetry (PIV) to wind generator flow measurements, owing to restrictions in laser power, camera positioning, etc. In this work, we design and construct a Small Fan Array Wind Generator (SFAWG), which has an overall size of \(40\,\mathrm{cm}\times 40\,\mathrm{cm}\) and contains 100 fan elements. We select small fans to construct the wind generator so that typical laboratory flow measurement tools become available to characterize the flow field of the small wind generator. With flow measurements from the SFAWG, we hope to infer the major flow characteristics of full-scale wind generators. We conduct a complete aerodynamic study of this facility, including hotwire scanning on four near-field planes close to the fan exit, planar PIV measurements on a streamwise plane, and stereoscopic PIV measurements on two far-field planes. The discussion focuses on the characteristics of the streamwise velocity profiles when all fans are operated at the same rotation speed, as the profiles of the streamwise velocity component are of the greatest importance regarding the generation of artificial wind gusts. From the analysis of the experimental data, we observe complex flow dynamics, including the initial mixing in the near field, the stretched and distorted jet profiles in the far field, and the fast expansion of the annular shear layers. These observations are crucial to (1) understand the complex flow dynamics in full-scale FAWGs, (2) guide the placement of UAVs in flight tests, and (3) develop add-on devices and control algorithms to further improve the flow quality. The manuscript is structured as follows. In Section II, we provide a detailed description of the experimental facility and techniques used in this study. This includes the design of the SFAWG, as well as the instrumentation used for flow measurements. Section III presents the flow measurement results, including the mean flow statistics and an extended discussion on the turbulence characteristics. We offer our major observations and provide an outlook for future research in Section IV.

## II. Experiment Facility and Techniques

This section outlines the design of the Small Fan Array Wind Generator (SFAWG) and the experimental instrumentation for flow field characterization.
Featuring individual control of \(10\times 10\) fan elements, the SFAWG represents a miniaturized version of full-scale wind generators. We can utilize this specialized test facility to study the flow generated from fan arrays and to implement flow control algorithms that enable us to achieve desired flow properties. This study focuses on the experimental characterization of the SFAWG using hotwire anemometry (HWA) and particle image velocimetry (PIV). In Section II.A, we present a detailed description of the experimental facility, including its design and construction. This is followed by a description of the experimental setup in Section II.B.

### A. Experiment Facility: Small Fan Array Wind Generator

As shown in Figure 1(a), the small fan array wind generator is composed of \(10\times 10\) axial fans. The fan utilized in this study is the Delta Electronics GFB0412EHS. This dual-stage counter-rotating fan has a square cross-section and dimensions of \(40\,\mathrm{mm}\times 40\,\mathrm{mm}\times 56\,\mathrm{mm}\). The blades on the front side rotate counterclockwise, while the blades on the rear side rotate clockwise. We stack 100 fan elements in a nearly seamless manner, both horizontally and vertically, forming a \(10\times 10\) fan array. Specially designed frames and multi-hole washers are used to fix the fans, and the assembly is mounted on a stand that is \(1.5\,\mathrm{m}\) tall and made of aluminum extrusions. In this study, the wind generator is placed inside an empty chamber of \(8\,\mathrm{m}\times 8\,\mathrm{m}\times 3.5\,\mathrm{m}\). We place the wind generator on one side of the chamber, and the air is blown towards the opposite end (Figure 1(c)). A cooling system is used inside the chamber to keep a constant ambient temperature of \(23\,\mathrm{\SIUnitSymbolCelsius}\). We connect the fan array to a 12 VDC power supply and control the rotation speed of each fan element using pulse-width-modulation (PWM) signals with variable duty cycles. As the duty cycle increases, the fan speed increases accordingly. Figure 2 presents the wiring diagram of the wind generator. To provide 100 channels of PWM signals to the fan array, we use a series of 7 PCA9685 PWM drivers. Each PWM driver is capable of generating 16 independent PWM outputs, and the duty cycle of every output is controllable. An Arduino Mega is used as the host controller to communicate with the PWM drivers and regulate the duty cycles of the PWM signals. With the above-mentioned hardware configuration and control setup, this small wind generator is capable of providing artificial wind with both spatial and temporal variability. As the goal of the current work is to investigate the aerodynamic characteristics of the nominally uniform flow generated from the fan array, we employ uniform duty cycles for all fans and perform flow measurements to evaluate the performance of the wind generator.
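The control chain above is straightforward to script. The rig itself drives the PCA9685 chain from an Arduino Mega over I2C; purely as an illustration, and as an assumption about addresses and wiring, the same logic is sketched below in Python with the adafruit-circuitpython-pca9685 library on a generic I2C-capable host.

```python
import board
import busio
from adafruit_pca9685 import PCA9685

i2c = busio.I2C(board.SCL, board.SDA)
# Seven PCA9685 drivers at consecutive I2C addresses (an assumption) provide
# 7 x 16 = 112 channels, enough for the 100 fans.
drivers = [PCA9685(i2c, address=0x40 + k) for k in range(7)]
for drv in drivers:
    drv.frequency = 1000  # PWM carrier frequency in Hz (placeholder value)

def set_fan(fan_index, duty):
    """Set fan 0..99 to a duty cycle in [0, 1] via its driver and channel."""
    drivers[fan_index // 16].channels[fan_index % 16].duty_cycle = int(duty * 0xFFFF)

for i in range(100):  # uniform operation as in this study, e.g. 80 % duty cycle
    set_fan(i, 0.80)
```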
### B. Flow Measurement Techniques: Hotwire Anemometry and Particle Image Velocimetry

In this work, we utilize hotwire anemometry (HWA) and particle image velocimetry (PIV) to obtain accurate measurements of the flow fields. An overview of all experimental campaigns performed in this study is presented in Table 1. To better describe the experimental setup, we introduce a Cartesian coordinate system originating from the center of the fan array exit. As displayed in Figure 3(a), \(x\) represents the streamwise direction, and \(y\) and \(z\) represent the horizontal and vertical directions, respectively. We also adopt \(h\) and \(H\) to represent the height/width of an individual fan element and of the entire fan array, respectively. With the above-mentioned configuration, the Reynolds numbers (\(Re_{H}\)) are \(3.07\times 10^{5}\), \(2.15\times 10^{5}\), and \(1.11\times 10^{5}\) at 100 %, 80 %, and 50 % duty cycles, respectively.

Figure 1: A graphical introduction to the small fan array wind generator (SFAWG). (a) front view of the SFAWG; (b) the fan element used in the wind generator; (c) smoke visualization of wind generated from the SFAWG.

Another important goal of this study is to investigate the flow characteristics in the near field. As PIV measurements suffer from strong background reflection when the light sheet illuminates the cross-stream planes in the near field, hotwire measurements are adopted in this region. We conduct hotwire measurements in conjunction with a two-dimensional positioning system to obtain streamwise velocity profiles near the fan exit. This approach produces flow field measurements with high spatial resolution. A HangHua miniature single-sensor hotwire probe is used to measure the streamwise velocity signals. The probe consists of a short tungsten wire of finite diameter attached to two stainless-steel prongs. For the flow generated by fans, the streamwise fluctuations are an order of magnitude larger than the fluctuations in the other directions [50]. Thus, the results from the single-wire measurements can be used to indicate the flow characteristics in the streamwise direction. The measurements are performed on 4 near-field planes, \(x/h=1\) to 4. To scan the flow field on each cross-stream plane, we move the hotwire probe with a spatial resolution of \(dy=dz=4\,\mathrm{mm}\). In this manner, the measurement results form velocity profiles on a two-dimensional grid. To minimize the blockage effect introduced by the hotwire and the traverse system, we connect the hotwire sensor to a 15 cm long stainless-steel probe support, which is connected to a low-profile dual-rail traverse to ensure smooth motion in the flow field.

Figure 2: **The wiring diagram for the individual control of fan elements in the SFAWG.**

At each grid point, the sampling frequency is \(F_{s}=4000\,\mathrm{Hz}\) with a measurement time of \(t=10\,\mathrm{s}\). In this study, hotwire calibration is realized by logging the mean voltage output under different known incoming velocities, and a 5th-order polynomial fit is performed to obtain the relationship between voltage outputs and the corresponding velocities. The calibration is conducted using the wind generator itself, in which a Pitot tube and the hotwire probe measure the velocity and the voltage output at \((x,y,z)=(40\,\mathrm{cm},0,0)\), respectively. In addition, the constant temperature anemometer is carefully tuned such that the hotwire is responsive to disturbances up to 2000 Hz in a square-wave test. A National Instruments USB-6009 14-bit data acquisition device is used to sample the voltage outputs from the hotwire anemometry. The input range of the data acquisition card is \(\pm 10\,\mathrm{V}\). The overall measurement uncertainty of the hotwire is about 3 %, considering positioning error, calibration error, A/D resolution, and temperature variation (\(\pm 1\,\mathrm{\SIUnitSymbolCelsius}\)).
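A minimal numpy sketch of this calibration procedure; the voltage-velocity pairs below are placeholders standing in for the Pitot-tube reference measurements.

```python
import numpy as np

# Placeholder calibration data: mean hotwire voltage (V) at known Pitot velocities (m/s).
volts = np.array([1.42, 1.58, 1.71, 1.82, 1.91, 1.99, 2.06, 2.12])
u_ref = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 12.0, 14.0, 16.0])

# 5th-order polynomial fit from voltage to velocity, as used in this study.
coeffs = np.polyfit(volts, u_ref, deg=5)

def voltage_to_velocity(e):
    """Convert raw anemometer voltage samples to velocity with the fitted polynomial."""
    return np.polyval(coeffs, np.asarray(e))

u = voltage_to_velocity([1.75, 1.88])  # example conversion of two samples
```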
Flow field measurements are conducted at a PWM duty cycle of 80 %, as significant heat accumulation occurs in the DC power supply at higher duty cycles during the extended scanning process. In addition, particle image velocimetry (PIV) is employed to measure the velocity fields on one streamwise plane through the centerline (\(y=0\)), as well as on two cross-stream planes in the far field of the fan array (\(x/H=1,2\)). The measurement planes are visualized in Figure 3(b). In this study, we employ a LaVision PIV system equipped with a double-pulsed laser and two \(2752\times 2200\)-pixel CCD cameras. For velocity measurements on the \(y=0\) plane, a planar PIV configuration [51] is adopted to obtain two-dimensional velocity measurements, and the two cameras are placed side-by-side to maximize the field of view (FoV) in the streamwise direction. For far-field measurements, a stereoscopic setup [51] is used to calculate three-dimensional velocity vectors. In this case, the two cameras with Scheimpflug adapters are mounted on opposite sides of the wind generator. Calibration of the camera system is achieved by detecting and fitting the target points on a three-dimensional calibration plate via a pinhole model. A dual-cavity Litron Nano L 200-15 Nd:YAG laser of 532 nm wavelength illuminates seeding particles of about \(0.2\,\mathrm{\SIUnitSymbolMicro m}\) produced by an Antari fog machine. PIV image pairs are acquired at a frequency of \(12\,\mathrm{Hz}\) at two representative duty cycles of 50 % and 100 %. At each duty cycle, a total of 3000 snapshots are acquired for the planar PIV measurements, while 2500 snapshots are recorded for the stereoscopic PIV measurements on each far-field plane. All image pairs are post-processed in DaVis 10. PIV vectors are calculated using a multipass routine with 2 passes each for interrogation windows of \(128\times 128\) and \(64\times 64\). A 50 % overlap is adopted during post-processing, and this configuration results in a spatial resolution of \(3.6\,\mathrm{mm}\) for the planar and \(6.5\,\mathrm{mm}\) for the stereoscopic measurements. The uncertainty analysis performed by DaVis is based on the cross-correlation statistics during the calculation of the velocity vectors [52]. This analysis yields an uncertainty of less than \(0.15\,\mathrm{m}\,\mathrm{s}^{-1}\) for all PIV measurement campaigns.

## III. Results and Discussions

In this section, we present the experimental results from the PIV and HWA measurements and discuss the major characteristics of the flow generated from the SFAWG. Experimental results and discussions of the major flow characteristics are presented in Section III.A. The discussion focuses on the flow quality of the streamwise velocity component, as it is of the greatest importance for the evaluation of artificial wind gusts.
\begin{table} \begin{tabular}{c c c c c} \hline \hline Experiment & Measurement Field(s) & Data Sampling & Spatial Resolution & Duty Cycle(s) \\ \hline \multirow{3}{*}{HWA} & \(x/h=1\), 2, 3, 4, & \multirow{3}{*}{\(F_{s}=4000\,\mathrm{Hz}\), \(t=10\,\mathrm{s}\)} & \(\Delta y=5\,\mathrm{mm}\), \(\Delta z=5\,\mathrm{mm}\) & \multirow{3}{*}{80\%} \\ & \(|z/H|\leqslant 0.5\) & & & \\ \hline \multirow{3}{*}{2D2C PIV} & \(0.1\leqslant x/H\leqslant 3\), & \multirow{3}{*}{3000 snapshots} & \(\Delta x=3.6\,\mathrm{mm}\), \(\Delta z=3.6\,\mathrm{mm}\) & \multirow{3}{*}{50\%, 100\%} \\ & \(|z/H|\leqslant 0.75\) & & & \\ \hline \multirow{3}{*}{2D3C PIV} & \(x/H=1\), 2, & \multirow{3}{*}{2500 snapshots} & \(\Delta y=6.5\,\mathrm{mm}\), \(\Delta z=6.5\,\mathrm{mm}\) & \multirow{3}{*}{50\%, 100\%} \\ & \(|z/H|\leqslant 0.75\) & & & \\ \hline \hline \end{tabular} \end{table} Table 1: An overview of the experimental campaigns.

Figure 3: An overview of the flow measurement planes from HWA and PIV. Here \(h\) denotes the height of the fan elements and \(H\) represents the height of the \(10\times 10\) fan array. (a) the four HWA measurement planes in the near field; (b) the three PIV measurement planes, including one streamwise plane for planar PIV and two far-field planes for stereoscopic PIV. The Cartesian coordinate system used in this study is visualized in the figure.

The control of the transverse velocity components, and of the flow towards specific goals like increased uniformity and maximized turbulence, represents one unique application niche of fan array wind generators; these ongoing studies will be the subject of later publications. Extended discussions regarding the near-field and far-field turbulence properties are detailed in Section III.B.

### A. Flow characterization

From the PIV snapshots measured according to the configuration presented in Table 1, the mean streamwise velocity \(U\), as well as its fluctuating counterpart \(u^{\prime}\), can be calculated based on the Reynolds decomposition [53]. Figure 4(a) and Figure 4(b) present the evolution of the streamwise velocity (\(U\)) and the mean-squared turbulent velocity (\(\sqrt{\langle u^{\prime 2}\rangle}\)) along the centerline of the fan array (\(y=0\), \(z=0\)), respectively. Here \(\langle\cdot\rangle\) represents the ensemble average over the PIV snapshots. We define the far-field jet centerline velocity \(U_{CL}\) as the spatially averaged centerline velocity between \(x=1H\) and \(3H\), and we normalize both \(U\) and \(\sqrt{\langle u^{\prime 2}\rangle}\) by this far-field centerline velocity in Figures 4(a) and 4(b). Based on our calculation, \(U_{CL}\) is about \(12.24\,\mathrm{m}\,\mathrm{s}^{-1}\) at 100 % duty cycle and about \(4.43\,\mathrm{m}\,\mathrm{s}^{-1}\) at 50 % duty cycle. After normalization, the velocity profiles at the 50 % and 100 % duty cycles present similar characteristics. Before \(x/H=1\), a gradual increase of the jet centerline velocity is observed, which corresponds to the initial mixing stage in the near field (\(x=\mathcal{O}(h)\)). In this region, small annular jets with high velocity are generated from the fan ducts. These jets expand and gradually increase the velocity in the rest of the flow region, including the centerline. The mixing process becomes weaker as the small jets propagate downstream, and it is accompanied by a decrease of the mean-squared turbulent velocity on the centerline.
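The statistics quoted here follow directly from the Reynolds decomposition of the snapshot ensemble; a minimal numpy sketch is given below, where the local-ratio definition of the turbulence intensity is our assumption.

```python
import numpy as np

def reynolds_stats(u):
    """Mean field, RMS fluctuation, and turbulence intensity from PIV snapshots.

    u: array of shape (N, nz, nx) holding N instantaneous streamwise velocity fields.
    """
    U = u.mean(axis=0)                            # ensemble average <u>
    u_rms = np.sqrt(((u - U) ** 2).mean(axis=0))  # sqrt(<u'^2>)
    Ti = u_rms / np.abs(U)                        # local turbulence intensity
    return U, u_rms, Ti
```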
The mixing process appears to stop near \(x=1H\), where the streamwise velocity reaches \(U_{CL}\) and the mean-squared turbulent velocity drops to about 3.5 % of the far-field centerline velocity. Thereafter, both \(U\) and \(\sqrt{\langle u^{\prime 2}\rangle}\) remain nearly constant between \(1H\) and \(3H\). The plateau of both the centerline velocity and the turbulence velocity between \(1H\) and \(2H\) is similar to the typical behavior of a turbulent jet a few diameters downstream (see, for example, [54]). This similarity will be elaborated in the following.

Figure 4: **(a) Centerline streamwise velocity normalized by the far-field centerline velocity \(U_{CL}\), and (b) centerline mean-squared turbulent velocity \(\sqrt{\langle u^{\prime 2}\rangle}\) normalized by \(U_{CL}\).**

Figure 5 displays the mean streamwise velocity (first row), the mean-squared turbulent velocity (second row), and the turbulence intensity (third row) measured at 100 % and 50 % duty cycles via planar PIV. The flow fields under both duty cycles exhibit strong similarities, which include the initial mixing in the near field, as well as the expansion of the top and bottom shear layers in the far field (\(x=\mathcal{O}(H)\)). The nearly symmetrical expansion of the shear layers, as well as the streamwise decay of the "potential core", shows characteristics similar to those of a typical jet flow. However, the mixing process in the near field is not completely symmetrical about the centerline, and local non-uniformity can also be observed inside the core region. These asymmetries may originate from slight differences among the fan elements. Similar to a typical jet flow, the core region possesses a relatively low turbulence level in the far field of the fan array. In contrast, the shear layers have high turbulence levels, where complex, multi-scale turbulent structures emerge, evolve, and interact with each other. As the goal of the current work is to generate uniform flow from the fan array, flow regions with uniform velocity profiles and low turbulence intensity are preferred. To visualize the region of low turbulence intensity, we use white isolines in the last row of Figure 5 to represent a turbulence level of 10 %. As the flow progresses downstream, the vertical extent of the low-turbulence region becomes increasingly restricted. At \(x=3H\), the vertical extent of the low-turbulence region is only approximately \(0.5H\) for both duty cycles, indicating that the effective cross-sectional area with a low turbulence level is reduced to only 25 % of the fan array's cross-sectional area.

Figure 5: A comparison of flow properties on the streamwise PIV measurement plane at duty cycles of 100 % and 50 %. First row: normalized mean streamwise velocity; second row: normalized mean-squared turbulent velocity; third row: turbulence intensity. White isolines in the third row represent a turbulence intensity of 10 %. Fan locations are visualized as gray rectangles in the figure.

To investigate the initial mixing in the near field, hotwire measurement results from the four near-field planes at \(x/h=1,2,3,4\) are presented in Figure 6. At a downstream distance of \(1h\), the flow is strongly non-uniform, with high-speed regions mostly concentrated around the annular fan ducts. Due to subtle differences among the fan elements, the flow profiles generated by different fans exhibit slight variations.
As the flow progresses in the streamwise direction, the high-speed flow gradually interacts and mixes with the low-speed fluid, accompanied by a decrease in turbulence level from \(1h\) to \(4h\) downstream. Another interesting observation is the distortion of the flow envelope as the flow moves downstream. At \(x=1h\), the spatial locations of the annular jets are consistent with the locations of the fan elements, as visualized by the white grid lines. However, with increased mixing, the nearly rectangular overall profile becomes more distorted at downstream locations, with the edges shrinking inward and the corners stretching outward. This is caused by the swirl of the flow generated by the rotating fan blades. Similar observations are found in the turbulence intensity profiles, with regions of high turbulence intensity corresponding to the appearance of the turbulent shear layer.

Fig. 6: **Flow profiles at the four near-field planes \(x/h=1,2,3,4\) from hotwire measurements at 80 % duty cycle. First row: normalized mean streamwise velocity; second row: normalized mean-squared turbulent velocity; third row: turbulence intensity. Fan locations are visualized by the white grid lines.**

As the flow progresses to the far field, the cross-stream flow profiles become more and more distorted. In Figure 7, the mean streamwise velocity, mean-squared turbulent velocity, and turbulence intensity measured on the two far-field planes at \(x/H=1,2\) via stereoscopic PIV are displayed. After the mixing process in the near field, the core region in the far field becomes more uniform and less turbulent. Meanwhile, the expansion and distortion of the annular shear layer dominate the far-field flow characteristics. The asymmetric profile may originate from the swirl of the flow generated by the small fans. Attenuating this distortion of the flow will be an important topic in order to produce wind profiles of higher quality. From \(1H\) to \(2H\) downstream, the velocity profile continues to be stretched and distorted, while significant thickening of the shear layer can be observed under both duty cycles. In addition, the region of low turbulence intensity inside the core region shrinks rapidly, which is similar to the observations in Figure 5. We calculate the vertical span of the low-turbulence-intensity region at different streamwise locations based on the white isolines, which represent a turbulence intensity of 10 %. Under both duty cycles, the results translate to a cross-sectional area of \(0.36H^{2}\) at \(x=2H\) and \(0.25H^{2}\) at \(x=3H\).

### B. Extended discussions on turbulence characteristics

In this subsection, we continue the discussion of the turbulence characteristics in the near and far fields of the wind generator. Based on the observed strong similarity across different duty cycles, we only present results from the maximum duty cycles (80 % for hotwire and 100 % for PIV measurements) to demonstrate the turbulence characteristics. Using the hotwire measurements, we analyze the power spectral density (PSD) of the velocity signals at four representative points in the \((y,z)\) plane: (1) the center of the cross-stream plane at \((0,0)\); (2) the center of an annular duct at \((0.5\,\mathrm{cm},2\,\mathrm{cm})\); (3) the center of a fan element at \((2\,\mathrm{cm},2\,\mathrm{cm})\); and (4) the corner of the fan array at \((20\,\mathrm{cm},20\,\mathrm{cm})\), as illustrated in Figure 8(a). The PSDs of these four points on the different near-field planes are presented in Figure 8(b).
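As detailed in the following paragraph, these PSDs are estimated with a standard windowing-and-averaging (Welch) procedure using a Hanning window and 50 % overlap, and then scaled with \(h\) and \(U_{CL}\). A minimal scipy sketch, with the segment length and the exact form of the non-dimensionalization as our assumptions:

```python
import numpy as np
from scipy.signal import welch

fs, h, U_CL = 4000.0, 0.04, 12.24  # sampling rate (Hz), fan height (m), m/s (placeholders)

def nondimensional_psd(u, nperseg=1024):
    """Welch PSD with Hanning window and 50 % overlap, returned over St_h = f h / U_CL."""
    f, pxx = welch(u, fs=fs, window="hann", nperseg=nperseg, noverlap=nperseg // 2)
    return f * h / U_CL, pxx / (U_CL * h)  # one plausible non-dimensional scaling
```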
The calculation of the PSDs follows the typical windowing and averaging procedure: the Hanning window is adopted in the PSD calculation, with an overlap of 50 %. The frequency \(f\) is non-dimensionalized to form the Strouhal number \(St_{h}=fh/U_{CL}\), and the PSD is also non-dimensionalized according to these scaling parameters. The use of \(St_{h}\) is grounded in the similarity between a typical jet flow and the small "jets" generated by the individual fans. In addition, we include Kolmogorov's \(-5/3\) power law as black dashed lines in the figure. For nearly all cases, flat energy distributions occur in the low-frequency region, followed by exponential roll-offs starting around \(St_{h}=0.1\). This implies that the flow generated by the fan array is governed by complex turbulent structures over a wide range of turbulence scales. Notably, for point 4, we observe a broadband peak near \(St_{h}=0.3\) at \(x/h=1\). This peak may be related to the dynamics of the small annular jet, according to the jet characteristic frequency documented in [55]. However, the peak is quickly overwhelmed by broadband turbulence as the flow moves further downstream. At \(4h\) downstream, the roll-offs of the power spectra follow the \(-5/3\) power law, which indicates the end of the local mixing among the small "jets" generated by the individual fans.

Figure 7: Flow profiles at the far-field planes \(x/H=1,2\) from stereoscopic PIV measurements at 50 % and 100 % duty cycles. First row: normalized mean streamwise velocity; second row: normalized mean-squared turbulent velocity; third row: turbulence intensity. White isolines in the third row represent a turbulence intensity of 10 %. Fan locations are visualized by the white grid lines.

Figure 8: (a) The four points used to calculate the power spectral density (PSD) from the hotwire measurements. The \((y,z)\) coordinates of the four points are \((0,0)\), \((0.5\,\mathrm{cm},2\,\mathrm{cm})\), \((2\,\mathrm{cm},2\,\mathrm{cm})\), and \((20\,\mathrm{cm},20\,\mathrm{cm})\). (b) Power spectral density of points 1 to 4 at different near-field locations.

To reveal the spatial length scales of the turbulent structures, the PIV snapshots measured on the streamwise plane are used to calculate the normalized streamwise velocity correlation coefficients at 9 representative points. These points are fixed at three streamwise locations (\(x/H=0.5,1.5,2.5\)) and three vertical locations (\(z/H=0.5,0,-0.5\)), which allows us to examine the streamwise evolution of the velocity correlation in the upper shear layer, on the centerline, and in the lower shear layer. The normalized velocity correlation coefficient (\(R\)) is calculated based on the following equation: \[R(\mathbf{x},\mathbf{x_{0}})=\frac{\langle u^{\prime}(\mathbf{x},t_{n})u^{\prime}(\mathbf{x_{0}},t_{n})\rangle}{\sqrt{\langle u^{\prime}(\mathbf{x},t_{n})^{2}\rangle}\sqrt{\langle u^{\prime}(\mathbf{x_{0}},t_{n})^{2}\rangle}}, \tag{1}\] where \(\mathbf{x}\) represents the locations of all possible points on the measurement plane and \(\mathbf{x_{0}}\) represents the location of one of the 9 correlation points. These correlations are displayed in Figure 9. On the flow centerline, the spatial extent of the streamwise correlation remains small along the streamwise direction. However, clear growth of the correlation region can be observed in the upper and lower shear layers. In addition, the streamwise correlations in the shear layers incline at an angle of around \(15^{\circ}\) to the jet axis.
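A direct numpy transcription of Eq. (1), evaluating the correlation map for one reference point \(\mathbf{x_{0}}\) given its grid indices:

```python
import numpy as np

def correlation_map(u_fluct, iz0, ix0):
    """Normalized two-point correlation R(x, x0) of Eq. (1).

    u_fluct: fluctuating streamwise velocity u', shape (N, nz, nx);
    (iz0, ix0): grid indices of the reference point x0.
    """
    ref = u_fluct[:, iz0, ix0]                         # u'(x0, t_n)
    num = (u_fluct * ref[:, None, None]).mean(axis=0)  # <u'(x) u'(x0)>
    den = np.sqrt((u_fluct ** 2).mean(axis=0)) * np.sqrt((ref ** 2).mean())
    return num / den
```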
These correlation observations closely resemble the behavior of a turbulent jet, as noted by [56]. Therefore, we can conclude that the shear layer is subject to jet dynamics, even though it is initially distorted and stretched due to swirl. In addition, we perform the snapshot proper orthogonal decomposition (POD, [57, 58]) to extract the dominant coherent structures from the PIV images recorded on the streamwise plane at 100 % duty cycle. The snapshot POD is a data analysis technique that decomposes the velocity vectors into a set of orthonormal spatial modes \(\mathbf{u}_{i}\) and corresponding mode amplitudes \(a_{i}\), such that \[\mathbf{u^{\prime}}(\mathbf{x},t_{n})\approx\sum_{i=1}^{N}a_{i}(t_{n})\:\mathbf{u}_{i}(\mathbf{x}). \tag{2}\] Here \(\mathbf{u^{\prime}}\) represents the turbulent velocity vector, \(t_{n}\) represents the time of the \(n\)th snapshot, and \(N\) is the truncation order of the POD expansion.

Fig. 9: **Normalized streamwise velocity correlation coefficient in the \(y=0\) plane at 9 selected \((x,z)\) points under 100 % duty cycle.**

We note that the full velocity vectors from the PIV measurements are employed to calculate the POD modes; in the following discussion, we only present the eigenvector components in the streamwise direction, which already reveal the shape and distribution of the turbulent structures in the flow. We visualize the energy distribution of the leading POD modes in Figure 10(a) and display the streamwise components of the first 12 POD modes in Figure 10(b). The energy distribution of the leading POD modes, as shown in Figure 10(a), is highly dispersive. Specifically, the first two POD modes contain only 4.5 % and 2.8 % of the overall turbulent kinetic energy (TKE), respectively. The fact that the first 88 modes are needed to account for 50 % of the TKE suggests the existence of turbulent structures with multiple scales and divergent dynamics, which is consistent with the observations from the hotwire measurements. Additionally, Figure 10(b) shows the dominant flow structures with the highest energy levels. In the leading POD modes, the turbulent structures are mainly located inside the top and bottom shear layers, and their spatial scales become smaller for higher-order modes. However, none of the spatial modes displays a clear symmetrical or anti-symmetrical pattern, which might be caused by the distortion of the shear-layer profiles depicted in Figure 7. In a similar manner, we apply the snapshot POD to investigate the dominant turbulent structures on the two cross-stream measurement planes at 100 % PWM, and the results are presented in Figure 11 and Figure 12. In both cases, the leading POD modes contain only fractional amounts of the overall TKE; comparatively, the energy convergence at \(x=2H\) is slightly faster than at \(x=1H\). In terms of the shapes of the dominant turbulent structures, the streamwise components of the leading POD modes at \(x=1H\) are dominated by small-scale turbulent structures that are non-uniformly distributed inside the shear-layer region. These small-scale structures progress downstream and interact with each other. At \(x=2H\), the dominant turbulent structures become larger, and clear azimuthal patterns can be observed in the leading POD modes, although the turbulent shear layer is not strictly axisymmetric.
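The snapshot POD of Eq. (2) can be computed via the thin SVD of the snapshot matrix; a minimal sketch follows, where the SVD route is a standard, equivalent implementation choice, not necessarily the authors' own.

```python
import numpy as np

def snapshot_pod(u_fluct):
    """Snapshot POD: u_fluct has shape (N, npoints), rows are flattened snapshots.

    Returns the modal TKE fractions, the spatial modes u_i (rows), and the
    temporal amplitudes a_i(t_n) of Eq. (2).
    """
    phi, s, vt = np.linalg.svd(u_fluct.T, full_matrices=False)  # columns = snapshots
    energy = s ** 2 / np.sum(s ** 2)  # energy fraction carried by each mode
    amplitudes = np.diag(s) @ vt      # a_i(t_n)
    return energy, phi.T, amplitudes

# e.g. number of modes needed to capture 50 % of the TKE:
# n50 = int(np.searchsorted(np.cumsum(energy), 0.5)) + 1
```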
## IV. Conclusion and Outlook

In this study, we design and construct the Small Fan Array Wind Generator (SFAWG), featuring individual control of \(10\times 10\) fan elements. We utilize uniform duty cycles for the fan array and perform an aerodynamic characterization of the facility using hotwire measurements in the near field (\(\mathcal{O}(h)\)), as well as Particle Image Velocimetry (PIV) measurements on one streamwise plane and two cross-stream planes in the far field (\(\mathcal{O}(H)\)). The major observations are summarized as follows:

* In the near field, the outer flow is dominated by the mixing process, where high-momentum "swirling jets" generated by the fans expand and influence the neighboring low-speed regions. A nearly uniform mixing is achieved at the end of the near field.
* In the far field, the flow characteristics are similar to those of a typical jet, including the expansion of the jet shear layer and the decay of the core region. Moreover, the annular shear layer is stretched and distorted as the flow progresses downstream, which results in multi-scale, energy-dispersive turbulent structures.
* The complex dynamics in both the near and far fields severely limit the flow region with the desired properties of uniformity and low turbulence level. At a downstream distance of \(2H\), only about 36 % of the cross-sectional area has satisfactory flow properties, and this ratio drops to about 25 % at \(3H\) downstream.
* The flow profiles generated at different duty cycles exhibit strong similarity, indicating that the challenges discussed above exist over a wide range of duty cycles.

Preliminary investigations yield similar observations for the world's largest FAWG for drone testing. This FAWG has a \(3.25\,\mathrm{m}\times 3.25\,\mathrm{m}\) active blowing frame accommodating 1600 individually controllable computer fans [59]. Hence, we hypothesize that our results are typical for all square FAWGs. The major conclusions of this study highlight the need to improve the flow generated from FAWGs. To improve the flow quality, mesh grids and honeycombs may be applied to reduce the turbulence level. However, the employment of such devices will unavoidably lead to energy losses, so a dedicated design of the flow-straightening devices may help to achieve a balance between flow quality and energy loss.

Fig. 10: Results of the snapshot POD from the PIV measurements on the streamwise plane at 100 % PWM. (a) energy distribution of the leading POD modes; (b) streamwise components of the first 12 POD modes.

To reduce the shear-layer expansion, local flow control devices can be adopted, like vortex generators for jets [60]. The generation of more accurate spatio-temporal wind profiles will benefit from general non-uniform operation of the fans with suitable control schemes [61]. In the pioneering works of [62, 63, 64], the control of turbulence level and wind profiles is achieved using various control algorithms. With the implementation of upstream control and downstream sensing, the addition of state-of-the-art control strategies may further improve the flow profiles at the desired downstream location. Fan array wind generators possess undoubted potential to empower the design and testing of future UAVs.

Fig. 11: Results of the snapshot POD from the PIV measurements on the cross-stream plane \(x=1H\) at 100 % PWM. (a) energy distribution of the leading POD modes; (b) streamwise components of the first 6 POD modes.

Fig. 12: Results of the snapshot POD from the PIV measurements on the cross-stream plane \(x=2H\) at 100 % PWM. (a) energy distribution of the leading POD modes; (b) streamwise components of the first 6 POD modes.
## Funding Sources

This work is supported by the National Natural Science Foundation of China (NSFC) through grants 12172109, 12172111, and 12202121, by Guangdong province, China, by the Shenzhen Science and Technology Program under grant JCYJ20220531095605012, by the Natural Science and Engineering grant 2022A1515011492, and by the Science and Technology Innovation Bureau, Pingshan District, Shenzhen City, China, through grant 29853M-KCJ-2023-002-05.

## Acknowledgments

We acknowledge valuable discussions with Jialong Chen, Zhibin Chen, Guy Yoslan Cornejo Maceda, Nan Deng, Nan Gao, Francois Lusseyran, Jincai Yang, Jun Yang, Yang Yang, Yannian Yang, Huang Yao and Yanlung Zhang. We appreciate the generous technical and scientific support from the HangHua company (Dalian, China). We also extend our thanks to Prof. Xiangyuan Zheng and Prof. Sunwei Li from Tsinghua Shenzhen International Graduate School for their generous help during the experiments. We are grateful to the anonymous referees for their insightful and constructive advice.
2310.15650
A characterization on orientations of graphs avoiding given lists on out-degrees
Let $G$ be a graph and $F:V(G)\to 2^N$ be a set function. The graph $G$ is said to be \emph{F-avoiding} if there exists an orientation $O$ of $G$ such that $d^+_O(v)\notin F(v)$ for every $v\in V(G)$, where $d^+_O(v)$ denotes the out-degree of $v$ in the directed graph $G$ with respect to $O$. In this paper, we give a Tutte-type good characterization to decide the $F$-avoiding problem when for every $v\in V(G)$, $|F(v)|\leq \frac{1}{2}(d_G(v)+1)$ and $F(v)$ contains no two consecutive integers. Our proof also gives a simple polynomial algorithm to find a desired orientation. As a corollary, we prove the following result: if for every $v\in V(G)$, $|F(v)|\leq \frac{1}{2}(d_G(v)+1)$ and $F(v)$ contains no two consecutive integers, then $G$ is $F$-avoiding. This partly answers a problem proposed by Akbari et al. (2020).
Xinxin Ma, Hongliang Lu
2023-10-24T09:09:00Z
http://arxiv.org/abs/2310.15650v1
# A characterization on orientations of graphs avoiding given lists on out-degrees

###### Abstract

Let \(G\) be a graph and \(F:V(G)\to 2^{N}\) be a set function. The graph \(G\) is said to be _F-avoiding_ if there exists an orientation \(O\) of \(G\) such that \(d_{O}^{+}(v)\notin F(v)\) for every \(v\in V(G)\), where \(d_{O}^{+}(v)\) denotes the out-degree of \(v\) in the directed graph \(G\) with respect to \(O\). In this paper, we give a Tutte-type good characterization to decide the \(F\)-avoiding problem when for every \(v\in V(G)\), \(|F(v)|\leq\frac{1}{2}(d_{G}(v)+1)\) and \(F(v)\) contains no two consecutive integers. Our proof also gives a simple polynomial algorithm to find a desired orientation. As a corollary, we prove the following result: if for every \(v\in V(G)\), \(|F(v)|\leq\frac{1}{2}(d_{G}(v)+1)\) and \(F(v)\) contains no two consecutive integers, then \(G\) is \(F\)-avoiding. This partly answers a problem proposed by Akbari et al. (2020).

## 1 Introduction

Let \(G\) be a graph without loops and with vertex set \(V(G)\) and edge set \(E(G)\). Let \(e(G):=|E(G)|\). For \(v\in V(G)\), we use \(d_{G}(v)\) to denote the degree of the vertex \(v\) in \(G\). For \(S\subseteq V(G)\), let \(G[S]\) denote the subgraph induced by \(S\). Given two integers \(s,t\) with \(s\leq t\), the set \(\{s,s+1,\ldots,t\}\) is denoted by \([s,t]\). An orientation of \(G\) is an assignment of a direction to each edge of \(G\). For a vertex \(v\in V(G)\), we denote by \(d_{O}^{+}(v)\) the out-degree of \(v\) under the orientation \(O\) of the edges of \(G\). Because of their fruitful applications, orientations with specified properties have been extensively studied [2, 8]. Frank and Gyarfas [6] proved that for a graph \(G\) and two mappings \(a,b:V(G)\to\mathbb{N}\) with \(a(v)\leqslant b(v)\) for every vertex \(v\), \(G\) has an orientation such that \[a(v)\leqslant d^{+}(v)\leqslant b(v)\] for every vertex \(v\), if and only if for any \(U\subseteq V(G)\), \[\sum_{v\in U}a(v)-d(U)\leqslant|E(G[U])|\leqslant\sum_{v\in U}b(v),\] where \(d(U)\) is the number of edges connecting \(U\) and \(V(G)\setminus U\). Borowiecki, Grytczuk and Pilsniak [3] discovered the beautiful fact that every graph admits an orientation of its edges such that the out-degrees of any two adjacent vertices are different. Such orientations can be interpreted as graph colorings and are now known as proper orientations. Let \(F:V(G)\to 2^{N}\). A graph is said to be _F-avoiding_ if there exists an orientation \(O\) of \(G\) such that \(d_{O}^{+}(v)\notin F(v)\) for every \(v\in V(G)\). In this case, we say that \(O\) avoids \(F\), or that \(O\) is an _F-avoiding_ orientation. Conversely, for \(H:V(G)\to 2^{N}\), we call an orientation \(O\) of \(G\) an _H-orientation_ if \(d_{O}^{+}(v)\in H(v)\) for every \(v\in V(G)\). In this paper, we always assume that for every \(v\in V(G)\), \(\max H(v)\leq d_{G}(v)\) and \(\min H(v)\geq 0\). Akbari et al. [1] proved that for a graph \(G\) and \(F:V(G)\to 2^{N}\), if \[|F(v)|\leq\frac{d_{G}(v)}{4}\] for every vertex \(v\), then \(G\) is _F-avoiding_. Furthermore, they proposed the following conjecture.

**Conjecture 1.1** (Akbari et al. [1]): _For a graph \(G\) and a mapping \(F:V(G)\to 2^{N}\), if_ \[|F(v)|\leq\frac{1}{2}(d_{G}(v)-1)\] _for every \(v\in V(G)\), then \(G\) is F-avoiding._

In the same paper, Akbari et al. [1] proved that Conjecture 1.1 holds for bipartite graphs.
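To make the definition concrete, here is a brute-force sketch (exponential in \(|E(G)|\), for tiny illustrative graphs only) that decides whether a graph is \(F\)-avoiding by enumerating all orientations; the chosen example is ours, not from the paper.

```python
from itertools import product

def is_F_avoiding(edges, F):
    """Does some orientation O satisfy d_O^+(v) not in F(v) for every vertex v?

    edges: list of (u, v) pairs; F: dict mapping each vertex to its forbidden set.
    Brute force over all 2^|E| orientations -- for tiny illustrative graphs only.
    """
    for flips in product((False, True), repeat=len(edges)):
        outdeg = {v: 0 for v in F}
        for (a, b), flip in zip(edges, flips):
            outdeg[b if flip else a] += 1  # the tail of each oriented edge
        if all(outdeg[v] not in F[v] for v in F):
            return True
    return False

# A triangle where every vertex forbids out-degree 0: the cyclic orientation
# gives every vertex out-degree 1, so this prints True.
print(is_F_avoiding([(0, 1), (1, 2), (2, 0)], {0: {0}, 1: {0}, 2: {0}}))
```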
Bradshaw et al. [4] made further progress on Conjecture 1.1: they proved that if \(|F(v)|\leq d_{G}(v)/3\) for every \(v\in V(G)\), then \(G\) is \(F\)-avoiding. Towards settling this conjecture, in this paper we make progress under certain constraints on the mapping \(F\). Our proof also gives a polynomial algorithm to find such an orientation. In the proof, we use the technique of the _parity trace_ to describe the existence of a suitable orientation \(O\) of \(G\) under the constraints of the mapping \(H:V(G)\to 2^{N}\), where \(H\) has the property: \[i\notin H(v)\ \ \mbox{implies }i+1\in H(v),\mbox{ for every }i\in\mathbb{N},\ v\in V(G). \tag{1}\] Let us call the pair \((G,H)\) _dense_ if \(H\) satisfies (1) and \(G\) is connected. A _parity trace_ is a specific permutation of all the cut vertices of \(G\), the details of which are described in Section 3. To formulate the main result, we need to introduce the following notions. Let \((G,H)\) be dense, let \(V_{0}^{1}\) denote the set of odd vertices, and let \(V_{0}^{2}\) denote the set of even vertices. Suppose \((x_{1},\ldots,x_{k})\) is an ordering of \(V(G)\setminus(V_{0}^{1}\cup V_{0}^{2})\). We call \((x_{1},\ldots,x_{k})\) a _parity trace_ if it satisfies the following statement.

* Let \(V_{0}^{1}:=\{x\mid H(x)\mbox{ is odd}\}\) and \(V_{0}^{2}:=\{x\mid H(x)\mbox{ is even}\}\). For \(i=1,\ldots,k\) define recursively the numbers \(l_{i},u_{i}\) and the sets \(V_{i}^{1}\) and \(V_{i}^{2}\), for which (2) below holds;
* Suppose \(V_{i-1}^{1}\) and \(V_{i-1}^{2}\) have already been defined. Let \(l_{i}\) be the number of those components \(C\) of \(G-x_{i}\) for which \(C\subseteq V_{i-1}^{1}\cup V_{i-1}^{2}\) and \(|C\cap V_{i-1}^{1}|+e(C)\not\equiv d_{G}(x_{i},C)\pmod{2}\). Let \(u_{i}:=d_{G}(x_{i})-t_{i}\), where \(t_{i}\) is the number of those components \(C\) of \(G-x_{i}\) for which \(C\subseteq V_{i-1}^{1}\cup V_{i-1}^{2}\) and \(|C\cap V_{i-1}^{1}|\not\equiv e(C)\pmod{2}\). Moreover, the following statement holds. \[\mbox{All elements in }[l_{i},u_{i}]\cap H(x_{i})\mbox{ have the same parity.}\] (2)
* If \([l_{i},u_{i}]\cap H(x_{i})\) is odd, then let \(V_{i}^{1}:=V_{i-1}^{1}\cup\{x_{i}\}\) and \(V_{i}^{2}:=V_{i-1}^{2}\); if it is even, then let \(V_{i}^{1}:=V_{i-1}^{1}\) and \(V_{i}^{2}:=V_{i-1}^{2}\cup\{x_{i}\}\). By (2), there are no other cases. (\([l_{i},u_{i}]\cap H(x_{i})=\emptyset\) is possible; in that case we are free to consider it to be either odd or even.)

If \((x_{1},\ldots,x_{k})\) is a parity trace, then let \(V^{1}:=V_{k}^{1}\) and \(V^{2}:=V_{k}^{2}\). Clearly, \(V(G)=V^{1}\cup V^{2}\) and \(V^{1}\cap V^{2}=\emptyset\). In this paper, we give a Tutte-type characterization for the \(H\)-orientation problem when \((G,H)\) is dense.

**Theorem 1.2**: _Let \(G\) be a graph and let \(H:V(G)\to 2^{N}\) be such that \((G,H)\) is dense. There exists an \(H\)-orientation of \(G\) if and only if there is no parity trace with \(|V^{1}|\not\equiv e(G)\pmod{2}\)._

We mainly follow the idea for finding parity factors in [7], which also appears in [9]. By Theorem 1.2, we may give a partial solution to Conjecture 1.1. In Section 2, we characterize the existence of an _H-orientation_ for _dense pairs_ with 2-connected \(G\); this characterization is used in Section 3 to obtain the characterization for connected graphs.

## 2 2-Connected Graphs

First we prove a simple statement concerning arbitrary _dense pairs_. Before proceeding to the main arguments, we introduce several terminologies that are used in the next proof.
For a given _dense pair_ \((G,H)\) and an orientation \(O\) of \(G\), we call \(v\in V(G)\) _feasible_ if \(d_{O}^{+}(v)\in H(v)\). Let \(S\subseteq V(G)\), and let \(e(S)\) denote the number of edges with both endpoints in \(S\), i.e., \(e(S):=|E(G[S])|\). We denote by \(d_{G}(a,S)\) the number of edges joining a vertex \(a\) to some vertex of \(S\), and by \(d_{G}^{+}(a,S)\) the out-degree of \(a\) towards \(S\). For \(a\in V(G)\), we denote by \(\delta(a)\) the set of edges incident with \(a\). Let \(O\) be an orientation of \(G\) and let \(P(x,y)\) be a path (or a cycle when \(x=y\)) in \(G\) connecting vertices \(x\) and \(y\). For \(e\in E(G)\), denote by \(O_{e}\) the orientation of \(e\) under \(O\) and by \(-O_{e}\) the reverse orientation with respect to \(O_{e}\). We define the _symmetric difference_ of \(O\) and \(P(x,y)\) by \(O\bigtriangleup P(x,y)\), that is, \[\hat{O}=O\bigtriangleup P(x,y):=\begin{cases}O_{e},&\text{if }e\notin P(x,y);\\ -O_{e},&\text{if }e\in P(x,y).\end{cases}\]

**Lemma 2.1**: _Let \(u\in V(G)\). If \((G,H)\) is dense, then there exists an orientation \(O\) of \(G\) such that \(v\) is feasible for all \(v\in V(G)\setminus\{u\}\)._

**Proof.** Let \(O\) be an orientation of \(G\) such that for any orientation \(O^{\prime}\) of \(G\), \[|\{x\in V(G)\setminus\{u\}:d_{O}^{+}(x)\notin H(x)\}|\leq|\{x\in V(G)\setminus\{u\}:d_{O^{\prime}}^{+}(x)\notin H(x)\}|.\] We claim that \(\{x\in V(G)\setminus\{u\}:d_{O}^{+}(x)\notin H(x)\}=\emptyset\). Otherwise, suppose that there exists \(v\in V(G)\setminus\{u\}\) such that \(d_{O}^{+}(v)\notin H(v)\). Let \(P\) be a path connecting \(u\) and \(v\), and let \(P(x,y)\) denote the sub-path of \(P\) between two of its vertices \(x\) and \(y\). Note that we may choose \(v\) such that for any \(w\in V(P)\setminus\{u,v\}\), \(d_{O}^{+}(w)\in H(w)\). By (1), \(d_{O\bigtriangleup P}^{+}(v)\in H(v)\). Let \(x\) be the vertex with \(d_{O\bigtriangleup P}^{+}(x)\notin H(x)\) nearest to \(v\) on \(P\) (if there exists no such vertex, then define \(x:=u\)). By the choice of \(x\), one can see that \(d_{O\bigtriangleup P(v,x)}^{+}(x)\in H(x)\), and for any \(y\in V(P(x,v))\setminus\{x,v\}\), \(d_{O\bigtriangleup P(x,v)}^{+}(y)=d_{O\bigtriangleup P}^{+}(y)\in H(y)\). Thus we have \[\{w\in V(G)\setminus\{u\}:d_{O\bigtriangleup P(x,v)}^{+}(w)\notin H(w)\}\subseteq\{y\in V(G)\setminus\{u\}:d_{O}^{+}(y)\notin H(y)\}.\] Note that \(v\) is now _feasible_. Hence \[|\{y\in V(G)\setminus\{u\}:d_{O\bigtriangleup P(x,v)}^{+}(y)\notin H(y)\}|<|\{y\in V(G)\setminus\{u\}:d_{O}^{+}(y)\notin H(y)\}|,\] contradicting the choice of \(O\). \(\Box\)

For \(u\in V(G)\), define \[D_{G,H}^{+}(u)=\{d_{O}^{+}(u):O\text{ is an orientation of }G\text{ with }d_{O}^{+}(x)\in H(x)\text{ for all }x\in V(G)\setminus\{u\}\},\] i.e., \(D^{+}_{G,H}(u)\) is the set of all possible out-degrees at \(u\), under the condition that all the other vertices are _feasible_.
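For intuition, \(D^{+}_{G,H}(u)\) can be computed by the same kind of exhaustive enumeration used above (again exponential, purely illustrative); the example graph and lists below are ours. An \(H\)-orientation exists exactly when the returned set meets \(H(u)\).

```python
from itertools import product

def out_degree_set(edges, H, u):
    """D^+_{G,H}(u): achievable out-degrees at u over orientations in which
    every other vertex x is feasible, i.e. d_O^+(x) in H(x)."""
    D = set()
    for flips in product((False, True), repeat=len(edges)):
        outdeg = {v: 0 for v in H}
        for (a, b), flip in zip(edges, flips):
            outdeg[b if flip else a] += 1  # the tail of each oriented edge
        if all(outdeg[x] in H[x] for x in H if x != u):
            D.add(outdeg[u])
    return D

# Triangle with H(1) = {0, 2} and the other lists unrestricted:
print(out_degree_set([(0, 1), (1, 2), (2, 0)],
                     {0: {0, 1, 2}, 1: {0, 2}, 2: {0, 1, 2}}, 0))
# -> {0, 1, 2}; the set meets H(0), so an H-orientation exists.
```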
When there is no confusion, we also call \(x\) _odd_ (or _even_) if \(H(x)\) is odd (even, respectively).

**Lemma 2.3**: _Suppose \(u\in V(G)\) is not a cut vertex of \(G\). Then \(D^{+}_{G,H}(u)\) either contains two consecutive integers, or, if not, it consists of all the odd, or all the even, integers in \([0,d_{G}(u)]\)._

**Proof.** If \(d_{G}(u)=1\), then the result holds trivially, so we may assume that \(d_{G}(u)\geq 2\). If \(D^{+}_{G,H}(u)\) contains two consecutive integers, then the first alternative holds (and moreover, since \((G,H)\) is dense, \(G\) has an \(H\)-orientation by Observation 2.2). So suppose now that \(G\) does not admit an \(H\)-orientation.

**Claim 1.** \(i\in D^{+}_{G,H}(u)\), \(i+1\notin D^{+}_{G,H}(u)\), and \(i+2\leq d_{G}(u)\) imply \(i+2\in D^{+}_{G,H}(u)\).

Let \(O\) be an orientation of \(G\) such that \(d^{+}_{O}(x)\in H(x)\) for all \(x\in V(G)\setminus\{u\}\) and \(d^{+}_{O}(u)=i\), and let \(e_{1},e_{2}\in\delta(u)\) be two edges oriented towards \(u\) under \(O\) (such edges exist since \(d_{O}^{+}(u)=i\leq d_{G}(u)-2\)). Since \(u\) is not a cut vertex, there exists a cycle \(C\) containing both \(e_{1}\) and \(e_{2}\). If \(d^{+}_{O\triangle C}(x)\in H(x)\) for every \(x\in V(C)\setminus\{u\}\), then by the definition one can see that \(i+2\in D^{+}_{G,H}(u)\). Otherwise, for \(x\in V(C)\setminus\{u\}\), let \(P(u,x)\) denote the sub-path of \(C\) from \(u\) to \(x\) in the clockwise direction, and let \(v\in V(C)\setminus\{u\}\) be the first vertex along \(C\) clockwise such that \(d^{+}_{O\triangle C}(v)\notin H(v)\). Then for every \(y\in V(P(u,v))\setminus\{u,v\}\), we have \(d^{+}_{O\triangle P(u,v)}(y)=d^{+}_{O\triangle C}(y)\in H(y)\). Moreover, since \((G,H)\) is dense, \(d^{+}_{O\triangle C}(v)\notin H(v)\), and \(d^{+}_{O}(v)\in H(v)\), we have \(d^{+}_{O\triangle P(u,v)}(v)\in H(v)\). Recall that \(i\notin H(u)\) (by Observation 2.2, as \(i\in D^{+}_{G,H}(u)\) and \(G\) has no \(H\)-orientation) and \(i+2\leq d_{G}(u)\). Since \((G,H)\) is dense, we then have \(i+1\in H(u)\), and thus \(d^{+}_{O\triangle P(u,v)}(u)\in H(u)\). Now one can see that \(d^{+}_{O\triangle P(u,v)}(x)\in H(x)\) for all \(x\in V(G)\), i.e., \(G\) has an \(H\)-orientation, a contradiction. This completes the proof of Claim 1.

By a similar argument, we have the following statement.

**Claim 2.** \(i\in D^{+}_{G,H}(u)\), \(i-1\notin D^{+}_{G,H}(u)\), and \(i-2\geq 0\) imply \(i-2\in D^{+}_{G,H}(u)\).

Combining Claims 1 and 2, the result follows. \(\Box\)

**Lemma 2.4**: _If \((G,H)\) is dense and \(G\) does not have an \(H\)-orientation, then all the vertices of \(G\), except possibly the cut vertices, have fixed parity._

**Proof.** Suppose that \(v\in V(G)\) is not a cut vertex. If \(D^{+}_{G,H}(v)\) contains two consecutive integers, then \(D^{+}_{G,H}(v)\cap H(v)\neq\emptyset\) since \((G,H)\) is dense, so by Observation 2.2 we may infer that \(G\) has an \(H\)-orientation, a contradiction. If \(D^{+}_{G,H}(v)\) does not contain two consecutive integers, then by Lemma 2.3 it consists of all the odd or all the even integers of the interval \([0,d_{G}(v)]\). Since \(G\) has no \(H\)-orientation, Observation 2.2 gives \(H(v)\cap D^{+}_{G,H}(v)=\emptyset\), and since \((G,H)\) is dense, it follows that \(H(v)=[0,d_{G}(v)]\setminus D^{+}_{G,H}(v)\). Thus \(H(v)\) has fixed parity. This completes the proof. \(\Box\)

**Theorem 2.5**: _Let \(G\) be 2-connected and let \((G,H)\) be dense. Then \(G\) has no \(H\)-orientation if and only if \(x\) has fixed parity for all \(x\in V(G)\), and \(|\{x\in V(G)\ |\ H(x)\mbox{ is odd}\}|\not\equiv e(G)\pmod{2}\)._

**Proof.** Necessity. Suppose that \(G\) has no \(H\)-orientation. By Lemma 2.4, every vertex has fixed parity. Choose an arbitrary \(v\in V(G)\).
By Lemma 2.1, there exists an orientation \(O\) of \(G\) such that \(d^{+}_{O}(x)\in H(x)\), and hence \(d^{+}_{O}(x)\) has the same parity as \(H(x)\), for all \(x\in V(G)\setminus\{v\}\). If \(d^{+}_{O}(v)\) and \(H(v)\) have the same parity, then \(d^{+}_{O}(v)\in H(v)\) since \((G,H)\) is dense, and thus \(G\) has an \(H\)-orientation, a contradiction. So \(d^{+}_{O}(v)\) and \(H(v)\) have different parities. Then we have
\[|\{x\in V(G)\ |\ H(x)\mbox{ is odd}\}|\not\equiv\sum_{x\in V(G)}d^{+}_{O}(x)\pmod{2}.\]
Since \(\sum\limits_{v\in V(G)}d^{+}_{O}(v)=e(G)\), one can see that \(|\{x\in V(G)\ |\ H(x)\mbox{ is odd}\}|\not\equiv e(G)\pmod{2}\). This completes the proof of necessity.

Sufficiency. Suppose, for contradiction, that \(G\) has an \(H\)-orientation \(O\). Then we have \(\sum\limits_{v\in V(G)}d^{+}_{O}(v)\equiv|\{x\in V(G)\ |\ H(x)\mbox{ is odd}\}|\pmod{2}\). On the other hand, \(\sum\limits_{v\in V(G)}d^{+}_{O}(v)=e(G)\). Thus we may infer that \(|\{x\in V(G)\ |\ H(x)\mbox{ is odd}\}|\equiv e(G)\pmod{2}\), a contradiction. This completes the proof. \(\Box\)

We are still far from our goal: in order to see clearly when \(G\) does not have an _H-orientation_, we need to better understand the set \(D^{+}_{G,H}(v)\) for cut vertices as well.

## 3 Connected Graphs

**Proof of Theorem 1.2.** First, we prove necessity. Suppose there exists an \(H\)-orientation \(O\). Let \((x_{1},\dots,x_{k})\) be a parity trace and let \(V^{1}\) be defined as above. It suffices to prove that \(|V^{1}|\equiv e(G)\pmod{2}\). Since \(\sum_{x\in V(G)}d^{+}_{O}(x)=e(G)\), it is enough to prove that \(d^{+}_{O}(x)\) is odd if and only if \(x\in V^{1}\).

**Claim 1.** Let \(V_{j}^{1}\) and \(V_{j}^{2}\) be defined as above. Then \(d_{O}^{+}(x)\) is odd for \(x\in V_{j}^{1}\), and \(d_{O}^{+}(x)\) is even for \(x\in V_{j}^{2}\).

We prove this statement by induction on \(j\leq k\). For \(j=0\), the result holds by definition. Suppose the statement is true for \(j=i-1\). Let \(C\) be a connected component of \(G-x_{i}\) such that \(C\subseteq V_{i-1}^{1}\cup V_{i-1}^{2}\). If \(|C\cap V_{i-1}^{1}|+e(C)\not\equiv d_{G}(x_{i},C)\pmod{2}\), then \(d_{O}^{+}(x_{i},C)\geq 1\); indeed, summing the out-degrees over \(C\) gives \(\sum_{y\in C}d_{O}^{+}(y)=e(C)+d_{G}(x_{i},C)-d_{O}^{+}(x_{i},C)\), and by the induction hypothesis this sum is congruent to \(|C\cap V_{i-1}^{1}|\) modulo 2. By the definition of \(l_{i}\), this immediately implies \(d_{O}^{+}(x_{i})\geq l_{i}\). If \(C\subseteq V_{i-1}^{1}\cup V_{i-1}^{2}\) and the parity of \(|C\cap V_{i-1}^{1}|\) is different from the parity of \(e(C)\), then, by the same count, \(d_{O}^{+}(x_{i},C)\leq d_{G}(x_{i},C)-1\). By the definition of \(t_{i}\), \(d_{O}^{+}(x_{i})\leq d_{G}(x_{i})-t_{i}=u_{i}\) immediately follows. Consequently \(d_{O}^{+}(x_{i})\in[l_{i},u_{i}]\cap H(x_{i})\), whence \(d_{O}^{+}(x_{i})\) is odd if \(x_{i}\in V_{i}^{1}\), and even if \(x_{i}\in V_{i}^{2}\). This completes the proof of Claim 1.

Since \((x_{1},\dots,x_{k})\) is a parity trace, \(V(G)=V^{1}\cup V^{2}\). By Claim 1, \(d_{O}^{+}(x)\) is odd if and only if \(x\in V^{1}\). This completes the proof of necessity.

Next we prove sufficiency. By contradiction, suppose that \(G\) does not admit an _H-orientation_. It suffices to construct a parity trace \((x_{1},\dots,x_{k})\) satisfying \(|V^{1}|\not\equiv e(G)\pmod{2}\). Suppose that for \(1\leq i\leq k\), the sets \(V_{i-1}^{1}\) and \(V_{i-1}^{2}\) have been defined. We construct \((x_{1},\dots,x_{k})\) so that (2) holds; we now give the definition of \(V_{i}^{1}\) and \(V_{i}^{2}\).

**Claim 2.** There exists \(x_{i}\in V(G)\setminus(V_{i-1}^{1}\cup V_{i-1}^{2})\) for which all components of \(G-x_{i}\), except possibly one, are subsets of \(V_{i-1}^{1}\cup V_{i-1}^{2}\).
By Lemma 2.4, every vertex in \(V(G)\setminus(V_{0}^{1}\cup V_{0}^{2})\) is a cut vertex of \(G\). Recall that \(V_{0}^{1}\cup V_{0}^{2}\subseteq V_{i}^{1}\cup V_{i}^{2}\) for \(i\geq 0\); thus, for every \(j\geq 0\), all vertices in \(V(G)\setminus(V_{j}^{1}\cup V_{j}^{2})\) are cut vertices of \(G\). Write \(U_{i-1}=V(G)\setminus(V_{i-1}^{1}\cup V_{i-1}^{2})\). We choose \(x\in U_{i-1}\) such that
\[\max\{|V(C)\cap U_{i-1}|:C\mbox{ is a connected component of }G-x\}\]
is as large as possible. Let \(R\) be the connected component of \(G-x\) such that
\[|V(R)\cap U_{i-1}|=\max\{|V(C)\cap U_{i-1}|:C\mbox{ is a connected component of }G-x\}.\]
We claim that \(U_{i-1}\setminus\{x\}\subseteq V(R)\). Otherwise, let \(x^{\prime}\in U_{i-1}\setminus(V(R)\cup\{x\})\). Then \(G-x^{\prime}\) has a connected component \(R^{\prime}\) such that \(x\in V(R^{\prime})\) and \(V(R)\subseteq V(R^{\prime})\), contradicting the choice of \(x\). This completes the proof of Claim 2.

Now choose \(x_{i}\) to be a vertex with the property stated in Claim 2. Let \({\cal B}\) be the set of induced subgraphs of the form \(G[V(C)\cup\{x_{i}\}]\), where \(C\) is a connected component of \(G-x_{i}\). Then \(x_{i}\) is not a cut vertex of \(B\) for any \(B\in{\cal B}\), and
\[D_{G,H}^{+}(x_{i})=\sum_{B\in{\cal B}}D_{B,H}^{+}(x_{i}).\tag{3}\]
(If \(X,Y,\dots,Z\) are sets of numbers, then \(X+Y+\dots+Z:=\{x+y+\dots+z\mid x\in X,\ y\in Y,\dots,z\in Z\}\).) Since \(G\) does not have an _H-orientation_, \(D^{+}_{G,H}(x_{i})\) does not contain two consecutive integers. Thus for all \(B\in\mathcal{B}\), \(D^{+}_{B,H}(x_{i})\) also does not contain two consecutive integers. By Lemma 2.3, for each \(B\in\mathcal{B}\), every element of \(D^{+}_{B,H}(x_{i})\) is odd or every element of \(D^{+}_{B,H}(x_{i})\) is even. Comparing this with (3), and with the definition of \(l_{i},u_{i}\), we see that \(D^{+}_{G,H}(x_{i})\subseteq[l_{i},u_{i}]\). On the other hand, by the choice of \(x_{i}\), for all \(B\in\mathcal{B}\), \(V(B)\setminus\{x_{i}\}\subseteq V^{1}_{i-1}\cup V^{2}_{i-1}\) holds, except for possibly one \(B_{0}\in\mathcal{B}\). Recall that \(x_{i}\) is not a cut vertex of \(B_{0}\). Then \(D^{+}_{B_{0},H}(x_{i})\) consists of all the odd or all the even integers in \([0,d_{B_{0}}(x_{i})]\). Thus, according to (3), \(D^{+}_{G,H}(x_{i})\) consists of every second number, starting with either \(l_{i}\) or \(l_{i}+1\) and ending with either \(u_{i}\) or \(u_{i}-1\), depending on the parity of \(D^{+}_{B_{0},H}(x_{i})\). In other words, \(D^{+}_{G,H}(x_{i})\) consists of all the odd or all the even integers in \([l_{i},u_{i}]\). Since \(G\) does not admit an _H-orientation_, we have \(D^{+}_{G,H}(x_{i})\cap H(x_{i})=\emptyset\). Using (1), we have
\[[l_{i},u_{i}]\cap H(x_{i})=[l_{i},u_{i}]\setminus D^{+}_{G,H}(x_{i}),\tag{4}\]
which implies that (2) holds. We have proved that if there is no _H-orientation_, then there exists a parity trace \((x_{1},\dots,x_{k})\). Now it suffices to show that for this parity trace, \(|V^{1}|\not\equiv e(G)\pmod{2}\). By Lemma 2.1, we may take an orientation \(O\) of \(G\) such that for every \(x\in V(G)\setminus\{x_{k}\}=V^{1}_{k-1}\cup V^{2}_{k-1}\), \(d^{+}_{O}(x)\in H(x)\). Since \((x_{1},\dots,x_{k})\) is a parity trace, applying Claim 1, we know that \(d^{+}_{O}(x)\) is odd for \(x\in V^{1}_{k-1}\) and even for \(x\in V^{2}_{k-1}\). Since \(G\) does not admit an \(H\)-orientation, \(d^{+}_{O}(x_{k})\) has parity different from that of \(H(x_{k})\cap[l_{k},u_{k}]\). Thus we may infer that \(\sum\limits_{v\in V(G)}d^{+}_{O}(v)\not\equiv|V^{1}|\pmod{2}\).
Recall that \(\sum\limits_{v\in V(G)}d^{+}_{O}(v)=e(G)\). Hence we get \(|V^{1}|\not\equiv e(G)\pmod{2}\), which completes the proof. \(\Box\)

We can derive the following result, which partly confirms Conjecture 1.1.

**Corollary 3.1**: _Let \(G\) be a graph and let \(F,H:V(G)\to 2^{\mathbb{N}}\) be such that \((G,H)\) is dense and \(F(v)=[0,d_{G}(v)]\setminus H(v)\) for all \(v\in V(G)\). If for every \(v\in V(G)\),_
\[|F(v)|\leq\frac{1}{2}(d_{G}(v)-1),\]
_then \(G\) admits an \(H\)-orientation._

**Proof.** Since \(|F(v)|\leq\frac{1}{2}(d_{G}(v)-1)\), the set \(H(v)=[0,d_{G}(v)]\setminus F(v)\) contains two consecutive integers for every \(v\in V(G)\) (a subset of \([0,d_{G}(v)]\) with no two consecutive elements has at most \(\lceil(d_{G}(v)+1)/2\rceil\) elements). Let \(v_{0}\in V(G)\) be a vertex that is not a cut vertex of \(G\). Since \(H(v_{0})\) contains two consecutive integers, \(v_{0}\) does not have fixed parity, and hence by Lemma 2.4, \(G\) admits an \(H\)-orientation. \(\Box\)

**Remark.** From the proof of Corollary 3.1, the result still holds if the following conditions hold:
* for all \(x\in V(G)\), \(|F(x)|\leq\frac{1}{2}(d_{G}(x)+1)\) and \(F(x)\) contains no two consecutive integers;
* there exists \(u\in V(G)\) such that \(u\) is not a cut vertex and \(|F(u)|\leq\frac{1}{2}(d_{G}(u)-1)\).

By Theorem 1.2, we may obtain the following result.

**Corollary 3.2**: _Let \(G\) be a graph and let \(H:V(G)\to 2^{\mathbb{N}}\) be such that for every \(v\in V(G)\), \(H(v)\in\{\{0,2,\ldots,2\lceil d_{G}(v)/2\rceil\},\{1,3,\ldots,2\lceil d_{G}(v)/2\rceil-1\}\}\). If \(|\{v\in V(G)\ |\ H(v)\mbox{ is odd}\}|\equiv e(G)\pmod{2}\), then \(G\) admits an \(H\)-orientation._
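The parity condition in Theorem 2.5 and Corollary 3.2 is easy to sanity-check by brute force on small graphs. The following minimal Python sketch (ours, not part of the paper) enumerates all \(2^{e(G)}\) orientations and tests for an \(H\)-orientation; on the 4-cycle it reproduces the outcomes predicted for fixed-parity lists.

```python
from itertools import product

def has_H_orientation(edges, H):
    """Brute force: is there an orientation with d_O^+(v) in H(v) for every v?"""
    for choice in product([0, 1], repeat=len(edges)):
        outdeg = {v: 0 for v in H}
        for (u, v), c in zip(edges, choice):
            outdeg[u if c == 0 else v] += 1  # c == 0 orients u -> v, else v -> u
        if all(outdeg[v] in H[v] for v in H):
            return True
    return False

# The 4-cycle C4: e(G) = 4, every vertex has degree 2, no cut vertices.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]

# All H(v) odd ({1} = odd integers in [0, 2]): 4 odd vertices, 4 = e(G) (mod 2),
# so Theorem 2.5 predicts an H-orientation (orient the cycle consistently).
print(has_H_orientation(edges, {v: {1} for v in range(4)}))          # True

# All H(v) even ({0, 2}): 0 odd vertices, 0 = 4 (mod 2), again orientable.
print(has_H_orientation(edges, {v: {0, 2} for v in range(4)}))       # True

# Three odd vertices and one even: 3 != 4 (mod 2), so no H-orientation exists.
print(has_H_orientation(edges, {0: {1}, 1: {1}, 2: {1}, 3: {0, 2}}))  # False
```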
2308.00771
Dose and compositional dependence of irradiation-induced property change in FeCr
Ferritic/martensitic steels will be used as structural components in next generation nuclear reactors. Their successful operation relies on an understanding of irradiation-induced defect behaviour in the material. In this study, Fe and FeCr alloys (3-12%Cr) were irradiated with 20 MeV Fe-ions at 313 K to doses ranging from 0.00008 dpa to 6.0 dpa. This dose range covers six orders of magnitude, spanning low, transition and high dose regimes. Lattice strain and hardness in the irradiated material were characterised with micro-beam Laue X-ray diffraction and nanoindentation, respectively. Irradiation hardening was observed even at very low doses (0.00008 dpa) and showed a monotonic increase with dose up to 6.0 dpa. Lattice strain measurements of samples at 0.0008 dpa allow the calculation of equivalent Frenkel pair densities and corrections to the Norgett-Robinson-Torrens (NRT) model for Fe and FeCr alloys at low dose. NRT efficiency for FeCr is 0.2, which agrees with literature values for high irradiation energy. Lattice strain increases up to 0.8 dpa and then decreases when the damage dose is further increased. The strains measured in this study are lower and peak at a larger dose than predicted by atomistic simulations. This difference can be explained by taking temperature and impurities into account.
Kay Song, Dina Sheyfer, Kenichiro Mizohata, Minyi Zhang, Wenjun Liu, Doğa Gürsoy, David Yang, Ivan Tolkachev, Hongbing Yu, David E J Armstrong, Felix Hofmann
2023-08-01T18:21:35Z
http://arxiv.org/abs/2308.00771v2
# Dose and compositional dependence of irradiation-induced property change in FeCr

###### Abstract

Ferritic/martensitic steels will be used as structural components in next generation nuclear reactors. Their successful operation relies on an understanding of irradiation-induced defect behaviour in the material. In this study, Fe and FeCr alloys (3-12%Cr) were irradiated with 20 MeV Fe-ions at 313 K to doses ranging from 0.00008 dpa to 6.0 dpa. This dose range covers six orders of magnitude, spanning low, transition and high dose regimes. Lattice strain and hardness in the irradiated material were characterised with micro-beam Laue X-ray diffraction and nanoindentation, respectively. Irradiation hardening was observed even at very low doses (0.00008 dpa) and showed a monotonic increase with dose up to 6.0 dpa. Lattice strain measurements of samples at 0.0008 dpa allow the calculation of equivalent Frenkel pair densities and corrections to the Norgett-Robinson-Torrens (NRT) model for Fe and FeCr alloys at low dose. NRT efficiency for FeCr is 0.2, which agrees with literature values for high irradiation energy. Lattice strain increases up to 0.8 dpa and then decreases when the damage dose is further increased. The strains measured in this study are lower and peak at a larger dose than predicted by atomistic simulations. This difference can be explained by taking temperature and impurities into account.

Keywords: Iron alloys, ion irradiation, hardness, lattice strains, defects

## 1 Introduction
Reduced-activation ferritic/martensitic (RAFM) steels are leading candidate materials for structural components of fusion reactors, particularly for blanket/first-wall components [1]. They possess superior resistance to radiation-induced swelling and better thermal properties than austenitic stainless steels [2]. To better understand the effect of irradiation on the microstructure and material properties of RAFM steels, iron-chromium (FeCr) binary alloys have often been studied as a model system [3, 4, 5, 6, 7]. The FeCr system can provide a mechanistic understanding of irradiation damage in ferritic/martensitic steels while removing many microstructural complexities. Consequently, it becomes more feasible to compare experimental and simulation data [8]. A significant challenge in fully understanding the effects of nuclear fusion operations on structural steels is the lack of an existing power plant facility. While certain aspects of irradiation can be replicated with fission neutrons [9, 10, 11] and ion-irradiation [12, 13, 14], it is costly and time-intensive to reproduce the full range of conditions expected for structural materials in a nuclear fusion power plant. This necessitates the development of reliable simulation models ranging from first principles atomistic simulations to simulations that capture evolution and degradation at the component scale [15, 16, 17, 8]. Comparison to experimental data is crucial for the validation and optimisation of these models. However, several key gaps exist in the literature on FeCr regarding experimental studies: 1. Low-temperature (\(<573\) K) irradiation effects: The majority of experiments have been performed in the 623-773 K temperature window expected in-service for most of the structural components [18].
This is also partly because fission neutrons are generally only accessible in an environment with temperatures at or above 573 K [5, 19, 20]. Irradiation hardening has been observed at these temperatures, reaching saturation around 1 - 2 dpa, particularly for Cr content greater than 5% [21, 22]. Extensive transmission electron microscopy (TEM) observations have revealed the presence of \(a\langle 100\rangle\) and \(\frac{a}{2}\langle 111\rangle\) dislocation loops, with the former more dominant at temperatures closer to 773 K and the latter prevailing at 573 K or below [23]. At high temperatures, the stability of dislocation loops is reduced [24] and dislocation microstructures also undergo coarsening, both of which affect their distribution and density [25]. However, the effects of irradiation at low temperatures have not been extensively studied. Low temperatures are also of operational significance, as reactor cooling components are expected to operate below 573 K, even down to \(\sim\) 100 K [26]. Experimental data is required from irradiation in this temperature range, particularly at room temperature, in order to study and model the athermal effects of defect population and behaviour at lower temperatures. 2. An extensive range of irradiation levels: Reactor components in operation are expected to sustain irradiation exposure up to \(\sim\) 100 dpa [9, 27]. Void swelling, caused by the formation of voids at high doses (\(>\) 1 dpa) and elevated temperatures (\(>\) 673 K), has been extensively investigated for Fe and FeCr-based alloys [20, 28]. Though void swelling is not present at low doses, other effects such as lattice swelling can cause large non-uniform stresses [29], and these effects are strongly dose- and composition-dependent [30]. Studying material property changes at low doses (\(\ll\) 1 dpa) is important for several reasons. Firstly, many irradiation-induced changes reach saturation at a certain dose [22, 31, 32], and determining the dose threshold for saturation is important. Secondly, certain modelling techniques, such as _ab initio_ and molecular dynamics [17], face limitations when trying to simulate a large range of exposures due to constraints in computational resources. Therefore, the availability of experimental results obtained at a range of doses, including very low ones, is needed for comparison. Thirdly, the distribution of defects and material property changes in a real-life reactor component will not be homogeneous. Predicting how these distribution gradients will affect material performance requires knowledge of material property changes across a wide range of damage levels. Finally, the damage microstructure of materials has been found to depend on pre-existing damage [33, 34]. As such, knowledge of the damage accumulation history and effects at lower doses is crucial. 3. The synergistic effect of dose and composition: Though there are many studies of irradiation-induced effects in FeCr that explore a range of sample conditions, the parameter space covered in each study (e.g. dose level, compositional variation, temperature, irradiating ion species) is generally limited [7, 35]. For example, the nanoindentation work of Heintz _et al._ only covers two doses (1 and 10 dpa) for Cr content ranging from 2.5% to 12.5% [21]. There is, of course, a trade-off between examining specific phenomena in detail versus covering a broader parameter space.
However, identifying synergistic effects by comparing different studies is difficult due to variations in sample history and irradiation conditions. To address these aforementioned gaps in the literature, it is important to also consider which irradiation effects to focus on. TEM is one of the most common methods of studying irradiation-induced defect structures and populations [23, 36, 37]. TEM has provided many key insights on defect density, size distribution, and defect type. However, the lack of sensitivity of TEM to small defects (\(<\) 1 nm) [38] makes it incomplete for the study of the full defect population, particularly at low doses where a majority of defects are below the detection threshold [39]. Positron annihilation spectroscopy revealed a two-order-of-magnitude discrepancy in defect cluster density compared to TEM measurements at doses as low as 10\({}^{-3}\) dpa [40]. Measurements of material properties such as thermal diffusivity [41] and lattice strain [30] have also revealed key insights into defect populations, while also showing significant discrepancies in defect density estimates compared to TEM studies at the same dose. The study of irradiation-induced changes in the mechanical properties of steels has also been useful to understand the effect of the damage microstructure [42, 43]. Therefore, lattice strain and hardness characterisation have been chosen for this study to obtain a broader understanding of defect populations in FeCr. Furthermore, key mechanistic insight can also be revealed by comparing and contrasting the evolution of different material properties. In this study, we present the characterisation of lattice strain and hardness changes induced by Fe-ion irradiation of Fe and FeCr alloys. We examine a range of doses from 0.00008 dpa to 6 dpa, spanning six orders of magnitude, for six compositions of FeCr binary alloys (0 \(\leq\) Cr% \(\leq\) 12). We discuss key insights, at low dose, into defect production and retention rates, comparing with the commonly-used Norgett-Robinson-Torrens (NRT) model and microscopy results from the literature. The effect of dose and Cr concentration on defect mobility and clustering is explored by comparing experimental trends of hardness and lattice strain, as well as with simulations.

## 2 Materials and Methods

### 2.1 Sample preparation and ion-implantation

The high-purity FeCr alloy materials used in this investigation were manufactured under the European Fusion Development Agreement (EFDA) programme (contract no. EFDA-06-1901). The chemical compositions and mean grain sizes of the as-delivered alloys are listed in Table 1. The alloys were produced by induction melting under an argon atmosphere, followed by hot-forging at 1273 K for Fe and 1423 K for all other compositions. The bars were subsequently cold-forged with a reduction ratio of 70%. A heat treatment of 1 h was then performed (… K for Fe, 1023 K for Fe3Cr, Fe5Cr, and Fe8Cr, 1073 K for Fe10Cr, and 1123 K for Fe12Cr). The recrystallised materials were then air-cooled [44, 45]. Surprisingly, our electron backscatter diffraction (EBSD) characterisation of the materials revealed significant signs of cold work (intragranular misorientation) in the microstructure of the Fe8Cr materials, unlike the other compositions (Appendix A). In addition, the measured average grain size of 86 \(\pm\) 63 \(\upmu\)m differs noticeably from the manufacturer-quoted value of 320 \(\upmu\)m [45].
As such, we suspect that the Fe8Cr raw material may not have been fully heat treated to the conditions reported by the manufacturer. The as-delivered samples were sectioned with a fast diamond saw into pieces of approximately 5 \(\times\) 5 \(\times\) 0.7 mm\({}^{3}\) in size. The polishing process consisted of mechanical grinding with SiC paper, followed by polishing with diamond suspension and colloidal silica. Finally, the samples were electropolished with 5% perchloric acid in ethanol, with 15% ethylene glycol monobutyl ether, at 293 K for 2-3 minutes using a Struers LectroPol-5. The voltage applied during electropolishing was 45 V for Fe and Fe3Cr, 35 V for Fe5Cr and Fe8Cr, and 30 V for Fe10Cr and Fe12Cr.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline Alloy & Cr (wt \%) & C (wppm) & S (wppm) & O (wppm) & N (wppm) & Mean grain size (\(\upmu\)m) \\ \hline Fe & \(<\) 0.0002 & 4 & 2 & 4 & 1 & 187 \(\pm\) 150 \\ Fe3Cr & 3.05 & 4 & 3 & 6 & 2 & 160 \(\pm\) 114 \\ Fe5Cr & 5.40 & 4 & 3 & 6 & 2 & 112 \(\pm\) 60 \\ Fe8Cr & 7.88 & 6 & 2 & 10 & 2 & 86 \(\pm\) 63 \\ Fe10Cr & 10.10 & 4 & 4 & 4 & 3 & 98 \(\pm\) 55 \\ Fe12Cr & 11.63 & 6 & 2 & 4 & \(<\) 10 & 281 \(\pm\) 250 \\ \hline \end{tabular} \end{table} Table 1: The chemical compositions of the FeCr alloys used in this study as measured by the manufacturer using glow discharge mass spectrometry [44, 45]. The mean grain size was determined from EBSD measurements, with \(\pm\) 1 standard deviation shown here (see Appendix A).

An energy of 20 MeV was chosen for Fe-ion irradiation, as this produced a damage layer of 3.5 \(\upmu\)m thickness, allowing characterisation with X-ray diffraction and nanoindentation. Depth profiles of the damage and injected ion distribution (Figure 1) were calculated with SRIM using the Quick K-P model [46] with 20 MeV Fe ions on a Fe target with 40 eV displacement energy [47] at normal incidence. In this study, the nominal dose of an irradiation profile refers to the average damage dose in the first 2 \(\upmu\)m of the sample, where the damage profile is relatively flat and the injected ion concentration is still low. Ion-implantation was performed at room temperature with Fe\({}^{4+}\) ions using the tandem accelerator at the Helsinki Accelerator Laboratory. The irradiation conditions are listed in Table 2. For each sample composition, up to 7 different nominal dose levels were produced (0.00008 dpa, 0.0008 dpa, 0.008 dpa, 0.08 dpa, 0.8 dpa, 3.6 dpa and 6.0 dpa). An unirradiated reference sample was also retained for each composition. To enhance readability, we will use scientific notation labelling for the lowest 4 doses in this study. 0.8 is used as the coefficient (e.g.
0.00008 dpa is written as 0.8E-4 dpa) to align the exponents in the notation with the exponents of the figures presented in the Results section. A custom-built holder was designed for the irradiation process that allowed active temperature control using a combination of liquid nitrogen cooling and a cartridge heater. The samples were mounted on an aluminium block, and a thermocouple was positioned within 10 mm of the samples to monitor the temperature. For the irradiations in this investigation, the temperature was held constant at 313 K. Due to the spatial constraints of the sample holder, the samples had to be irradiated in two separate groups (Fe/Fe3Cr/Fe5Cr and Fe8Cr/Fe10Cr/Fe12Cr).
The exception was for the 6.0 dpa irradiation, where only Fe, Fe3Cr, Fe5Cr and Fe12Cr were irradiated simultaneously due to time constraints.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline Dose Name & Nominal dose & Total fluence & Nominal dose & Flux & Irradiation time \\ & (dpa) & (ions/cm\({}^{2}\)) & rate (dpa/s) & (ions/cm\({}^{2}\)/s) & (hour:min:sec) \\ \hline 0.8E-4 & 0.00008 & 5.30\(\times 10^{11}\) & 3.54\(\times 10^{-6}\) & 2.34\(\times 10^{10}\) & 00:00:17 \\ 0.8E-3 & 0.0008 & 5.30\(\times 10^{12}\) & 3.54\(\times 10^{-6}\) & 2.34\(\times 10^{10}\) & 00:03:12 \\ 0.8E-2 & 0.008 & 5.30\(\times 10^{13}\) & 4.65\(\times 10^{-5}\) & 3.08\(\times 10^{11}\) & 00:02:51 \\ 0.8E-1 & 0.08 & 5.30\(\times 10^{14}\) & 5.31\(\times 10^{-5}\) & 3.52\(\times 10^{11}\) & 00:25:06 \\ 0.8 & 0.8 & 5.30\(\times 10^{15}\) & 2.78\(\times 10^{-5}\) & 1.8\(\times 10^{11}\) & 08:09:27 \\ 3.6 & 3.6 & 2.39\(\times 10^{16}\) & 4.61\(\times 10^{-5}\) & 3.05\(\times 10^{11}\) & 21:42:00 \\ 6.0 & 6.0 & 3.98\(\times 10^{16}\) & 4.50\(\times 10^{-5}\) & 2.98\(\times 10^{11}\) & 37:00:16 \\ \hline \end{tabular} \end{table} Table 2: Irradiation conditions for this study. The nominal dose is calculated from the average of the dose profile in the top 2 \(\upmu\)m below the surface, as shown in Figure 1. The convention described for the dose name will be used for the rest of this paper.

### 2.2 Micro-beam Laue X-ray diffraction

Lattice strain, i.e. the change in the atomic plane-spacing of the crystal, caused by the irradiation was measured using micro-beam Laue X-ray diffraction at the 34-ID-E beamline, Advanced Photon Source (Argonne National Laboratory, IL, USA). Depth-resolution in the sample measurements was obtained using Differential Aperture X-ray Microscopy (DAXM), which has been described in detail elsewhere [48, 49, 50]. Briefly, the sample is mounted in 45\({}^{\circ}\) reflective geometry. An area detector (Perkin-Elmer, #XRD 1621, pixel size 200 \(\times\) 200 \(\upmu\)m\({}^{2}\)) positioned above the sample records the Laue diffraction patterns. The key component of this technique is a platinum wire (\(\sim\) 10 \(\upmu\)m in diameter, mounted on a silicon monocrystal) that is oriented perpendicular to the beam direction and scanned parallel to the surface of the sample through the diffracted beams. By comparing the intensity of each detector pixel as the wire is moved along consecutive positions, and triangulating using the position of the incident beam and the wire edge, a profile of intensity as a function of sample depth can be obtained for the whole detector. This enables the reconstruction of diffraction patterns as a function of depth into the sample along the beam path (i.e. at 45\({}^{\circ}\) to the sample surface). By also scanning the incident X-ray beam energy, a Bragg peak can be fully measured in 3D reciprocal space and as a function of depth into the sample. Note that for the subsequent analysis presented in this study, all profiles have been converted into a function of depth perpendicular to the sample surface. The X-ray beam size at the sample surface was 190 \(\times\) 360 nm\({}^{2}\), and the estimated depth resolution was \(\sim\) 1 \(\upmu\)m. For each sample, a minimum of 3 points were measured on grains within 10\({}^{\circ}\) of \(\langle 001\rangle\) out-of-plane orientation, identified in advance with EBSD. The energy of the monochromatic beam was chosen to match that of a \(\{00n\}\) reflection in the range of 11 to 16 keV.
This ensured that the dominant source of the diffraction signal is from the top 5 \(\upmu\)m of the sample, where the irradiated layer lies. For each measurement, an energy range of \(\sim\) 40 eV was scanned to fully capture the diffraction peak in 3D reciprocal space. An energy step size of 1 eV was used for all samples except for the lowest dose irradiation, where a 0.5 eV step size was used. It should be noted that measuring only the reflection closest to the normal orientation of the grain gives the out-of-plane strain, not the full strain tensor. Previous DAXM measurements on ion-irradiated tungsten showed that the in-plane strains are small compared to the out-of-plane strain [50, 51]. This is due to the constraint imposed by the unimplanted material beneath the irradiated layer. Two of the data points were further studied using a similar method, depth-resolving the diffraction signal with a coded aperture instead of a platinum wire. Specifically, these measurements were performed for the Fe8Cr 0.8E-4 dpa and Fe8Cr 3.6 dpa samples. Further details of the coded aperture technique can be found elsewhere [52, 53].

### 2.3 Nanoindentation

The hardness of the implanted layer was measured by nanoindentation using an MTS Nano Indenter XP with a diamond Berkovich tip (Synton-MDP). The tip area calibration was performed on fused silica (elastic modulus of 72 GPa). Indents 2 \(\upmu\)m deep were performed using continuous stiffness measurement (CSM) mode [54], with a strain rate of 0.05 s\({}^{-1}\), a CSM frequency of 45 Hz, and a harmonic amplitude of 2 nm. At least 35 indents, spaced a minimum distance of 50 \(\upmu\)m apart, were carried out across a minimum of 10 grains on each sample.

## 3 Results

### 3.1 Irradiation-induced lattice strain

#### 3.1.1 Extracting strain from diffraction data

Figure 2(a) and (d) show the measured integrated intensity for the (004) peak at 0.8 dpa in the Fe and Fe10Cr samples, respectively. The intensity is plotted as a function of the scattering vector magnitude \(Q=|\mathbf{Q}|\) and of the reconstructed depth into the surface. The depth-reconstruction and integration procedure was performed with LaueGo [55]. The distribution of \(Q\) changes as a function of depth (Figure 2(b) and (e)), with the distribution in the first 6 \(\upmu\)m shifting towards lower \(Q\) values compared to those measured from 7 \(\upmu\)m and deeper. From Figure 1, the predicted range of displacement damage and injected ions only extends to 3.5 \(\upmu\)m below the surface. Therefore, the distribution of \(Q\) at depths much greater than this can be attributed to the undamaged material in each sample, conveniently serving as a built-in strain-free reference. For each depth, a single Gaussian function was fitted to the intensity distribution in \(Q\), and the centre of the distribution (\(Q_{c}\)) was extracted. An intensity-weighted average of \(Q_{c}\) from depths 7 to 12 \(\upmu\)m was used as the strain-free reference \(Q_{0}\) of each sample. The lattice strain \(\epsilon\) at each depth is calculated using the expression:
\[\epsilon=\frac{Q_{0}-Q_{c}}{Q_{c}}. \tag{1}\]
Performing this analysis for all depths yields the results shown in Figure 2(c) and (f), respectively, for Fe and Fe10Cr implanted to 0.8 dpa.

Figure 2: The integrated intensity plotted as a function of the magnitude of the scattering vector, \(Q\), and depth perpendicular to the surface of the sample for (a) Fe at 0.8 dpa and (d) Fe10Cr at 0.8 dpa. The colour scale has been normalised. (b), (e) Intensity vs. \(Q\) for different depths into the sample (indicated by white arrows in (a) and (d)). (c), (f) The corresponding lattice strain in the sample as a function of depth into the surface. The dashed line showing zero strain is included for comparison.
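The strain-extraction step described above (a single-Gaussian fit to each intensity vs. \(Q\) profile, an intensity-weighted reference \(Q_{0}\) from the unimplanted depths, and Eq. (1)) can be sketched in a few lines of Python. This is an illustrative reconstruction with assumed array shapes and function names, not the authors' LaueGo-based pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(q, amp, q_c, sigma):
    return amp * np.exp(-0.5 * ((q - q_c) / sigma) ** 2)

def peak_centre(q, intensity):
    """Fit a single Gaussian to one intensity vs. Q profile; return the centre Q_c."""
    p0 = [intensity.max(), q[np.argmax(intensity)], 0.05 * (q[-1] - q[0])]
    popt, _ = curve_fit(gaussian, q, intensity, p0=p0)
    return popt[1]

def lattice_strain_profile(q, intensity_vs_depth, ref_mask):
    """intensity_vs_depth: (n_depths, n_q) array of integrated intensities.
    ref_mask: boolean mask selecting the strain-free reference depths
    (7-12 um below the surface in this study)."""
    q_c = np.array([peak_centre(q, profile) for profile in intensity_vs_depth])
    # Intensity-weighted average of Q_c over the reference depths gives Q_0.
    weights = intensity_vs_depth[ref_mask].sum(axis=1)
    q_0 = np.average(q_c[ref_mask], weights=weights)
    return (q_0 - q_c) / q_c  # Eq. (1): positive strain = lattice expansion
```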
These plots show that irradiation causes lattice strain changes down to 6 \(\upmu\)m below the surface, which is significantly deeper than the damage layer thickness predicted by SRIM (Figure 1). The strain profile increases with depth, reaching a peak at 3 - 3.5 \(\upmu\)m before decreasing to zero at depths greater than 6 \(\upmu\)m. Interestingly, at depths corresponding to the peak injected ion concentration, the intensity vs. \(Q\) profile is not a single Gaussian distribution (depths of 2.8 - 3.2 \(\upmu\)m in Figure 2(b) and (e)). An additional peak appears at lower \(Q\), corresponding to lattice expansion. This is presumably a result of injected ions existing as interstitials in the material, which have a positive relaxation volume, leading to lattice swelling [56]. The appearance of an additional \(Q\) peak was only observed for damage levels of 0.8 dpa or higher. Very clear splitting was observed for all FeCr alloys, while Fe samples only showed a slight asymmetric peak broadening. This study focuses on the lattice strain associated with the collision damage cascade formed during irradiation. As such, the subsequent analysis will focus on the measurements in the depth range of 0 to 2 \(\upmu\)m, where the effect of the injected ions is negligible.

#### 3.1.2 Strain as a function of dose

The lattice strain caused by the displacement damage, as opposed to the injected ions, was examined by analysing the average strains in the top 2 \(\upmu\)m of the samples (Figure 3). The reported uncertainties in lattice strains are the standard deviation values across the top 2 \(\upmu\)m of all measurements for each sample. The plotted error bars for dose represent the dose range in the top 2 \(\upmu\)m of the irradiated samples. For all sample compositions, except Fe5Cr, no lattice strain was detected at 0.8E-4 dpa. Lattice strain increases monotonically with dose from 0.8E-4 dpa to 0.8 dpa for all sample compositions. In the case of pure Fe, the strain does not change significantly as a function of dose beyond 0.8 dpa. However, for all FeCr alloys, there is a reduction in strain for doses greater than 0.8 dpa. Positive lattice strain indicates lattice expansion associated with crystal defects that have a positive relaxation volume (\(\Omega\), in units of atomic volume). In BCC Fe, self-interstitials have a positive relaxation volume (\(\Omega_{int}\sim\) 1.6-1.8), while vacancies have a negative relaxation volume (\(\Omega_{vac}=-0.220\)) [56]. Therefore, the net effect of a Frenkel pair is a positive relaxation volume. A reduction in lattice strain thus suggests that there is a removal of interstitial defects from the system with increasing irradiation dose, or a greater retention of vacancies than interstitials.

#### 3.1.3 Strain as a function of Cr

For all irradiation dose levels, a non-monotonic relationship between lattice strain and Cr content is observed (Figure 4). Pure Fe samples exhibit the lowest lattice strain level compared to FeCr samples at any given dose, except 6.0 dpa.

Figure 3: The average out-of-plane lattice strain in the top 2 \(\upmu\)m of each sample plotted as a function of irradiation dose for all compositions. The vertical error bars represent the standard deviation in lattice strain across the 2 \(\upmu\)m layer. The horizontal error bars represent the range of dose levels within the first 2 \(\upmu\)m of the samples. Markers have been offset horizontally for clarity.
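The conversion from measured out-of-plane strain to an equivalent defect content (detailed in the paper's Appendix B, which is not reproduced in this excerpt) can be sketched as follows, assuming isolated Frenkel pairs with the relaxation volumes quoted above and a simple isotropic relation between the out-of-plane strain of an in-plane-constrained layer and the unconstrained volumetric swelling. The Poisson's ratio, lattice parameter, and elastic prefactor are our assumptions, not values taken from the paper.

```python
# Rough strain-to-defect-density conversion, assuming isolated Frenkel pairs
# and an isotropic implanted layer constrained in-plane by the substrate.
# All numerical inputs here are our assumptions, not values from Appendix B.
a_fe = 2.866e-10               # lattice parameter of BCC Fe (m)
omega_0 = a_fe**3 / 2          # atomic volume (m^3): 2 atoms per BCC cell
omega_fp = 1.7 + (-0.22)       # net Frenkel pair relaxation volume (atomic volumes)
nu = 0.29                      # Poisson's ratio of Fe (assumed)

def frenkel_pair_density(strain_zz):
    """Equivalent Frenkel pair density (m^-3) from the out-of-plane strain,
    using eps_zz = (1 + nu) / (3 * (1 - nu)) * (DeltaV / V) for a thin layer
    that cannot strain in-plane, and DeltaV/V = n * omega_fp * omega_0."""
    dv_over_v = strain_zz * 3 * (1 - nu) / (1 + nu)
    return dv_over_v / (omega_fp * omega_0)

# A strain of 1e-4 maps to ~9.5e24 m^-3 under these assumptions, the same
# order of magnitude as the equivalent densities quoted in this paper.
print(f"{frenkel_pair_density(1e-4):.2e}")
```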
Lattice strain appears to be greatest for Fe5Cr and Fe8Cr at all measured doses. Fe10Cr and Fe12Cr generally show a similar amount of lattice strain at a given dose level.

#### 3.1.4 Onset of strain with dose

At the lowest dose investigated for this study (0.8E-4 dpa), all samples except Fe5Cr showed no statistically significant strain at any depth of the implanted layer (some representative examples are shown in Figure 5). There are some measurement points for Fe and Fe10Cr which appear to show negative strain. However, considering the size of the error bars, which represent \(\pm\) 1 standard deviation from values across all measurements on each particular sample, these negative values are likely due to experimental uncertainty rather than an implantation effect.

Figure 4: The average out-of-plane lattice strain in the top 2 \(\upmu\)m of each sample as a function of Cr content for 0.8E-4 dpa, 0.8E-2 dpa, 0.8 dpa and 6.0 dpa. The vertical error bars represent the standard deviation in lattice strain in the 2 \(\upmu\)m layer for all measurements taken for each particular composition and dose.
Low irradiation doses were chosen because the NRT-dpa definition and the NRT efficiency are only applicable for non-overlapping cascades [59]. Furthermore, at low dose, the effect of defect clustering can be minimised, which allows a more accurate conversion of strain to defect density. This is important as one of the key assumptions made in the defect density calculation is that all defects present in the samples are isolated Frenkel pairs (Appendix B). The NRT efficiency is highest for Fe5Cr and Fe8Cr at both doses examined (Figure 6(a)), and the values fall within the expected range of 0.2-0.3 reported in the literature [58]. It is the lowest for pure Fe at \(\sim\)0.02. There is a reduction in NRT efficiency for all FeCr samples from 0.8E-3 dpa to 0.8E-2 dpa. This suggests either a decrease in defect retention, or the occurrence of significant defect clustering or cascade overlap. Interestingly, this is not the case for pure Fe, which has the same NRT efficiency at both doses within measurement error.

Figure 6: (a) The NRT efficiency as a function of Cr content for samples irradiated to 0.8E-3 dpa (red circles) and 0.8E-2 dpa (blue squares). (b) The measured lattice strain for Fe (purple), Fe5Cr (green) and Fe10Cr (orange) as a function of dose, compared to the predicted strain for a linear accumulation of isolated defects based on the NRT efficiency calculated at 0.8E-3 dpa (dashed lines of corresponding colours). The black dotted line shows the predicted strain profile in the case of 100% defect production and retention (i.e. NRT efficiency = 1).

By comparing to the theoretical strain from 100% defect production and retention (i.e. NRT efficiency = 1), it can be seen that our samples are in the regime where defect retention is much lower than predicted by the NRT model (Figure 6(b)). For all FeCr samples, defect accumulation deviates from linear behaviour from 0.8E-3 dpa onwards. For Fe, this deviation happens beyond 0.8E-2 dpa. This suggests the 'low dose' regime, before the onset of cascade overlap, ends between 0.001 and 0.01 dpa. This transition dose also depends on NRT efficiency, with a lower dose threshold for higher NRT efficiency materials (such as Fe5Cr and Fe8Cr).

### Hardness

The observed hardening with the increase of Cr concentration in the unirradiated samples indicates solid solution hardening (Figure 7). There is a 38% increase in hardness from Fe to Fe12Cr. The increase in hardness with Cr content is also monotonic at all irradiation doses. Furthermore, there is a general trend of hardness increasing monotonically with dose up to 6.0 dpa, the highest dose investigated for this study. For Fe3Cr and Fe5Cr, the curve flattens out beyond 0.8 dpa, suggesting the onset of saturation in irradiation hardening. A similar trend is observed for pure Fe, although the measurement at 3.6 dpa makes this trend slightly unclear. The difference in hardness between samples of different Cr content at each dose increases with dose. This corresponds to a greater irradiation hardening rate for samples of higher Cr content.

## 4 Discussion

### Trends with dose

#### 4.1.1 Low dose effects (\(<\) 0.008 dpa)

The lattice strain for Fe5Cr observed at 0.8E-4 dpa (the lowest dose in this study) corresponds to a lower-bound equivalent Frenkel pair density of \(5.9\times 10^{24}\) m\({}^{-3}\). The dose threshold for defect observation with TEM in irradiated FeCr (manufactured under the same project as the samples in this study) has been reported as 1.5E-3 dpa [60].
This threshold is 20 times higher than the dose at which we observe the onset of defect-induced lattice strain. The authors of the TEM study estimated the defect density at 1.5E-3 dpa to be \(6.5\times 10^{20}\) m\({}^{-3}\). From the lattice strain measurements at a similar dose in this study (0.8E-3 dpa, Figure 3), the estimated defect density is between \(6.0-17.1\times 10^{24}\) m\({}^{-3}\) for Fe and FeCr alloys.

Figure 7: Hardness of all samples, averaged between indentation depths of 300 to 600 nm, plotted as a function of dose. The unimplanted samples are included on the left side of the plot as a reference. The error bars represent \(\pm 1\) standard deviation of all measurements performed on each sample. Markers have been offset horizontally for clarity.

The significant difference in defect density estimates from X-ray lattice strain measurements compared to TEM measurements may be attributed to the limited sensitivity of TEM to very small defect features (\(<\) 2 nm) [38]. At low doses, these small defects are expected to dominate the defect population [23, 39]. Similar findings from a different study of Fe irradiated with neutrons at 325-345 K revealed that the defect cluster density estimated by TEM is over two orders of magnitude lower than that determined by positron annihilation spectroscopy [40]. From that study, a nanocavity density of \(2\times 10^{23}\) m\({}^{-3}\) at 1E-4 dpa was measured, with most nanocavities less than 1 nm in diameter (below the TEM resolution limit). Even though this nanocavity density value is different from the equivalent Frenkel pair density found at 0.8E-4 dpa in this study, the difference can be reconciled by considering that a 1 nm nanocavity could contain the equivalent of up to 10 vacancy point defects [61].

Irradiation hardening was observed for all Fe and FeCr samples at 0.8E-4 dpa. The amount of hardening increases with Cr content, from 2% hardening for pure Fe to 5% hardening for Fe12Cr. In the literature, a 10% increase in yield stress (related to changes in indentation hardness) was observed for pure Fe following room temperature neutron irradiation to 1.2E-4 dpa, comparable to the lowest dose explored in this study [40]. However, the TEM characterisation on the same sample did not identify any visible defects. A neutron dose of \(\sim\)0.7E-3 dpa in Fe, Fe4Cr and Fe9Cr at the same temperature as the current study (313 K) has been found to cause an increase in yield stress of 35-40% [62]. In comparison, at 0.8E-3 dpa, an increase of 3-13% (increasing with Cr content) in nanoindentation hardness was measured in our study. However, a direct comparison of nanoindentation hardness and yield stress from bulk tensile testing is not straightforward due to the difference in length scales and mechanisms [63, 64]. Nanoindentation hardness depends on both the availability of sources for the nucleation of dislocations, and their subsequent propagation. On the other hand, macroscopic yield stress from bulk tensile testing depends mainly on the mobility of pre-existing dislocations. Another reason for the discrepancy between our study and that of Hammad _et al._ could be the different levels of impurities present. The carbon content of our samples is approximately 20 appm, which is 10 times lower than that reported in [62]. Lower impurity levels could greatly reduce defect retention, leading to less irradiation hardening in our samples.
The calculation of NRT efficiency at 0.8E-3 dpa for Fe5Cr (0.22) and Fe8Cr (0.19) is in good agreement with literature values (0.2-0.3) [58, 65]. An important consideration in this comparison concerns the clustering of defects. In this study, the calculation of defect density relies on the assumption that defects remain as isolated Frenkel pairs. When clustering occurs, the relaxation volume per defect decreases, requiring more Frenkel pairs to be present to produce the same amount of strain as a corresponding population of isolated point defects. Hence the estimates of defect density and NRT efficiency can be interpreted as lower-bound estimates. Furthermore, the measurements of NRT efficiency in most other studies [58] were performed at 4 K, which reduces the rate of defect recombination. Higher temperatures, such as in our study, therefore result in a further decrease of NRT efficiency [66]. The low levels of impurities present in our materials may further limit defect retention [67], which could contribute to a lower experimentally measured value of NRT efficiency compared to the other studies.

In the low dose regime (0.8E-4 dpa to 0.8E-3 dpa), an increase in irradiation dose leads to a monotonic increase in both lattice strain and hardness. Defects likely evolve from isolated Frenkel pairs to some clustering, resulting in a reduction in NRT efficiency with increasing dose. Eventually, the threshold dose is reached for direct observation of defects in TEM (\(\sim\)1E-3 dpa [60]).

#### 4.1.2 Intermediate dose effects (0.008 \(\leq\) dose \(<\) 0.8 dpa)

The hardness and lattice strain measurements of the present study are in good agreement with previous measurements conducted on Fe3Cr, Fe5Cr and Fe10Cr (same raw materials) under similar conditions to doses of 0.8E-2 and 0.8E-1 dpa [30]. The lattice strain measurements of this study agree with the previous measurements (averaged within the top 2 µm of the sample) to within 30%. Discrepancies can be attributed to measurement uncertainties for small strains and slight variations between individual grains. The hardness measurements of this study agree with those from [30] to within 9%. The difference in hardness values can be attributed to the grain orientation specificity [68] of the previous study, which only considers grains with \(\langle 100\rangle\) out-of-plane orientation. In both cases, the evolution of hardness is similar, with a monotonic increase with both dose and Cr content.

In the intermediate dose regime, the rate of lattice strain increase decreases with increasing dose. This indicates a deviation from a linear accumulation of isolated Frenkel pairs (\(\Omega\sim 1.4-1.6\)). This could result from the clustering of defects or a change in the ratio of interstitial to vacancy defects. Several factors likely contribute to this. The first is the effect of cascade overlap, which becomes important at doses greater than \(\sim\) 1E-2 dpa, leading to the formation of dislocation loops [37]. This causes a reduction in the relaxation volume, and thus the lattice swelling contribution, per defect. Another factor is the reduction in the survival rate of subsequently introduced defects due to pre-existing defects in the crystal [33, 34]. This leads to an overall lower rate of defect population growth. The effect of finite temperature is also important. Stages of defect recovery as a function of temperature have been identified from resistivity studies in Fe [69].
At 313 K, the active mechanisms include Frenkel pair recombination, as well as di-interstitial and vacancy migration. Since interstitials have much greater mobility than vacancy defects, they are more likely to cluster and reduce their contribution to overall lattice strain per defect. Furthermore, the increase in the concentration of defects will also lead to a greater rate of recombination [69]. The net effect is the reduction in the rate of lattice strain increase, which is observed for all samples.

#### 4.1.3 High dose effects (\(\geq\) 0.8 dpa)

The reduction of lattice strain with increasing irradiation dose above 0.8 dpa in FeCr suggests a net removal of interstitial defects. This has been observed previously in self-ion irradiated tungsten by Mason _et al._ [31]. In tungsten, lattice strain increased monotonically up to 0.032 dpa, before dropping to zero at 0.056 dpa and ultimately becoming negative at doses beyond 1 dpa. This experimental result was compared to simulation results from Frenkel pair creation and insertion using the creation-relaxation algorithm (CRA). The CRA involves randomly displacing an atom to a new position within the simulation cell and then minimising the global potential energy in the cell [70]. Agreement was obtained between experiments and simulation results regarding the dose at which the strain peaked and then changed sign. However, the simulations predicted a strain level that was 10 times greater in magnitude than the experimental observations.

For iron, Derlet and Dudarev have previously simulated strain induced by irradiation using CRA up to 2.5 cdpa (canonical dpa) [70]. Note that the definition of cdpa in CRA simulations is the ratio of the number of Frenkel pairs inserted to the total number of atoms in the simulation. This is an analogous measure of defect production to dpa. From CRA in iron, an increase of strain is observed up to 0.07 cdpa, peaking at a strain of \(5\times 10^{-3}\) before decreasing to zero strain beyond 2 cdpa. The magnitude of strain predicted by these simulations is again 10 times greater than that experimentally observed in our study. The theoretical prediction of the dose threshold for maximum lattice strain is also a factor of 10 lower than we observe experimentally. These observations are similar to the results found for tungsten [31].

The phenomenon of a positive peak in strain followed by a trend towards zero strain (and in the case of tungsten, ultimately negative strain) at high doses is attributed to the formation of large-scale defect microstructures from an accumulation of interstitial defects [31]. As the density of interstitial defects increases with dose, the defects begin to coalesce. The growth of these extended defect structures eventually causes an evolution back towards a less defective crystal structure, which removes positive lattice strain [70]. As vacancies are much less mobile, they largely remain as isolated defects, contributing negative lattice strain, which becomes dominant at high dose. However, unlike in tungsten, the lattice strain in iron and iron-chromium alloys does not become negative, either in simulations [70] or experimentally in this study. The reason for this could be the ratio of mobilities for interstitial and vacancy defects. In BCC iron, the migration energy of a single vacancy is 0.65 eV [71] and for a self-interstitial loop, it can be as low as 0.1 eV [72].
Considering an Arrhenius rate behaviour for thermally-induced defect movement in crystals, there is a factor of \(\sim\) 1.7 difference between interstitial and vacancy mobility. This is much lower than the corresponding ratio for tungsten (\(>\) 5) [73, 74]. This means that for iron, a higher rate of Frenkel pair recombination is expected. Furthermore, the rates of migration and annihilation of defects to sinks (e.g. dislocations and grain boundaries), and of defect clustering, would be more comparable between interstitial-type and vacancy-type defects. Therefore, even after the formation of extended dislocation networks at high doses, the imbalance of interstitial-type and vacancy-type defects is not sufficient to cause the net lattice strain in iron to become negative. However, for tungsten, the low mobility of vacancies causes them to remain 'frozen' in the lattice, such that they ultimately cause a negative net strain at high doses after the interstitial defects aggregate to form new crystal lattice planes [31].

The formation of large extended defect microstructures in iron following room temperature high-dose (\(>\) 6.5 dpa) irradiation has been previously observed by TEM [75]. This dose is different to the dose threshold of strain reduction that resulted from defect coalescence in the present study. However, it is worth noting that TEM irradiation suffers from defect loss to the surface of the foil, which would delay the onset of defect coalescence [76].

The lattice strain predicted by CRA simulations of Fe [70] peaks at a dose 10 times smaller than the experimental observations in FeCr. This could be attributed to the greater mobility of defects in Fe and FeCr at 313 K, which would lead to increased recombination and sink-annihilation of Frenkel pairs, thus delaying defect evolution and the formation of extended structures. This temperature effect could also be the reason for the factor of 10 difference in the magnitude of strain between simulations and experiments. Since CRA simulations are athermal, with purely stress-driven relaxation of the lattice, there will be much higher levels of defect retention due to lower recombination rates. This is evident when comparing defect content and NRT efficiency between these experiments and the CRA simulations. From CRA simulations, the ratio between defect content and cdpa value (analogous to NRT efficiency) for cdpa \(<\) 0.01 is close to 1, whereas from experiments, the NRT efficiency beyond 0.008 dpa is less than 0.1 for all compositions in this study (Figure 6), in part due to temperature-enhanced recombination. It is worth noting that the presence of impurities in the experimental samples, even at low concentrations, compared to a perfect starting crystal in CRA simulations, would cause enhanced retention of defects [77, 78, 79]. This effect would act in opposition to that caused by finite temperatures, as discussed previously. Interestingly, in our case, it appears that the temperature effect is dominant, resulting in a net delay in lattice strain evolution as a function of dose. This is consistent with the low impurity levels of the as-received materials in this study.

The evolution of nanoindentation hardness is markedly different to the behaviour of lattice strain in this dose range above 0.8 dpa. We do not observe any change in the sign of the hardness evolution. Surprisingly, there do not appear to be any prior studies on hardness changes in FeCr as a function of dose following room temperature irradiation with self-ions.
A room-temperature study of RAFM T91 steel irradiated with Ar ions suggests hardness saturation at 4 dpa [80]. A study of CLAM steel irradiated with Xe ions at room temperature up to 5 dpa showed no hardness saturation [81]. However, direct comparison with the results of the present study is not straightforward due to the use of noble gases as irradiating particles in these previous works. Self-ion studies [21, 22] have only been performed for irradiation at 573 K and showed hardness saturation for pure Fe and FeCr alloys with Cr content greater than 5% above 1 - 2 dpa. For low Cr content, no saturation was observed even up to 10 dpa. The trends from these studies conducted at higher irradiation temperatures differ from our results. Following room temperature irradiation, Fe, Fe3Cr and Fe5Cr approached hardness saturation at doses \(\geq\) 0.8 dpa, but Fe8Cr, Fe10Cr and Fe12Cr continued to harden with increasing irradiation dose.

One reason for the discrepancy could be some carbon contamination from the irradiation process. Atom probe tomography data for Fe10Cr at 0.8 dpa are included in Appendix C. We estimate the carbon enrichment to be \(\sim\) 100 appm after 0.8 dpa, compared to 18 appm reported by the manufacturer for the as-made samples. The additional carbon introduced, which scales with irradiation dose, could contribute in part to the hardening seen at high doses [82]. Another reason for the observed difference in hardening trends with respect to other studies at higher irradiation temperatures could be enhanced defect mobility, which leads to more defect recombination. TEM studies have shown the movement of dislocation loops at 573 K around 0.6 dpa [75]. Even at lower doses, the same authors observed a much greater fraction of mobile loops at 573 K compared to at room temperature [37]. As such, one might expect the asymptotic or saturation state of defects in irradiated Fe and FeCr to occur at a lower dose for irradiation at higher temperatures. In contrast to the lattice strain trends in the high dose regime, the monotonic increase of hardness as a function of irradiation dose indicates that the presence of all irradiation defects, both isolated and within aggregated structures such as dislocation loops and networks, contributes to the hardening of Fe and FeCr.

### Trends with Cr content

The trends for lattice strain and hardness as a function of Cr content differ greatly. While an increase in Cr content at each dose is associated with a greater increase in hardness, lattice strains peak between Fe5Cr and Fe8Cr at each dose, with lower strains for higher and lower Cr content. As previously discussed, hardness is an indicator of the overall defect population, with interstitials and vacancies contributing additively. Lattice strain depends on the imbalance between interstitial-type and vacancy-type defects, as their respective contributions (relaxation volumes) have opposite signs. By examining both trends, we can gain an insight into how the defect population is affected by Cr content.

There is a monotonic increase in the hardness and hardening rate of Fe and FeCr with Cr content. This suggests that defect retention increases with Cr content, which causes an increase in defect number density and/or size [83]. Heintz _et al._ measured hardness after room temperature irradiation to 1 dpa and similarly found that hardening increased with Cr content [21]. Interestingly, no hardening was observed for Fe2.5Cr in that study, which is different to our results.
In contrast, the lattice strain trend is non-monotonic, with maximum lattice strain at Cr content between 5-8%. This suggests that the defect population and types depend on Cr content. Larger positive lattice strain can arise from a larger population and/or less clustering of interstitial-type defects, as well as a smaller population and/or more clustering of vacancy-type defects. Since interstitials are more mobile than vacancies in iron [71], their population and mobility may have a greater effect on lattice strain at room temperature.

It is interesting to note that FeCr exhibits many non-monotonic trends in the range of Cr content less than 20%. One such trend is that the change in ductile-to-brittle-transition temperature reaches a local minimum at 9%Cr [4]. Irradiation-induced void swelling also varies non-monotonically with Cr content [3, 5, 7]. The diffusivity of interstitial defects in FeCr is also a non-monotonic function of Cr content, as identified by molecular dynamics and Monte Carlo studies [84]. At low Cr concentrations, an increase in Cr content will increase the binding of self-interstitial clusters to the Cr atoms, reducing their mobility [24]. With further increase of Cr content past a critical minimum point, each interstitial atom could be interacting with more than one Cr atom simultaneously, effectively 'pulling' the defect in opposite directions. This then results in an increase of interstitial diffusivity with Cr content. The point of minimum interstitial diffusivity occurs around 10%Cr [85]. This mechanism has also been used to explain the local minimum of void swelling at 5%-10%Cr observed in some studies [3, 85]. It is proposed that the origin of void swelling lies in the absorption of fast-moving interstitial defects at sinks, such as grain boundaries and surfaces, leaving behind vacancies and vacancy clusters that subsequently form voids. A reduction in interstitial cluster mobility will limit their migration to sinks and thus enhance recombination, thereby reducing void swelling. Furthermore, Cr only weakly interacts with vacancies [86], so their population is not affected as strongly by Cr content.

Relating this to our observations, the balance of interstitial to vacancy defects will be strongly correlated with the mobility of interstitial defects. A reduction in the mobility of interstitial clusters will increase the ratio of interstitials to vacancies in the material, leading to a greater positive lattice strain. The range of Cr content (5-8%) that corresponds to the highest level of lattice strain in this study is consistent with the range for minimum void swelling and cluster diffusivity [3, 84], suggesting this could be an explanation for the non-monotonic dependence of lattice strain on Cr content. However, we note that trends for void swelling depend strongly on irradiation temperature, dose, and sample processing history (e.g. cold-worked or annealed). Further systematic studies on void swelling could reveal more insights into the role of Cr on defect populations.

For pure Fe, a delayed onset of lattice strain saturation is observed compared to the FeCr alloys. Even at 6.0 dpa, the lattice strain does not appear to have reached a peak, unlike the FeCr alloys, where a decrease of lattice strain occurs beyond 0.8 dpa (Figure 3). The NRT efficiency for all FeCr alloys is reduced between 0.8E-3 dpa and 0.8E-2 dpa (Figure 6(a)).
This suggests that defect clustering is occurring, as it causes a reduction in strain contribution per defect, which leads to an underestimation of the defect density in the material. For pure Fe, the NRT efficiency remains constant between 0.8E-3 dpa and 0.8E-2 dpa, corresponding to a linear increase in isolated defect density. This suggests that pure Fe should exhibit defect clustering at a higher dose than FeCr. Furthermore, due to the low impurity content in the Fe material, the barrier to defect recombination is low, resulting in a low NRT efficiency and defect retention. As a result, all stages of defect evolution in Fe are shifted to higher doses compared to FeCr, where defect retention is higher due to the binding of interstitial defects to Cr, as discussed earlier.

## 5 Summary and Conclusion

In this study, the effect of ion-irradiation at 313 K on lattice strain and hardness in Fe and FeCr alloys was investigated. A dose range of 0.00008 dpa to 6.0 dpa was covered for pure Fe and FeCr up to 12%Cr. The key findings are as follows:

* Irradiation hardening was observed for all Fe and FeCr alloys at a dose as low as 0.00008 dpa. Non-zero lattice strain in the implanted layer was also measured at that dose for Fe5Cr, which corresponds to an equivalent Frenkel pair density of \(5.9\times 10^{24}\) m\({}^{-3}\). This is well below the dose limit for any defect detection reported in the literature by electron microscopy.
* The NRT efficiency was calculated for all alloys at 0.0008 dpa. The highest values (\(\sim 0.2\)) were found for Fe5Cr and Fe8Cr, which agree with studies in the literature at similar ion energies.
* FeCr alloys reach a maximum positive strain at a dose of 0.8 dpa. Further increase in dose caused a reduction in lattice strain due to the formation of extended interstitial defect structures. This agrees with simulation results after accounting for the effects of finite temperature and impurities.
* A delay in the onset of interstitial clustering as a function of dose is observed in Fe compared to FeCr. No maximum in lattice strain was observed in Fe even at 6.0 dpa. In the absence of Cr, there is comparatively little defect pinning in Fe, especially given the high purity of the as-received material. As a result, the NRT efficiency is low. Combined with the temperature, this means the evolution of lattice strain in Fe at room temperature is delayed to significantly higher doses than in FeCr.
* Fe5Cr and Fe8Cr exhibit the highest level of positive strain out of all FeCr samples studied at any given dose level. This falls into the range of Cr concentration where the diffusivity of self-interstitial clusters is lowest, increasing their retention.
* There is a monotonic increase in hardness with irradiation for all samples investigated. However, even by 6.0 dpa, irradiation hardening has not reached saturation for Fe or FeCr alloys. Fe, Fe3Cr and Fe5Cr exhibit very low hardening rates, while Fe8Cr, Fe10Cr and Fe12Cr exhibit higher hardening rates at doses beyond 0.08 dpa.

## Declarations

### Funding

The authors acknowledge use of characterisation facilities within the David Cockayne Centre for Electron Microscopy, Department of Materials, University of Oxford, alongside financial support provided by the Henry Royce Institute (Grant ref EP/R010145/1). This research used resources of the Advanced Photon Source, a U.S. Department of Energy (DOE) Office of Science User Facility operated for the DOE Office of Science by Argonne National Laboratory under Contract No. DE-AC02-06CH11357.
KS acknowledges funding from the General Sir John Monash Foundation and the University of Oxford Department of Engineering Science. FH acknowledges funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 714697). DEJA acknowledges funding from EPSRC grant EP/P001645/1.

### Conflicts of interest

The authors have no relevant financial or non-financial interests to disclose.

### Data and code availability

All data, raw and processed, as well as the processing and plotting scripts are available at: _A link will be provided after the review process and before publication._

## Acknowledgements

The authors are grateful to Sergei Dudarev (United Kingdom Atomic Energy Authority) for his helpful discussions. The authors are also grateful to Andy Bateman and Simon Hills (Department of Engineering Science, University of Oxford) for their assistance with making the temperature-controlled sample holder for ion implantation.

## Appendix A - EBSD maps of samples

Electron backscatter diffraction (EBSD) was carried out on a Zeiss Merlin FEG-SEM at the David Cockayne Electron Microscopy Centre at the University of Oxford. The acceleration voltage used was 30 kV with a probe beam current of 10 nA. Post-processing was performed using the Oxford Instruments HKL Channel 5 Tango software to remove noise and determine the grain sizes. The representative orientation maps (Figure A-1) were used to select the appropriate areas for lattice strain measurements. The microstructure of the Fe8Cr material also exhibits signs of cold-work (intragranular misorientation). However, this did not have a significant impact on the lattice strain measurements.

## Appendix B - Defect density calculations

The method used to calculate the defect density from the lattice strain measurements is similar to the method used in our previous study [30]. For further details and example calculations, readers are referred to the supplementary file of [30]. In this section, we provide an overview of the equations used and the assumptions made in our defect calculations.

### B1 - Defect Density from Lattice Strain Measurements

The out-of-plane strain, \(\epsilon_{zz}\), induced by irradiation defects can be expressed as [51]:

\[\epsilon_{zz}=\frac{1}{3}\frac{(1+\nu)}{(1-\nu)}\sum_{A}n^{(A)}\Omega_{r}^{(A)}\] (B-1)

where \(\nu=0.3\) is the Poisson ratio (for pure Fe [87]), and \(n^{(A)}\) and \(\Omega_{r}^{(A)}\) are respectively the per-atom number density and relative relaxation volume for each type of defect \((A)\). For low irradiation dose (\(\leq\) 0.008 dpa), we made the following assumptions in our calculations:

* There is no clustering of interstitials or vacancies. This means the relaxation volume per point defect is maximised [88].
* There is no loss of defects, particularly interstitials, to the surface of the material or to sinks such as grain boundaries. This means that there is an equal number of interstitial atoms (\(n^{i}\)) and vacancies (\(n^{v}\)).

Using these assumptions, we obtain a lower bound on the equivalent Frenkel pair density (\(n^{FP}\)). Clustering of interstitials would decrease the relaxation volume per point defect [56], requiring more Frenkel pairs to be present in order to produce the same amount of positive strain that was measured (covered by assumption 1). If interstitials were lost to the surface, more equivalent Frenkel pairs would need to be present to account for the amount of positive strain measured (covered by assumption 2).
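Under these assumptions, Eq. B-1 can be checked numerically in the forward direction. The sketch below uses a hypothetical per-atom Frenkel pair density as input, with the relaxation volumes quoted below (Eq. B-2 discussion).

```python
NU = 0.3  # Poisson ratio of Fe

def strain_from_defects(densities_and_volumes):
    """Equation B-1: out-of-plane strain from per-atom defect densities n^(A)
    and relative relaxation volumes Omega_r^(A)."""
    total = sum(n * omega for n, omega in densities_and_volumes)
    return (1.0 / 3.0) * (1.0 + NU) / (1.0 - NU) * total

# Hypothetical per-atom Frenkel pair density of 7e-5 (equal interstitials and vacancies):
n_fp = 7.0e-5
defects = [(n_fp, 1.86),    # <100> interstitials
           (n_fp, -0.22)]   # vacancies
print(f"{strain_from_defects(defects):.1e}")  # ~7.1e-5 out-of-plane strain
```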
Rearranging Equation B-1 and multiplying by the atomic density of Fe (\(\rho_{Fe}=8.48\times 10^{28}\) m\({}^{-3}\)) yields the volumetric number density of equivalent Frenkel pairs:

\[N^{FP}=\frac{\rho_{Fe}\epsilon_{zz}}{\Omega_{r}^{FP}}\left(\frac{3(1-\nu)}{(1+\nu)}\right)\] (B-2)

The relative relaxation volume, per point defect, of a \(\langle 111\rangle\) interstitial is \(\Omega_{r}^{\langle 111\rangle}=1.65\) and that of a \(\langle 100\rangle\) interstitial defect is \(\Omega_{r}^{\langle 100\rangle}=1.86\) [56]. For a vacancy, the relative relaxation volume is \(\Omega_{r}^{v}=-0.22\) [56]. As a \(\langle 100\rangle\) interstitial has a higher positive relaxation volume, we can use \(\Omega_{r}^{FP}=1.86-0.22=1.64\) to get a lower-bound estimate on the equivalent Frenkel pair density in the irradiated samples.

### B2 - NRT Efficiency Calculation

The NRT efficiency describes the ratio of the produced and retained defect density to the value predicted by the NRT-dpa model [58]. This can be expressed as:

\[\text{NRT efficiency}=\frac{n^{FP}}{dpa}=\frac{\epsilon_{zz}}{(dpa)\,\Omega_{r}^{FP}}\left(\frac{3(1-\nu)}{(1+\nu)}\right)\] (B-3)

## Appendix C - Atom probe tomography results

Due to concerns about possible carbon contamination during the irradiation process, atom probe tomography (APT) was performed on the Fe10Cr sample, exposed to 0.8 dpa, to quantify the concentration of carbon present in the samples. APT analysis was performed using a Leap 5000XR microscope in laser mode. The laser energy was 50 pJ with a pulse frequency of 200 kHz and a detection rate of 0.5. Samples were maintained at 50 K for the analysis.

The depth profiles of a few key elements are presented in Figure C-1. Near the surface (\(0<\) depth \(<\) 100 nm), there is a high concentration of carbon and nitrogen, possibly due to surface contamination and the FIB milling process. There is a spike in carbon concentration at \(\sim\) 400 nm depth, which also correlates with a spike in oxygen atoms and C-Cr ions (not shown in the plot).

APT results indicate a higher concentration of impurity elements in the Fe10Cr alloy than reported by the manufacturer (analysed by glow discharge mass spectrometry [44]). For N, O and P, the concentrations measured by APT are between 7 and 11 times higher than the manufacturer-reported values. For C, the difference is a factor of 30. It is important to consider that the APT sample preparation required FIB milling and lift-out, which itself will introduce some contamination. Similar discrepancies between the manufacturer-reported values and those measured by APT have been reported previously for samples cut from the same raw material [89]. The true amount of carbon enrichment following irradiation is probably around 100-200 appm for the 0.8 dpa samples. It is also important to note that since the carbon contamination originated from the irradiation process, the level of contamination scales with irradiation time and dose. This means that for samples irradiated to less than 0.8 dpa, the carbon enrichment is not expected to play a significant role.
2310.01625
Monolithic Polarizing Circular Dielectric Gratings on Bulk Substrates for Improved Photon Collection from InAs Quantum Dots
III-V semiconductor quantum dots (QDs) are near-ideal and versatile single-photon sources. Because of the capacity for monolithic integration with photonic structures as well as optoelectronic and optomechanical systems, they are proving useful in an increasingly broad application space. Here, we develop monolithic circular dielectric gratings on bulk substrates -- as opposed to suspended or wafer-bonded substrates -- for greatly improved photon collection from InAs quantum dots. The structures utilize a unique two-tiered distributed Bragg reflector (DBR) structure for vertical electric field confinement over a broad angular range. Opposing ``openings" in the cavities induce strongly polarized QD luminescence without harming collection efficiencies. We describe how measured enhancements depend critically on the choice of collection optics. This is important to consider when evaluating the performance of any photonic structure that concentrates farfield emission intensity. Our cavity designs are useful for integrating QDs with other quantum systems that require bulk substrates, such as surface acoustic wave phonons.
Ryan A. DeCrescent, Zixuan Wang, Poolad Imany, Sae Woo Nam, Richard P. Mirin, Kevin L. Silverman
2023-10-02T20:38:31Z
http://arxiv.org/abs/2310.01625v2
Monolithic Polarizing Circular Dielectric Gratings on Bulk Substrates for Improved Photon Collection from InAs Quantum Dots ###### Abstract III-V semiconductor quantum dots (QDs) are near-ideal and versatile single-photon sources. Because of the capacity for monolithic integration with photonic structures as well as optoelectronic and optomechanical systems, they are proving useful in an increasingly broad application space. Here, we develop monolithic circular dielectric gratings on bulk substrates - as opposed to suspended or wafer-bonded substrates - for greatly improved photon collection from InAs quantum dots. The structures utilize a unique two-tiered distributed Bragg reflector (DBR) structure for vertical electric field confinement over a broad angular range. Opposing "openings" in the cavities induce strongly polarized QD luminescence without harming collection efficiencies. We describe how measured enhancements depend critically on the choice of collection optics. This is important to consider when evaluating the performance of any photonic structure that concentrates farfield emission intensity. Our cavity designs are useful for integrating QDs with other quantum systems that require bulk substrates, such as surface acoustic wave phonons. ## I Introduction III-V semiconductor quantum dots (QDs) are recognized as quintessential solid-state single-photon sources for quantum photonic technologies [1]. They emit on-demand indistinguishable single photons at gigahertz rates with nearly lifetime-limited spectral linewidths [2; 3; 4]. Their charge states can be deterministically controlled with simple semiconductor gate structures, and their resonance frequencies can be Stark tuned within the same device layout [5; 3]. These features -- combined with the possibility of monolithic integration -- offer tremendous opportunities for interfacing III-V QDs with other two-level systems and for incorporating them into larger hybrid systems and circuits such as optoelectronic or optomechanical systems [6; 7; 8; 9; 10; 11; 12]. One universal obstacle for the implementation of QD light sources is due to the relatively large refractive index mismatch between the host medium (e.g., GaAs) and vacuum. The majority of the photons generated in a bulk material experience total internal reflection at the semiconductor-vacuum interface, ultimately limiting photon collection efficiencies to \(\lesssim\)1% when using vertical collection and external optics. A wide variety of photonic structures have been developed to efficiently interface with QDs for both on-chip and free-space applications [13]. Some examples include photonic crystal waveguides [14; 15], ridge waveguides and ring resonators [16; 17], and microdisk resonators [18; 19], all of which are particularly useful for on-chip routing of photons to and from QDs. For free-space applications, micropillar cavities [20; 21], photonic crystal cavities [22], circular grating resonators [23; 24] and open tuneable microcavities based on distributed Bragg reflectors (DBRs) [25] have been demonstrated. Such architectures are often designed to optimize a specific metric, e.g., photon collection efficiency, strong exciton-photon coupling, optical coherence times, or total brightness. Though this is suitable for pure photonic applications, these structures often cannot be immediately incorporated into larger hybrid structures where the QD needs to interact well with another system. 
An example hybrid system that encounters this challenge is a microwave-to-optical transducer based on InAs QDs and surface acoustic wave (SAW) resonators [7; 8]. This technology requires optimized electrical, mechanical and optical structures to be co-located while minimally afflicting the other subsystems. Recent work has shown remarkable success, but poor optical collection from the QDs was a significant source of total end-to-end efficiency losses. Specifically, a bare GaAs surface is ideal for high-quality-factor SAW resonators, but leads to poor photon collection from the QD. On the other hand, most previously developed photonic structures for optimal photon collection will strongly scatter the SAW field, reducing mechanical quality factors, or change the mode shape completely. This hybrid system thus calls for a photonic structure monolithically incorporated into a bulk substrate while minimally perturbing the SAW strain field. The current work is largely motivated by this goal, but our cavity designs may be useful for any application where quantum emitters must be embedded in bulk substrates, such as integrating with bulk acoustic resonators [26] or similar vertical acoustic microcavities [27].

Our designs are based on circular dielectric gratings, or "bullseye cavities", which have been shown to greatly improve vertical extraction of single photons emitted from QDs [23; 24]. Previous work used suspended membranes or wafer-bonded III-V layers to vertically confine the optical fields so that the emitted photons readily interact with the radial bullseye structure [23; 24]. Here, we show how to achieve effective vertical field confinement by using a two-tiered DBR structure. This offers several potential advantages over suspending or wafer-bonding approaches, including fabrication ease and maintaining larger distances between the QD and etched surfaces. In order to make these structures compatible with SAW resonators, we open the optical cavities on two sides so that focused SAWs can propagate through them with minimal scattering. This opening also creates an optical anisotropy that leads to highly polarized luminescence while negligibly affecting optical performance for the cavity-polarized emission mode. We measure \(\approx\)100\(\times\) photon collection improvements from QDs in our bullseye cavities when compared to unstructured regions on the same substrates; calculations suggest that this corresponds to \(\approx\)30\(\times\) improvements compared to a traditional DBR structure. Motivated by our findings, we quantitatively describe the critical role that external collection optics play in measured collection enhancements (Appendix B). That concept is applicable to any photonic structure that concentrates far-field emission intensity and should be considered carefully when evaluating device performance.

## II Design and fabrication

Our devices (Fig. 1a) consist of a GaAs slab (thickness \(t\)) above two distinct DBR regions. The lower DBR consists of 22 periods of AlAs/GaAs and is designed to reflect normal-incidence light ("normal DBR"). The upper DBR consists of 2.5 periods of relatively thick AlAs/GaAs layers and is designed to reflect oblique-incidence light at angles around 63\({}^{\circ}\) ("oblique DBR"). InAs QDs are grown at the center of the upper GaAs slab. Circular grooves with depth \(d\) are etched into the resulting heterostructure, defining the optical bullseye cavity.
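The normal-DBR layer thicknesses in Table 1 follow from the standard quarter-wave condition. The sketch below is illustrative only: the refractive indices are assumed design values near 945 nm, chosen so that the quarter-wave rule reproduces the tabulated thicknesses.

```python
def quarter_wave(lam_nm, n):
    """Quarter-wave layer thickness for a normal-incidence DBR."""
    return lam_nm / (4.0 * n)

lam = 945.0                    # design wavelength (nm)
n_gaas, n_alas = 3.41, 2.92    # assumed design indices near 945 nm
print(f"GaAs: {quarter_wave(lam, n_gaas):.1f} nm")  # ~69.3 nm (cf. 69.2 nm, Table 1)
print(f"AlAs: {quarter_wave(lam, n_alas):.1f} nm")  # ~80.9 nm (cf. 81.0 nm, Table 1)
```

Note that with these assumed indices, light propagating at 63\({}^{\circ}\) in GaAs lies beyond the GaAs/AlAs critical angle (arcsin(2.92/3.41) \(\approx\) 59\({}^{\circ}\)), so the oblique-DBR layers do not follow a simple quarter-wave rule and their thicknesses come from numerical optimization.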
The normal DBR reflects light that would otherwise be lost into the bulk substrate, but is only effective within an angular range spanning approximately 20\({}^{\circ}\) around normal incidence (Appendix A). The upper oblique DBR is intended to emulate a slab waveguide so that light emitted at larger angles (55\({}^{\circ}\) to 70\({}^{\circ}\)) readily interacts with the circular grating. The basic radial geometry is defined by three parameters (Fig. 1b): the center radius (\(r\)), trench periodicity (\(\Lambda\)), and trench width (\(w\)). Design parameters are optimized by estimating device performance using commercial finite-difference time-domain software. Final design parameters are specified in Table 1 and in the text when relevant. Finally, we symmetrically open the cavity trenches such that opposing etched minor arcs span an angle \(\theta\)\(<\)180\({}^{\circ}\). Three partially enclosed cavities with cavity enclosure angles \(\theta\)=60\({}^{\circ}\), 90\({}^{\circ}\), and 120\({}^{\circ}\) are illustrated in Fig. 1c. In this geometry, \(y\)-oriented electric dipoles are expected to interact with the grating while \(x\)-oriented dipoles are expected to be only weakly affected. Fully enclosed cavities with \(\theta\)=180\({}^{\circ}\) are expected to show polarization-independent performance.

Samples are grown via molecular beam epitaxy and then coated with a sputtered SiO\({}_{2}\) hard mask. Circular grating trenches are defined by electron-beam lithography and subsequently etched via reactive-ion etching. The hard mask is then removed by hydrofluoric acid. Fig. 1d shows a cross-sectional scanning electron micrograph (SEM) of a fabricated calibration structure; distinct oblique DBR and normal DBR regions, as well as a single etched groove, are easily identified. Fig. 1e shows plan-view SEMs of four cavities with \(\theta\)=60\({}^{\circ}\), 90\({}^{\circ}\), 120\({}^{\circ}\), and 180\({}^{\circ}\) (corresponding to structures illustrated in Fig. 1c) during an intermediate fabrication step.

The optical bullseye cavities are designed to exhibit an (\(n\),\(l\))=(5,0) drumhead-like electromagnetic resonance (\(n\) and \(l\) are the radial and azimuthal quantum numbers of a circular resonator) at 945 nm. Numerically calculated electric field magnitude (\(|E|\)) profiles of this cavity resonance, excited by an \(x\)-oriented electric dipole, are illustrated in Figs. 2a,b for two different plane cuts. These calculations show that the circular grating and double-DBR structures generate substantial in-plane (Fig. 2a) and out-of-plane (Fig. 2b) field confinement. Farfield calculations (Fig. 2c) show that a majority of the optical power emitted into the vacuum above the device is contained within an angular range corresponding to a numerical aperture (NA) of 0.25. This directed emission is favorable when long-working-distance collection optics must be used, a scenario commonly encountered with optical cryostats. The cavities also theoretically provide modest Purcell emission rate enhancements of approximately 4 to 5 (Fig. 2d). Here, we use the Purcell spectrum primarily to identify and quantify the cavity resonance. We also quantify the polarizing properties of partially enclosed cavities by comparing the Purcell spectra for \(y\)- and \(x\)-oriented dipoles. These spectra indicate cavity resonances with a typical bandwidth of 10 nm.
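The 10 nm bandwidth and the 4-5 Purcell factor together set a rough scale for the cavity quality factor and effective mode volume. The sketch below assumes the ideal Purcell relation \(F_{p}=(3/4\pi^{2})(\lambda/n)^{3}(Q/V)\) with perfect dipole-cavity alignment; the index and the resulting mode volume are illustrative estimates, not values from our simulations.

```python
import numpy as np

lam_nm, dlam_nm = 945.0, 10.0   # resonance wavelength and bandwidth (Fig. 2d)
n = 3.41                         # assumed GaAs index near 945 nm
purcell = 4.5                    # mid-range of the calculated 4-5 enhancement

Q = lam_nm / dlam_nm                               # quality factor, ~95
v_eff = (3.0 / (4.0 * np.pi**2)) * Q / purcell     # mode volume in units of (lambda/n)^3
v_um3 = v_eff * (lam_nm * 1e-3 / n) ** 3           # converted to cubic micrometers
print(f"Q ~ {Q:.0f}; V ~ {v_eff:.1f} (lambda/n)^3 = {v_um3:.3f} um^3")
```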
For \(y\)-oriented dipoles (Fig. 2d; top panel), the Purcell enhancement for the \(\theta\)=90\({}^{\circ}\) partially enclosed cavity is reduced by only \(\approx\)25% with respect to the fully enclosed cavity (\(\theta\)=180\({}^{\circ}\)). In contrast, for \(x\)-oriented dipoles, the Purcell enhancement nearly vanishes for the \(\theta\)=90\({}^{\circ}\) cavity. This is intuitive when considering the radiation patterns for the respective dipole orientations; the 90\({}^{\circ}\) enclosed cavity scatters a majority of the \(y\)-oriented dipole's radiation field, but very little of the \(x\)-oriented dipole's field. In fact, this remains true even for off-center dipoles, and polarization-dependent photon collection enhancements are thus expected to be somewhat robust against QD positioning and to exist regardless of the Purcell effect.

Fig. 2e quantifies these effects by comparing calculated photon collection rates from our optimized bullseye cavities to five reference systems (illustrated on the right side and bottom of panel e). The reference systems are as follows: 1) a bare (bulk) GaAs substrate with no bullseye grating; 2) our optimized bullseye geometry _without_ the upper oblique DBR; 3) our optimized bullseye cavity geometry _without_ the bullseye trenches; 4) a conventional DBR structure comprising a 1-\(\lambda\)-thick GaAs slab on a normal DBR; 5) the same as 'Reference system 4' with additional bullseye trenches. In all cases, collected photons correspond to farfield power contained within an NA of 0.5. Total "rate enhancements" (Fig. 2e; solid markers) are derived by directly comparing the calculated farfield power between the optimized bullseye cavity and reference systems. We expect roughly 100\(\times\) and 10\(\times\) total rate enhancements when compared to bare GaAs (blue; 'Reference system 1') and a conventional DBR structure (red; 'Reference system 4'), respectively. Photon "collection enhancements" (Fig. 2e; open markers) -- arising purely from the redistribution of emitted photons due to coherent scattering -- are derived by normalizing the farfield power by the total radiated dipole power. For example, a single emitted photon from our device is approximately 20\(\times\) more likely to be collected when again compared to a bare GaAs surface (blue), and approximately 2\(\times\) more likely to be collected when compared to a conventional DBR structure (red). Importantly, our optimized structures are still expected to provide 5\(\times\) (2\(\times\)) higher photon rates (collection) than a conventional DBR structure even when adding a bullseye grating to that structure (purple; 'Reference system 5'). These results indicate that the collection improvement in our system largely originates from the circular gratings and to a lesser extent from the two-tiered DBR structure, although there is a synergy between these two components. In Appendix B, we describe how _experimentally_ observed collection enhancements depend on the collection NA and other details of the experimental apparatus.

Figure 1: (**a**) Cross-sectional schematic (\(x\)-\(z\) plane) of the device. Etched grooves (white regions), "normal DBR" (dark blue) and "oblique DBR" (light blue) regions are designated. Red lines illustrate how light emitted from a QD (red star) at various angles interacts with the structure. (**b**) In-plane (\(x\)-\(y\)) structure of the center of the device, illustrated to scale. Gray corresponds to etched regions. Several design parameters are designated in panels a and b. (**c**) Full in-plane structures of four devices, illustrated to scale, differing only by a cavity enclosure angle \(\theta\). (**d**) Cross-sectional SEM of the wafer structure. White regions: GaAs. Gray regions: AlAs. Distinct normal DBR and oblique DBR regions are designated. A single etched groove is apparent. QDs are grown at the center of the top GaAs slab (black dotted line). (**e**) Plan-view SEM images of four distinct devices, corresponding to the four devices in panel c. The 10 \(\mu\)m scale bar applies to panels e and c.

Table 1: Optimized geometrical parameters.

| Parameter | Value |
| --- | --- |
| Normal DBR layer thicknesses | 81.0 nm (AlAs) / 69.2 nm (GaAs) |
| Oblique DBR layer thicknesses | (AlAs) / 188.2 nm (GaAs) |
| GaAs slab thickness, \(t\) | 172 nm |
| Grating etch depth, \(d\) | 0.83\(t\) = 142.7 nm |
| Grating periodicity, \(\Lambda\) | 325 nm |
| Trench width, \(w\) | 0.21\(\Lambda\) = 68 nm |
| Cavity center radius, \(r\) | 2.025\(\Lambda\) = 658 nm |

## III Experimental device characterization

For initial experimental characterization, we fabricate the aforementioned devices on wafers grown with a relatively high QD density (approximately 10 QDs per \(\mu\)m\({}^{2}\)). We perform photoluminescence (PL) measurements using a home-built fiber-coupled confocal microscope around an optical cryostat with the sample held at a temperature of approximately 5 K. QDs are optically excited by an 827 nm (nonresonant) pump laser focused to a nearly diffraction-limited spot at the sample surface
QDs are optically excited by an 827 nm (nonresonant) pump laser focused to a nearly diffraction-limited spot at the sample surface Figure 1: (**a**) Cross-sectional schematic (\(x\)-\(z\) plane) of the device. Etched grooves (white regions), “normal DBR” (dark blue) and “oblique DBR” (light blue) regions are designated. Red lines illustrate how light emitted from a QD (red star) at various angles interacts with the structure. (**b**) In-plane (\(x\)-\(y\)) structure of the center of the device, illustrated to scale. Gray corresponds to etched regions. Several design parameters are designated in panels a and b. (**c**) Full in-plane structures of four devices, illustrated to scale, differing only by a cavity enclosure angle \(\theta\). (**d**) Cross-sectional SEM of the wafer structure. White regions: GaAs. Gray regions: AlAs. Distinct normal DBR and oblique DBR regions are designated. A single etched groove is apparent. QDs are grown at the center of the top GaAs slab (black dotted line). (**e**) Plan-view SEM images of four distinct devices, corresponding to the four devices in panel c. The 10 \(\mu\)m scale bar applies to panels e and c. \begin{table} \begin{tabular}{|c|c|c c|c|c|c|c|c|} \hline & Normal & \multicolumn{2}{|c|}{Oblique} & \multicolumn{2}{|c|}{} & \multicolumn{2}{|c|}{} & \multicolumn{2}{|c|}{} \\ & DBR & layer & DBR & layer & GaAs & slab & Grating & etch & Grating periodicity, \(\Lambda\) & Trench & Cavity center \\ Parameter & thicknesses & & thicknesses & & thickness, \(t\) & depth, \(d\) & & odiicity, \(\Lambda\) & width, \(w\) & radius, \(r\) \\ \hline & 81.0 & nm & (AlAs) & / & & & & & & \\ & (AlAs) / 69.2 & 188.2 & nm & & & & & & \\ Value & nm (GaAs) & (GaAs) & & 172 nm & 0.83\(t\)=142.7 nm & 325 nm & 0.21\(\Lambda\)=68 nm & 2.025\(\Lambda\)=58 nm \\ \hline \end{tabular} \end{table} Table 1: Optimized geometrical parameters. through an objective with a nominal NA of 0.7. (The general importance of the collection optics is discussed in Appendix B.) PL is collected by the same objective, then coupled into a single-mode optical fiber. Reflected pump light is rejected with spectral filters. Polarization-dependent PL spectra are recorded by transmitting the collected PL through a linear polarizer before being coupled into fiber and counted on a CCD spectrometer. Typical PL spectra are shown in Fig. 3a, recorded from inside a partially enclosed cavity (\(\theta\)=90\({}^{\circ}\), black filled spectrum) and from an unetched region immediately outside the cavity (blue, multiplied by 10). Spectra were recorded under identical pump and collection conditions, and correspond to \(y\)-polarized emission. At this QD density, approximately 10 to 15 QDs contribute to each recorded spectrum, yielding approximately 30 to 45 PL peaks spanning a wavelength range between 920 nm and 960 nm. (Each QD contributes 2 to 3 PL lines to each spectrum, originating from different charge states and exciton complexes.) Spectra recorded from inside the cavity show only a few (\(\approx\)3 to 6) intense PL lines with count rates approximately 20 to 50\(\times\) higher than typical peak values recorded from bare regions. That is, the cavities allow spectral isolation and improved collection of just 2 to 3 QDs, likely those located near the cavity's center. The enhanced peaks tend to lie within a 10 nm range around the cavity resonance, as verified by measuring each cavity's reflection spectrum (e.g., Fig. 3a; red filled region). 
This indicates that the improved collection indeed originates from a coherent scattering process associated with the designed cavity mode rather than random scattering from etched surfaces. The improved count rate agrees well with the calculated values illustrated in Fig. 2e (green markers).

We perform a similar comparison on cavities with enclosure angles \(\theta\)=60\({}^{\circ}\), 90\({}^{\circ}\), 120\({}^{\circ}\), and 180\({}^{\circ}\). Fig. 3b summarizes measured collection enhancements obtained from a variety of cavities with various values of \(\Lambda\), \(r\), and \(w\) (not specified) and \(\theta\) (horizontal axis). Due to the random nature of the brightness and position of each QD in the cavities, the estimated enhancement varies widely between cavities and shows no clear correlation with the enclosure angle \(\theta\). (Error bars in Fig. 3b represent variations expected from the random brightness of QDs in the ensemble.) Nonetheless, typical PL enhancements are estimated to be around 30 to 50\(\times\), with an upper estimate of approximately 140\(\times\) from several of the fully enclosed cavities. We note that the magnitudes of these experimentally observed enhancements depend on the details of the experimental apparatus and are expected to be smaller for high-NA collection objectives (Appendix B).

A remarkable result is that PL enhancements for partially enclosed cavities are comparable to those of complete circular cavities -- differing only by a factor of \(\approx\)2 -- when collecting \(y\)-polarized PL. In contrast, \(x\)-polarized spectra from partially enclosed cavities resemble spectra recorded from unetched regions, indicating that \(x\)-polarized collected photons weakly interact with the etched structure.

Figure 2: (**a,b**) Calculated field magnitude (\(\left|E\right|\)) in the (a) \(x\)-\(y\) plane and (b) \(y\)-\(z\) plane when driving a fully enclosed cavity with an \(x\)-oriented electric dipole on resonance at 945 nm. In a, edges of the etched regions are designated by white circles. In b, boundaries between the DBRs and GaAs slab regions are designated by white horizontal lines and etched regions by gray rectangles. (**c**) Farfield intensity \(\left|E\right|^{2}\) calculated in vacuum (above the device) under the same conditions as in panels a and b. Dotted white circles indicate increments of NA=0.25. (**d**) Purcell spectra calculated for \(y\)-oriented (top panel) and \(x\)-oriented (bottom panel; dashed curves) dipoles. Different colors correspond to different cavity enclosure angles \(\theta\) according to the legend. The system geometry (illustrated in each panel) is the same as in Fig. 1c. (**e**) Photon collection enhancements, relative to five different reference systems (illustrated at right and below), calculated for bullseye cavities. "Total rate" includes both Purcell and geometrical enhancements. "Scattering only" excludes changes due to the Purcell factor. (More complete explanations of these terms are provided in the main text.) In all cases, collected photons correspond to power contained in the farfield within an NA of 0.5.

Polarization characteristics of the cavities are summarized in Fig. 3c. In this analysis, photon counts from individual PL lines are evaluated as a function of polarization angle. For all partially enclosed cavities, the collected PL is a minimum for \(x\)-polarized collection (polarizer angle 90\({}^{\circ}\)) and maximum for \(y\)-polarized collection (polarizer angles 0\({}^{\circ}\) and 180\({}^{\circ}\)).
As a result, the polarization contrast for both \(\theta\)=60\({}^{\circ}\) (blue circles) and \(\theta\)=90\({}^{\circ}\) (orange squares) cavities is approximately 20:1. The polarization contrast decreases to approximately 5:1 for the \(\theta\)=120\({}^{\circ}\) (green up triangles) cavity. Fully enclosed cavities (red down triangles) show no systematic polarization dependence; variations across angles likely arise from variations in our apparatus' photon collection efficiencies as the polarizer is rotated.

Figure 3: (**a**) PL spectra recorded from QDs inside a \(\theta\)=90\({}^{\circ}\) cavity (black) and from a bare region immediately outside the same cavity (blue; multiplied by 10). The differential reflection spectrum \(|\Delta R|/R\) (light-red filled region; right axis) from the same \(\theta\)=90\({}^{\circ}\) cavity is also shown. (**b**) Estimated collection enhancement for a variety of cavities with different cavity enclosure angles \(\theta\) (horizontal axis). All spectra in panels a and b were recorded with \(y\) polarization. Enhancements are calculated by comparing the peak PL counts from a QD inside each cavity to characteristic peak PL counts from QDs immediately outside the cavity. Error bars represent uncertainties arising from the random brightness of each QD in the ensemble; specifically, they were derived by taking 10 distinct estimates for QD count rates outside the cavity within a 10 nm spectral range of the cavity's peak PL. (**c**) Experimental PL counts (open markers) of single QD emission lines as a function of collection polarization angle \(\phi\) for cavities with different enclosure angles \(\theta\) (specified in the legend). (One device of each \(\theta\) was measured and plotted.) Fits to a sinusoidal angular variation are shown by solid curves. Data for \(\theta\)=60\({}^{\circ}\), 90\({}^{\circ}\), and 120\({}^{\circ}\) cavities are normalized to the fit value at polarization angle \(\phi\)=0\({}^{\circ}\); data for the \(\theta\)=180\({}^{\circ}\) cavity is normalized independently.

## IV Discussion and Conclusions

We have detailed the design, fabrication, and optical characterization of circular dielectric gratings for improved photon collection from InAs QDs. As opposed to previous work, which used suspended membranes or wafer-bonded III-V layers for effective vertical field confinement over a broad angular range [23; 24], we designed monolithic structures using a unique two-tiered DBR structure. We experimentally observe up to 140\(\times\) photon collection enhancements when compared to unstructured regions on the same substrate. Based on numerical calculations, we thus anticipate approximately 30\(\times\) better photon collection rates when compared to optimized conventional DBR structures. These collection enhancements are engineered to be strongly polarization dependent by simply truncating the grating structure's angular extent around the QD. These anisotropic structures only weakly impact total collection enhancements.

III-V QDs are very sensitive to localized charges within a several-hundred-nanometer vicinity [28]. Consequently, for best performance, QDs should typically be kept away from interfaces where large surface defect densities are possible. One potential benefit of our monolithic bullseye gratings is that the nearest etched interfaces (namely, the bullseye trenches) are more than 650 nm away from bullseye-centered QDs. In contrast, low-quality interfaces may be as close as 90 nm in suspended membranes [23] and 150 nm in wafer-bonded structures [24]. Our devices are also immediately compatible with conventional QD electrostatic p-i-n gating methods [3; 25], and thus are well-suited for improving single-photon source efficiencies while retaining the desired low-noise characteristics of optimized III-V QDs.

## V Acknowledgements

This research was performed while R.D. held an NRC Research Associateship award at NIST.

## Appendix A Design logic of two-tiered DBR

Figure 4 shows calculated angle-dependent reflectance spectra from three distinct DBR structures. The top two images correspond to the lower ("Normal DBR") and upper ("Oblique DBR") regions in our fabricated structures, calculated with 20 DBR periods in each structure. The bottom image corresponds to the fabricated compound two-tiered DBR system with only 2.5 periods in the oblique DBR. For these calculations, an \(s\)-polarized plane wave is incident from a semi-infinite GaAs layer. The vertical white dashed line indicates the design wavelength, where high reflectance around both 0\({}^{\circ}\) and \(\approx\)63\({}^{\circ}\) is desired.

Figure 4: Calculated angle-dependent reflectance spectra from two distinct DBRs (top images) and the composite two-tiered DBR used in our fabricated devices (bottom image).
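Reflectance maps like those in Figure 4 can be computed with a standard transfer-matrix (characteristic-matrix) method. The sketch below is for \(s\)-polarization with dispersionless, assumed indices; the complex square root automatically handles layers beyond the critical angle.

```python
import numpy as np

def dbr_reflectance_s(wavelength_nm, theta_inc_deg, layers, n_in=3.41, n_out=3.41):
    """s-polarized reflectance of a multilayer via the characteristic-matrix method.

    layers: list of (refractive_index, thickness_nm); light is incident from a
    semi-infinite medium of index n_in and exits into a medium of index n_out.
    """
    k0 = 2.0 * np.pi / wavelength_nm
    kx = k0 * n_in * np.sin(np.deg2rad(theta_inc_deg))  # conserved in-plane wavevector

    def q(n):  # n*cos(theta) in a layer; complex sqrt covers evanescent (TIR) layers
        return np.lib.scimath.sqrt((n * k0) ** 2 - kx ** 2) / k0

    M = np.eye(2, dtype=complex)
    for n, d in layers:
        delta = k0 * q(n) * d  # phase thickness of the layer
        Mj = np.array([[np.cos(delta), 1j * np.sin(delta) / q(n)],
                       [1j * q(n) * np.sin(delta), np.cos(delta)]])
        M = M @ Mj
    qi, qo = q(n_in), q(n_out)
    num = qi * (M[0, 0] + M[0, 1] * qo) - (M[1, 0] + M[1, 1] * qo)
    den = qi * (M[0, 0] + M[0, 1] * qo) + (M[1, 0] + M[1, 1] * qo)
    return abs(num / den) ** 2

# Normal DBR from Table 1: 22 periods of 81.0 nm AlAs / 69.2 nm GaAs
normal_dbr = [(2.92, 81.0), (3.41, 69.2)] * 22
for lam in (850.0, 945.0, 1050.0):
    print(lam, dbr_reflectance_s(lam, 0.0, normal_dbr))  # peaks near the 945 nm design
```

Sweeping `wavelength_nm` and `theta_inc_deg` over a grid reproduces maps of the kind shown in Figure 4, under the stated index assumptions.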
## Appendix B Compatibility between photonic structures and collection optics

Our experimental setup uses a single-mode optical fiber to collect the QD's emission. End-to-end collection efficiencies thus depend on the first-lens collection efficiency and on proper fiber coupling. Generally, the optics needed for best performance when measuring emission from a photonic structure differ from those needed without the photonic structure. For this reason, evaluating the performance of certain photonic structures depends on the larger optical setup, and care should be taken when comparing results across systems.

To illustrate the effect, consider the simplified setup shown in Fig. 5a. Photons emitted from a point source ("emitter") are collected by an objective ("first lens") with numerical aperture NA and focused onto the tip of a single-mode optical fiber using a lens with focal length \(f\). Two cases are illustrated: (1) Without a photonic structure (light red region, solid red lines), the maximum angular range over which photons are collected is limited by the objective. At the back aperture, collimated emission has a beam radius \(r_{0}\). A suitably chosen fiber-coupling optic focuses this beam tightly to a diameter \(d_{0}\) to match the mode field diameter (MFD) of the single-mode collection fiber. (2) With a photonic structure (darker red region, dotted red lines), emission is concentrated into a smaller angular range with divergence angle \(\theta_{p}\). At the back aperture, the collimated beam thus has a smaller beam radius, \(r_{p}\)\(<\)\(r_{0}\). The same fiber-coupling optic focuses this beam to a larger diameter, \(d_{p}\)\(>\)\(d_{0}\), and fiber coupling suffers.

We first address the impact of the objective. Our experiments (summarized in Fig. 3) used an objective with an effective NA of approximately 0.25. (Two thick optical windows between the objective and sample perturb the confocal performance of the setup by affecting mode-matching between the collected photons and the single-mode collection fiber. The result is that our objective with nominal NA of 0.7 has an effective NA of approximately 0.25.) In this case, the bullseye grating, which concentrates farfield photons well within NA=0.25 (e.g., Fig. 2c), is clearly beneficial. For objectives with larger NAs, this benefit is expected to be reduced. Fig. 5b (black solid curve) quantifies how the first-lens collection enhancement depends on the objective's NA. Here, the calculated farfield intensity from an optimized bullseye grating is normalized to that from a bulk single-interface GaAs geometry ("Reference system 1"), both for the same NA. For NA\(\leq\)0.25, first-lens collection enhancements are between 30\(\times\) and 100\(\times\), similar to those observed in our measurements. Values calculated at NA=1.0 correspond to the ratio of _total_ photons emitted into air relative to the substrate. Importantly, the bullseye grating provides approximately 10\(\times\) better total photon emission into air. For comparison, calculations for a conventional single-DBR geometry ("Reference system 4") are also shown (gray dotted curve). Bullseye gratings outperform the conventional DBR by at least 2\(\times\) over all NAs, and by up to 10\(\times\) for the smallest NAs.

Fig. 5c quantifies how the end-to-end efficiency varies with the photonic structure's emission divergence angle \(\theta_{p}\). The fiber-coupling optic was selected to optimize collection based on the objective's nominal NA (0.7, corresponding to \(\theta\)=44\({}^{\circ}\); dashed gray curve). As the photonic structure concentrates light into a divergence angle \(\theta_{p}\), the first-lens collection efficiency (dotted gray curve) increases, but the fiber-coupling efficiency decreases.
The total collection efficiency (solid black curve) is a product of these two efficiencies and reaches a maximum around \(\theta_{p}\)=25\({}^{\circ}\) (corresponding to NA=0.24). That is, _total_ collection efficiencies can be further improved by choosing new optimal fiber-coupling optics appropriate for the narrower collected beam radius \(r_{p}\).

Figure 3: (**a**) PL spectra recorded from QDs inside a \(\theta\)=90\({}^{\circ}\) cavity (black) and from a bare region immediately outside the same cavity (blue; multiplied by 10). The differential reflection spectrum \(|\Delta R|/R\) (light-red filled region; right axis) from the same \(\theta\)=90\({}^{\circ}\) cavity is also shown. (**b**) Estimated collection enhancement for a variety of cavities with different cavity enclosure angles \(\theta\) (horizontal axis). All spectra in panels a and b were recorded with \(y\) polarization. Enhancements are calculated by comparing the peak PL counts from a QD inside each cavity to characteristic peak PL counts from QDs immediately outside the cavity. Error bars represent uncertainties arising from the random brightness of each QD in the ensemble; specifically, they were derived by taking 10 distinct estimates for QD count rates outside the cavity within a 10 nm spectral range of the cavity's peak PL. (**c**) Experimental PL counts (open markers) of single QD emission lines as a function of collection polarization angle \(\phi\) for cavities with different enclosure angles \(\theta\) (specified in the legend). (One device of each \(\theta\) was measured and plotted.) Fits to a sinusoidal angular variation are shown by solid curves. Data for \(\theta\)=60\({}^{\circ}\), 90\({}^{\circ}\), and 120\({}^{\circ}\) cavities are normalized to the fit value at polarization angle \(\phi\)=0\({}^{\circ}\); data for the \(\theta\)=180\({}^{\circ}\) cavity is normalized independently.

Figure 4: Calculated angle-dependent reflectance spectra from two distinct DBRs (top images) and the composite two-tiered DBR used in our fabricated devices (bottom image).
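The first-lens/fiber-coupling trade-off of Fig. 5c can be captured with a toy Gaussian-beam model, sketched in Python below. All numerical inputs (focal length, mode-field radius, the Gaussian angular-profile assumption) are illustrative values of ours, not taken from the paper, so the optimum this model finds need not coincide with the \(\theta_{p}\approx 25^{\circ}\) quoted above.

```python
import numpy as np

# Toy Gaussian-beam model of the trade-off in Fig. 5c. Assumptions (ours):
# Gaussian angular emission with 1/e^2 half-angle theta_p; ideal optics.
wavelength = 945e-9          # m, design wavelength from the main text
NA_obj = 0.7                 # nominal objective NA (theta ~ 44 deg)
f_obj = 4e-3                 # m, assumed objective focal length
w_fiber = 2.9e-6             # m, assumed fiber mode-field radius (MFD/2)
# fiber-coupling lens chosen so the focused spot matches the fiber mode when
# the collected beam fills the nominal NA (as described in the text):
f_fc = w_fiber * np.pi * (f_obj * NA_obj) / wavelength

def first_lens_efficiency(theta_p, NA=NA_obj):
    """Fraction of the Gaussian angular mode collected within the objective NA."""
    return 1.0 - np.exp(-2.0 * (NA / np.sin(theta_p)) ** 2)

def fiber_coupling_efficiency(theta_p):
    """Gaussian mode overlap of the focused spot with the fiber mode."""
    r_back = f_obj * np.sin(theta_p)                 # collimated beam radius
    w_spot = wavelength * f_fc / (np.pi * r_back)    # focused Gaussian waist
    return (2.0 * w_spot * w_fiber / (w_spot**2 + w_fiber**2)) ** 2

theta_p = np.radians(np.linspace(5.0, 44.0, 400))
total = first_lens_efficiency(theta_p) * fiber_coupling_efficiency(theta_p)
best = theta_p[np.argmax(total)]
print(f"total efficiency peaks near theta_p = {np.degrees(best):.1f} deg")
```

The product structure makes the qualitative conclusion apparent: concentrating emission below the objective's nominal NA raises first-lens collection but mismatches the fixed fiber-coupling optic, so the end-to-end optimum sits at an intermediate divergence angle.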
2303.16261
Conductance asymmetry in proximitized magnetic topological insulator junctions with Majorana modes
We theoretically discuss electronic transport via Majorana states in magnetic topological insulator-superconductor junctions with an asymmetric split of the applied bias voltage. We study normal-superconductor-normal (NSN) junctions made of narrow (wire-like) or wide (film-like) magnetic topological insulator slabs with a central proximitized superconducting sector. The occurrence of charge non-conserving Andreev processes entails a nonzero conductance related to an electric current flowing to ground from the proximitized sector of the NSN junction. We show that topologically-protected Majorana modes require an antisymmetry of this conductance with respect to the point of equally split bias voltage across the junction.
Daniele Di Miceli, Eduárd Zsurka, Julian Legendre, Kristof Moors, Thomas Schmidt, Llorenç Serra
2023-03-28T19:14:58Z
http://arxiv.org/abs/2303.16261v2
# Antisymmetric Breaking of Voltage Gauge Invariance due to Majorana States in Magnetic Topological Insulators

###### Abstract

**We theoretically discuss how Majorana bound states and Majorana chiral propagating states break the voltage gauge invariance of electric transport in specific ways. While the breaking of voltage gauge invariance can be generically related to Andreev processes at interfaces, an _antisymmetric_ conductance with respect to the point of equally split bias across a normal-superconductor-normal (NSN) junction is a more specific signal of topologically-protected Majorana modes. These electric signatures are discussed in NSN junctions made with narrow (wire-like) or wide (film-like) magnetic topological insulator slabs with a central proximitized superconducting sector.**

###### Contents

* 1 Introduction
* 2 Model Hamiltonian
* 3 Gauge-Invariance Breaking
* 4 Numerical Results
* 5 Conclusion
* A Derivation of the Differential Conductances
* B Role of an Interface Barrier

## 1 Introduction

Majorana modes in solid state physics are zero-energy quasiparticle excitations with the unusual property of being their own antiparticles [1, 2], which emerge in 1D and 2D topological superconductors (TSCs) at boundaries and in vortices [3]. These fascinating states fall into two classes: Majorana chiral propagating states (MCPSs) in two-dimensional superconducting phases [4, 5, 6], and zero-energy Majorana bound states (MBSs) in spinless \(p\)-wave superconducting chains [7]. The former are dispersive modes analogous to quantum anomalous Hall and quantum spin Hall edge states in superconducting materials [5, 6], while the latter are localized in-gap modes emerging at the ends of gapped phases of 1D topological superconducting wires. Both allow non-abelian braiding operations, which makes them promising for fault-tolerant topological quantum computing [8, 9, 10]. Magnetic topological insulators (MTIs) [11], i.e., 3D topological insulators (TIs) with topological surface states and ferromagnetic ordering, are outstanding candidates for the realization of such robust platforms for quantum computation, since in the presence of proximity coupling to an ordinary \(s\)-wave superconductor they realize different TSCs with either propagating or localized Majorana modes [12, 13, 14, 15]. Despite the growing interest in proximitized MTIs [16], the experimental detection of Majorana modes remains an open challenge [17, 18, 19].

In this paper, we highlight a characteristic feature of Majorana modes that can be used in their detection. Through theoretical analysis and numerical simulations, we find that both types of Majoranas can be detected in NSN junctions between normal (N) and proximitized (S) magnetic topological insulators when the bias between the two N sections is split asymmetrically with respect to the central S lead. Without Majorana modes, or any trivial Andreev bound states (ABSs), which can also be found in _non-topological_ 1D superconductors [20, 21], the proposed setup is gauge invariant, in the sense that the electric current flowing through the junction is independent of how the bias is split between the left and right terminals. Due to the emergence of Andreev processes at the interfaces of the junction, Majorana states and ABSs break the voltage gauge invariance, making the current in the two normal leads depend on the fraction of the bias that is applied to each side of the junction.
Under these conditions, charge conservation implies the existence of an electric current going to ground from the superconductor, defining a nonzero total differential conductance. In the presence of nontrivial MBSs or MCPSs, this differential conductance is _antisymmetric_ with respect to the splitting of the bias, since the probability of Andreev reflection and transmission is equal on both N sides. Conversely, a trivial ABS gives rise to a conductance that is not antisymmetric with respect to the bias split, due to different Andreev processes on the two sides of the junction. Observing how the total conductance varies with the bias split across the junction therefore provides a robust criterion to identify topologically-protected Majorana modes in MTIs. Similar criteria to detect MBSs at the ends of proximitized semiconducting wires have been discussed in recent works [22, 23, 24, 25, 26, 27], without, however, addressing gauge-invariance breaking as the main underlying principle.

## 2 Model Hamiltonian

To start, we consider the Hamiltonian of a 3D TI in the presence of ferromagnetic ordering. In the basis \(\phi_{k}=(c^{+}_{k\uparrow},c^{-}_{k\uparrow},c^{+}_{k\downarrow},c^{-}_{k\downarrow})^{T}\), where \(c^{\tau}_{k\sigma}\equiv c^{\tau}_{k\sigma}(y,z)\) annihilates an electron with longitudinal wave number \(k\equiv k_{x}\), spin \(\sigma=\uparrow,\downarrow\), and orbital index \(\tau=\pm\), the effective 3D Hamiltonian for magnetic TIs takes the following form [28, 29]

\[\mathcal{H}_{0}(\mathbf{k})=\epsilon(\mathbf{k})+M(\mathbf{k})\tau_{z}+A(\mathbf{k})\tau_{x}+\Lambda\sigma_{z}\,, \tag{1}\]

where

\[\begin{split}\epsilon(\mathbf{k})&=\mu-C_{\perp}\left(k_{x}^{2}+\hat{k}_{y}^{2}\right)-C_{z}\hat{k}_{z}^{2}\,,\\ M(\mathbf{k})&=M_{0}-M_{\perp}\left(k_{x}^{2}+\hat{k}_{y}^{2}\right)-M_{z}\hat{k}_{z}^{2}\,,\\ A(\mathbf{k})&=A_{\perp}\left(k_{x}\sigma_{x}+\sigma_{y}\hat{k}_{y}\right)+A_{z}\sigma_{z}\hat{k}_{z}\,.\end{split} \tag{2}\]

Here \(\mathbf{k}=(k_{x},\hat{k}_{y},\hat{k}_{z})\) and the transverse momentum operators are given by \(\hat{k}_{y(z)}=-i\frac{\partial}{\partial y(z)}\). The Pauli matrices \(\sigma_{i}\) (\(\tau_{i}\)) with \(i=x,y,z\) act on the spin (orbital) subspace, the magnetization, assumed along \(z\), is represented by the Zeeman term \(\Lambda\sigma_{z}\), and \(\mu\) is the chemical potential. This Hamiltonian is suitable to describe TIs such as Bi\({}_{2}\)Se\({}_{3}\), Bi\({}_{2}\)Te\({}_{3}\) and Sb\({}_{2}\)Te\({}_{3}\) through a proper choice of parameters [28]. In our simulations, we used the values given in the "toy model" in Ref. [30]. When placed in proximity to an ordinary \(s\)-wave superconductor, the system is described by the Bogoliubov-de Gennes (BdG) Hamiltonian [31]

\[\mathcal{H}_{\text{BdG}}(\mathbf{k})=\begin{pmatrix}\mathcal{H}_{0}(\mathbf{k})&\Delta^{\star}\\ \Delta&-\sigma_{y}\mathcal{H}_{0}^{\ast}(-\mathbf{k})\sigma_{y}\end{pmatrix}\,, \tag{3}\]

expressed in the basis [32]

\[\Phi_{k}=\left(c^{+}_{k\uparrow},c^{-}_{k\uparrow},c^{+}_{k\downarrow},c^{-}_{k\downarrow},-c^{+\dagger}_{-k\downarrow},-c^{-\dagger}_{-k\downarrow},c^{+\dagger}_{-k\uparrow},c^{-\dagger}_{-k\uparrow}\right)^{T}\,, \tag{4}\]

where \(\Delta\equiv\Delta(y,z)\) is the superconducting pairing field induced by proximity. In the following, we fixed the thickness of the slab to \(d=4\) nm and considered a wire-like geometry with width \(L_{y}=20\) nm and a film-like one with \(L_{y}=160\) nm.
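As a concrete illustration, the following minimal Python sketch assembles \(\mathcal{H}_{0}(\mathbf{k})\) and \(\mathcal{H}_{\text{BdG}}(\mathbf{k})\) for a single bulk momentum, treating \(\hat{k}_{y}\) and \(\hat{k}_{z}\) as plane-wave numbers (i.e., ignoring confinement and the \(z\)-dependence of \(\Delta\)). The parameter values are illustrative placeholders, not the "toy model" values of Ref. [30].

```python
import numpy as np

# Bulk BdG Hamiltonian of Eqs. (1)-(3) at a single momentum (plane-wave limit).
s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

mu, Lam, Delta = 0.0, 30.0, 10.0            # meV (placeholders)
C_perp = C_z = 0.0                          # placeholder band-bending terms
M0, M_perp, M_z = 300.0, 150.0, 150.0       # meV, meV nm^2 (placeholders)
A_perp = A_z = 300.0                        # meV nm (placeholders)

def h0(kx, ky, kz):
    """Normal-state Hamiltonian, Eq. (1); basis (+up, -up, +down, -down)."""
    eps = mu - C_perp * (kx**2 + ky**2) - C_z * kz**2
    M = M0 - M_perp * (kx**2 + ky**2) - M_z * kz**2
    A_spin = A_perp * (kx * sx + ky * sy) + A_z * kz * sz   # spin part of A(k)
    return (eps * np.kron(s0, s0) + M * np.kron(s0, sz)     # tau_z on orbitals
            + np.kron(A_spin, sx) + Lam * np.kron(sz, s0))  # A(k) tau_x + Zeeman

def h_bdg(kx, ky, kz):
    """BdG Hamiltonian, Eq. (3), in the rotated hole basis of Eq. (4)."""
    Sy = np.kron(sy, s0)                    # sigma_y acting on spin only
    h_e = h0(kx, ky, kz)
    h_h = -Sy @ h0(-kx, -ky, -kz).conj() @ Sy
    D = Delta * np.eye(4, dtype=complex)
    return np.block([[h_e, D.conj().T], [D, h_h]])

E = np.linalg.eigvalsh(h_bdg(0.05, 0.0, 0.0))
print(np.round(np.sort(E), 2))   # particle-hole symmetric: eigenvalues in +/-E pairs
```

The actual calculations in the paper discretize \(\hat{k}_{y}\) and \(\hat{k}_{z}\) over the slab cross-section; the bulk sketch above only verifies the matrix structure and the particle-hole symmetry of the spectrum.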
We assumed a constant pairing field along \(y\), and modelled the proximity coupling on the upper surface of the magnetic TI through a Heaviside step function \(\theta\) of the form

\[\Delta(y,z)=\Delta\;\theta\left(z-d/2\right)\,. \tag{5}\]

All the numerical results below are obtained with \(\Delta=5\) meV for the wire (\(L_{y}=20\) nm) and \(\Delta=10\) meV for the film (\(L_{y}=160\) nm). Although these values are somewhat unrealistic, qualitatively similar results can be obtained with smaller pairings and rescaled systems. Indeed, the decay length of Majorana edge states is inversely proportional to the pairing potential, \(\xi\propto 1/\Delta\), meaning that, in order to guarantee well-separated MBSs at the ends of a MTI wire, a smaller pairing can be compensated by a greater length \(L_{x}\) as long as the ratio \(\xi/L_{x}\) is unchanged. Similarly, a smaller gap requires the thin film to be wider to maintain the ratio \(\xi/L_{y}\) unchanged and ensure decoupled edge modes in the phase with MCPSs. In this way, a larger pairing allows us to reduce the computational effort using smaller systems and, at the same time, gives us the opportunity to enhance the energy gap for MBSs and increase the width of the region with MCPSs. A similar scaling has already been proposed in graphene [33].

Figure 1: (a) \(k=0\) low-energy states and (b)-(c) full energy spectrum for an infinitely-long thin film with \(\mu=0\). The black dashed line in (a) stands for the bulk gap, while the band structures are computed with (b) \(\Lambda=15\) meV and (c) \(\Lambda=30\) meV. (d) \(k=0\) energy gap and (e)-(f) band structures for an infinite wire with \(\mu=10\) meV. The band structures are obtained with (e) \(\Lambda=10\) meV and (f) \(\Lambda=30\) meV. Red and blue colours represent electron and hole modes, respectively; purple is a superposition of the two states.

Since the Hamiltonian for a 2D system with particle-hole symmetry belongs to the D symmetry class, the different phases in an effective two-dimensional slab can be labelled by an integer topological invariant \(\mathcal{N}\) [34, 35]. A chiral TSC with odd Chern invariant and unpaired Majorana modes can be realized in a 2D thin film from the quantum anomalous Hall (QAH) phase, which is routinely achieved in MTIs [36, 37, 38]. For \(\mu=0\), the proximity pairing induces a new intermediate region between the \(\mathcal{N}=0\) trivial superconductor and the \(\mathcal{N}=2\) QAH state [39]. In this intermediate region, the MTI thin film realizes a \(\mathcal{N}=1\) TSC with _unpaired_ chiral Majorana modes on the edges [12, 13]. The occurrence of this chiral TSC region can be observed in Fig. 1(a), which displays the \(k=0\) low-energy eigenvalues of Eq. (3) solved in the thin film geometry as a function of \(\Lambda\). The black dashed line represents the bulk energy gap, showing the existence of two distinct critical points where topological phase transitions occur with the emergence of gapless edge modes within the bulk gap. Figs. 1(b)-(c) display the full band structures of these phases: the first shows a single crossing of unpaired MCPSs, which characterizes the \(\mathcal{N}=1\) chiral topological superconductor; the second corresponds to the BdG quasiparticle spectrum of a \(\mathcal{N}=2\) proximitized QAH system. While a thin film with \(\mu=0\) can realize different 2D topological superconducting states, a narrow MTI wire with \(\mu\neq 0\) can be exploited to achieve quasi one-dimensional TSCs with end-localized MBSs [14].
Since the effective BdG Hamiltonian of a QAH/SC heterostructure in a 1D geometry falls into the BDI symmetry class [15], the topological properties of the system are characterized by an integer invariant [34, 35], which discriminates between trivial \(N_{BDI}=0\) and topological \(N_{BDI}=1\) states with unpaired Majorana edge modes in finite-length systems. In principle, even higher topological states with \(N_{BDI}\geq 2\) could be realized, but they are not relevant for our discussion, as in the presence of disorder a pair of co-located MBSs (i.e., at the same end of the ribbon) couples into a trivial fermion [7]. The \(k=0\) gap for a \(\mu=10\) meV infinitely-long wire is shown in Fig. 1(d), where the closing and reopening of the energy gap signals a phase transition between trivial and topological states. The full band structures of the two distinct phases are depicted in Figs. 1(e)-(f). It can be noted in Fig. 1(f) that the normal order of the energy bands around \(k=0\) is inverted, indicating a nontrivial topology of the bulk and, as a consequence of the bulk-boundary correspondence, the presence of topologically protected MBSs at the ends of finite-length wires [40].

## 3 Gauge-Invariance Breaking

We propose to detect Majorana modes in topological superconductors through an NSN junction with an _asymmetric_ bias drop between the left and right leads. The experimental setup is schematically shown in Fig. 2(a). The electric current \(I_{i}\) in the normal terminal \(i=1,2\) of a double junction can be computed through the Lambert formalism as [41, 42]

\[I_{i}=\int_{0}^{+\infty}dE\sum_{a}s_{a}\left[J_{i}^{a}(E)-K_{i}^{a}(E)\right], \tag{6}\]

where

\[\begin{split}J_{i}^{a}(E)&=\frac{e}{h}N_{i}^{a}(E)f_{i}^{a}(E),\\ K_{i}^{a}(E)&=\frac{e}{h}\sum_{jb}P_{ij}^{ab}(E)f_{j}^{b}(E)\,,\end{split} \tag{7}\]

are the in-going and out-going fluxes of quasiparticles of type \(a=e,h\) (\(s_{e,h}=\pm\)) in the normal lead \(i=1,2\). The electric current is expressed in terms of the number of propagating modes in each terminal \(N_{i}^{a}\), the Fermi distribution function \(f_{i}^{a}\), and the transmission amplitudes \(P_{ij}^{ab}\). The latter indicate the probability of transmission of a quasiparticle \(b\) in lead \(j\) to a quasiparticle \(a\) in lead \(i\), such that both normal and Andreev reflection (\(i=j\)) and transmission (\(i\neq j\)) are taken into account. We define a differential conductance in the normal terminals of the double junction as

\[G_{i}=\frac{\partial I_{i}}{\partial V}\,, \tag{8}\]

where \(V=V_{1}-V_{2}\) is the _total_ bias across the junction and \(V_{i}\) is the voltage drop between the \(i\)-th lead and the central sector. Here, we assume an asymmetric bias \(V_{1}=\alpha V\) and \(V_{2}=-\beta V\) with \(0\leq\alpha\leq 1\) and \(\alpha+\beta=1\), such that the total bias between the left and right terminals is fixed.
With this assumption, we can derive the following expressions for the conductance in the normal leads

\[\begin{split}G_{1}(V)&=\alpha\frac{e^{2}}{h}\left[N_{1}^{e}(\alpha V)-P_{11}^{ee}(\alpha V)+P_{11}^{he}(\alpha V)\right]\\ &\quad+\beta\frac{e^{2}}{h}\left[P_{12}^{hh}(\beta V)-P_{12}^{eh}(\beta V)\right]\,,\end{split} \tag{9}\]

\[\begin{split}G_{2}(V)&=\beta\frac{e^{2}}{h}\left[-N_{2}^{h}(\beta V)-P_{22}^{eh}(\beta V)+P_{22}^{hh}(\beta V)\right]\\ &\quad+\alpha\frac{e^{2}}{h}\left[P_{21}^{he}(\alpha V)-P_{21}^{ee}(\alpha V)\right]\,.\end{split} \tag{10}\]

Without Andreev processes, the system conductance is gauge invariant, in the sense that the current flowing across the junction depends only on the total voltage drop \(V\) and is not affected by the way the bias is distributed over the left and right leads. The emergence of Majorana edge states or trivial ABSs in the superconductor _breaks_ such an invariance, so that the currents in the two terminals acquire different intensities proportional to the fractions \(\alpha,\beta\) of the total bias applied on the two sides of the junction.

Figure 2: (a) Experimental setup proposed for the detection of topologically-protected Majorana modes. The potential \(V_{0}=-e\mu\) is set by the back-gate electrode. (b) Sketches of the transmission processes at the interfaces of the junction for different superconducting phases in the central sector. Red and blue colors stand for electron and hole currents.

Depending on the bulk topology of the proximitized sector, different electrical responses are expected, as sketched in Fig. 2(b) for all the possible cases. At low bias, the trivial superconductor has an insulating behaviour, preventing electrical current flow between the normal terminals of the system. Conversely, the \(\mathcal{N}=2\) topological superconductor allows perfect transmission of electrons and holes across the junction, due to topologically-protected fermionic edge modes within the bulk gap. More complex scattering processes are allowed in the presence of Majorana states: in a \(N_{BDI}=1\) proximitized wire with MBSs, electrons and holes undergo perfect Andreev reflection [43, 44], while equal probabilities of normal reflection, Andreev reflection, normal transmission and Andreev transmission characterize the interaction with unpaired MCPSs in a \(\mathcal{N}=1\) topological superconducting thin film [39]. Similarly to MBSs, a trivial ABS allows Andreev reflection on only one interface of the double junction. By choosing appropriate values of the transmission probabilities \(P_{ij}^{ab}\) to recover the scenarios above, the conductance on the two terminals of the junction can be easily computed from Eqs. (9)-(10). The values are summarized in Table 1 for the different phases in the proximitized MTI and for a trivial ABS on the left side of the junction in a wire geometry. It can be noted that _only_ in the presence of Majorana modes or ABSs, i.e., when gauge invariance is broken, does the conductance on the two terminals depend on \(\alpha\), with a total conductance \(G_{t}=G_{1}+G_{2}\neq 0\). However, the behaviour of the total conductance \(G_{t}\) as a function of \(\alpha\) distinguishes topologically-protected Majorana states from trivial ABSs. For the topological case, the total conductance \(G_{t}\) is _antisymmetric_ around \(\alpha=0.5\) (equal bias splitting), while for the trivial one, the total conductance is antisymmetric around \(\alpha=0\) (completely unbalanced bias splitting).
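These low-bias values can be checked directly by plugging the ideal transmission probabilities quoted above into Eqs. (9)-(10). The short Python sketch below does exactly this (in units of \(e^{2}/h\)); the probability tables are transcribed from the text, not computed from a scattering model.

```python
# Evaluate Eqs. (9)-(10) at low bias for the phases of Table 1, using the
# ideal probabilities stated in the text. Units of e^2/h throughout.
# Tuple order: (N1e, P11ee, P11he, P12hh, P12eh, N2h, P22eh, P22hh, P21he, P21ee)
phases = {
    "trivial SC":    (1, 1, 0, 0, 0,  1, 0, 1, 0, 0),
    "N_BDI=1 (MBS)": (1, 0, 1, 0, 0,  1, 1, 0, 0, 0),
    "N=1 (MCPS)":    (1, 0.25, 0.25, 0.25, 0.25,  1, 0.25, 0.25, 0.25, 0.25),
    "N=2 (QAH)":     (1, 0, 0, 1, 0,  1, 0, 0, 0, 1),
    "ABS (left)":    (1, 0, 1, 0, 0,  1, 0, 1, 0, 0),
}

def conductances(p, alpha):
    N1e, P11ee, P11he, P12hh, P12eh, N2h, P22eh, P22hh, P21he, P21ee = p
    beta = 1.0 - alpha
    G1 = alpha * (N1e - P11ee + P11he) + beta * (P12hh - P12eh)   # Eq. (9)
    G2 = beta * (-N2h - P22eh + P22hh) + alpha * (P21he - P21ee)  # Eq. (10)
    return G1, G2, G1 + G2

for name, p in phases.items():
    for a in (0.25, 0.5, 0.75):
        G1, G2, Gt = conductances(p, a)
        print(f"{name:13s} alpha={a:.2f}: G1={G1:+.2f} G2={G2:+.2f} Gt={Gt:+.2f}")

# Antisymmetry about alpha = 0.5 holds for Majorana phases but not the ABS:
for name in ("N_BDI=1 (MBS)", "N=1 (MCPS)", "ABS (left)"):
    s = conductances(phases[name], 0.3)[2] + conductances(phases[name], 0.7)[2]
    print(f"{name:13s} Gt(0.3)+Gt(0.7) = {s:+.2f}")   # zero iff antisymmetric
```

Running the sketch reproduces the entries of Table 1, including the signature \(G_{t}=2(2\alpha-1)e^{2}/h\) for MBSs, \((2\alpha-1)e^{2}/h\) for MCPSs, and the non-antisymmetric \(2\alpha e^{2}/h\) for the trivial ABS.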
Therefore, the gauge-invariance breaking is generically related to the emergence of Andreev processes in the superconductor, while the antisymmetric behaviour of \(G_{t}\) around \(\alpha=0.5\) is a more specific signal of MBSs or MCPSs. A total conductance \(G_{t}\neq 0\) is related to the existence of an electric current going to ground from the superconductor, which ensures charge conservation when gauge invariance is broken and the current entering on the left differs from the one flowing out on the right. Since this current can be easily detected through electric measurements, the detection of an antisymmetric conductance \(G_{t}\) with respect to the bias split parameter \(\alpha\) gives a robust criterion to identify Majorana quasiparticles in MTI slabs.

We point out that, in our model, the only current flowing through the \(s\)-wave superconductor is due to Cooper pairs originating in the proximitized MTI. Indeed, at low bias, no higher modes can be activated, preventing unintended transmissions between the terminals of the junction. We also neglected scattering processes occurring between the normal leads and the \(s\)-wave superconductor, since the presence of a physical interface between the two distinct materials makes them less favourable than the scattering events which take place within the MTI slab.

\begin{table}
\begin{tabular}{c|c|c|c} \hline \hline S-Phase & \(G_{1}\) & \(G_{2}\) & \(G_{t}\) \\ \hline \(\mathcal{N}=0\) & \(0\) & \(0\) & \(0\) \\ \(\mathcal{N}=1\) & \(\alpha e^{2}/h\) & \((\alpha-1)e^{2}/h\) & \((2\alpha-1)e^{2}/h\) \\ \(\mathcal{N}=2\) & \(e^{2}/h\) & \(-e^{2}/h\) & \(0\) \\ \(N_{BDI}=0\) & \(0\) & \(0\) & \(0\) \\ \(N_{BDI}=1\) & \(2\alpha e^{2}/h\) & \(2(\alpha-1)e^{2}/h\) & \(2(2\alpha-1)e^{2}/h\) \\ ABS & \(2\alpha e^{2}/h\) & \(0\) & \(2\alpha e^{2}/h\) \\ \hline \hline \end{tabular}
\end{table} Table 1: Low-bias conductances \(G_{1},G_{2}\) and \(G_{t}=G_{1}+G_{2}\) computed through Eqs. (9)-(10). The first column summarizes all the possible phases in the central S lead of the junction. In the last row, we considered a trivial ABS on the left side of the junction. The conductances are given making explicit \(\beta=1-\alpha\).

## 4 Numerical Results

We computed numerically the conductances \(G_{1},G_{2}\) and the sum \(G_{t}=G_{1}+G_{2}\) in the NSN junction with a magnetic TI in the wire and thin film configurations, reproducing the physics of 1D and 2D topological superconductors, respectively. Fig. 3(a) displays \(G_{t}\) versus the magnetization of the MTI for an asymmetric bias \(\alpha=0.25\). In the thin film geometry, a region with \(G_{t}\neq 0\) distinguishes the \(\mathcal{N}=1\) chiral TSC from the \(\mathcal{N}=0\) trivial superconductor and the \(\mathcal{N}=2\) QAH phase, where \(G_{t}=0\) denotes that the electric currents in the two terminals are equal and independent of the bias split. For the chosen \(\alpha\), the conductance for \(\mathcal{N}=1\) was expected to be quantized at \(G_{t}=-e^{2}/2h\), which is roughly the value reached in the nontrivial region with MCPSs. Similarly, in the wire geometry a plateau \(G_{t}=-e^{2}/h\) characterizes the \(N_{BDI}=1\) nontrivial phase, while the \(N_{BDI}=0\) gapped superconductor exhibits \(G_{t}=0\).

Fig. 3(b) shows \(G_{t}\) as a function of the split parameter \(\alpha\) for all the nontrivial phases realized by the proximitized MTI. A trivial ABS is also simulated in the wire geometry through a \(N_{BDI}=1\) superconductor with an insulating barrier on the right side of the junction. Here, the values of the conductance are in perfect agreement with our predictions in Table 1: the QAH system displays \(G_{t}=0\) independently of \(\alpha\), while Majorana states and trivial ABSs break the gauge invariance, resulting in the \(\alpha\) dependence of the total conductance \(G_{t}\). However, the different symmetry around \(\alpha=0.5\) distinguishes the trivial and topological cases. We emphasize here that a symmetrically distributed bias \(\alpha=0.5\) is _never_ able to discriminate the phases with Majorana modes from the trivial superconductor and the proximitized QAH state, since \(G_{t}=0\) for all these distinct phases.

The lower panels in Fig. 3 display the probabilities \(P^{ab}_{ij}\) for all the scattering processes occurring on the left interface of the junction, i.e., normal reflection \(R_{N}\), Andreev reflection \(R_{A}\), normal transmission \(T_{N}\) and Andreev transmission \(T_{A}\). The figures correspond to a \(N_{BDI}=1\) topological superconducting wire with unpaired MBSs in (c) and a \(\mathcal{N}=1\) TSC thin film with MCPSs in (d). The former shows that, when the junction is sufficiently long to prevent transmission by evanescent modes, the injected electron undergoes perfect Andreev reflection, \(R_{A}=1\), in the presence of MBSs. The latter indicates that, due to MCPSs, normal and Andreev transmission and reflection occur with equal probability, \(R_{N}=R_{A}=T_{N}=T_{A}=0.25\). Oscillations around the expected plateaus are due to the interference between back-scattered chiral modes from the two interfaces of the double junction, resulting in an interferometric behaviour. Such oscillations are expected for a transverse width \(L_{y}\) in the micrometer range or smaller [45]. For all the above results, the total bias across the junction is \(V=0.1\) meV, which is always smaller than the bulk energy gap of the different phases. Such a low bias ensures that no bulk modes are activated in the proximitized sector and that the injected electrons and holes interact in the condensate only with topologically-protected Majorana boundary states. Indeed, the proposed framework does not hold at higher bias, which implies interaction with multiple active modes in the superconductor.

Figure 3: Conductance \(G_{t}\) computed in the NSN junction as a function of (a) magnetization and (b) bias split. In the left panel \(\alpha=0.25\) and the blue (green) line stands for the wire (thin film) geometry. The ABS is modelled adding a barrier on the right side of the junction with the proximitized sector in the \(N_{BDI}=1\) phase. (c)-(d) Transmission amplitudes \(P_{ij}^{ab}\) for the left terminal of the junction as a function of the length \(L_{x}\) of the central proximitized sector. The probabilities are computed for a (c) \(N_{BDI}=1\) superconductor with MBSs and a (d) \(\mathcal{N}=1\) superconductor with MCPSs. In all the pictures, the total bias \(V=0.1\) meV is chosen within the bulk gap. The values of \(\Lambda\) in panels (b), (c) and (d) are chosen according to Fig. 1 to reproduce the different TSCs.

## 5 Conclusion

In summary, we showed that the breaking of voltage gauge invariance provides a robust criterion to detect Majorana excitations in magnetic topological insulator slabs with a central proximitized section. A characteristic dependence on how the total bias is split, namely the antisymmetry of the conductance with respect to the point of equal bias splitting (\(\alpha=0.5\)), is obtained in the presence of Majorana modes.
Detailed model calculations for a narrow (wire-like) and a wide (film-like) slab, hosting MBSs and MCPSs respectively, are shown to support our conclusions. Our results will be useful for the experimental detection of the elusive Majorana quasiparticles, contributing to the progress towards a solid platform for quantum computing.

## Acknowledgements

Funding information: This project is supported by the QuantERA grant MAGMA, by the National Research Fund, Luxembourg, under the grant INTER/QUANTERA21/16447820/MAGMA, by the German Research Foundation under grant 491798118, by MCIN/AEI/10.13039/501100011033 under project PCI2022-132927, and by the European Union NextGenerationEU/PRTR. L.S. acknowledges support from Grants No. PID2020-117347GB-I00, funded by MCIN/AEI/10.13039/501100011033, and No. PDR2020-12, funded by GOIB. K.M. acknowledges the financial support by the Bavarian Ministry of Economic Affairs, Regional Development and Energy within Bavaria's High-Tech Agenda Project "Bausteine für das Quantencomputing auf Basis topologischer Materialien mit experimentellen und theoretischen Ansätzen" (grant allocation no. 07 02/686 58/1/21 1/22 2/23).

## Appendix A Derivation of the Differential Conductances

We derive here the equations for the conductances \(G_{1}\) and \(G_{2}\) given in the main article. The nonlocal differential conductance in the NSN double junction between normal and proximitized MTIs can be defined as

\[G_{i}(V)=\frac{\partial I_{i}}{\partial V}\,, \tag{11}\]

where \(I_{i}\) is the current in the \(i\)-th terminal and \(V\) is the total voltage drop across the junction. The electric current can be computed with Lambert's [41] formalism as

\[I_{i}=\int_{0}^{+\infty}dE\sum_{a}s_{a}\left[J_{i}^{a}(E)-K_{i}^{a}(E)\right], \tag{12}\]

where

\[J_{i}^{a}(E)=\frac{e}{h}N_{i}^{a}(E)f_{i}^{a}(E)\,,\qquad K_{i}^{a}(E)=\frac{e}{h}\sum_{jb}P_{ij}^{ab}(E)f_{j}^{b}(E)\,, \tag{13}\]

are the in-going and out-going fluxes of quasiparticles of type \(a,b=e,h\) in the lead \(i=1,2\). Here the electric current is expressed in terms of the number of propagating modes in each terminal \(N_{i}^{a}(E)\), the Fermi distribution function

\[f_{i}^{a}(E)=\begin{cases}\frac{1}{1+e^{(E-eV_{i})/k_{B}T}}&\text{if }a=e\,,\\ \frac{1}{1+e^{(E+eV_{i})/k_{B}T}}&\text{if }a=h\,,\end{cases} \tag{14}\]

and the transmission amplitudes \(P_{ij}^{ab}(E)\), indicating the probability of transmission of a quasiparticle \(b\) in lead \(j\) to a quasiparticle \(a\) in lead \(i\). The bias drop applied between the \(i\)-th normal lead and the central superconducting sector is represented by \(V_{i}\). By making the sum over the quasiparticle types explicit and using Eq. (13), the electric current can be rewritten as

\[I_{i}=\int_{0}^{+\infty}dE\left[J_{i}^{e}-K_{i}^{e}-J_{i}^{h}+K_{i}^{h}\right]=\frac{e}{h}\int_{0}^{+\infty}dE\left[N_{i}^{e}f_{i}^{e}-\sum_{jb}P_{ij}^{eb}f_{j}^{b}-N_{i}^{h}f_{i}^{h}+\sum_{jb}P_{ij}^{hb}f_{j}^{b}\right]=\frac{e}{h}\int_{0}^{+\infty}dE\left[N_{i}^{e}f_{i}^{e}-\sum_{j}\left(P_{ij}^{ee}f_{j}^{e}+P_{ij}^{eh}f_{j}^{h}\right)-N_{i}^{h}f_{i}^{h}+\sum_{j}\left(P_{ij}^{he}f_{j}^{e}+P_{ij}^{hh}f_{j}^{h}\right)\right]\,, \tag{15}\]

where for simplicity we omitted the energy dependence.
Expanding the sum over the terminals \(j=1,2\), we can write the electric current in the two leads of the junction as

\[\begin{split}I_{1}&=\frac{e}{h}\int_{0}^{+\infty}dE\bigg{\{}\left[N_{1}^{e}-P_{11}^{ee}+P_{11}^{he}\right]f_{1}^{e}+\left[-N_{1}^{h}-P_{11}^{eh}+P_{11}^{hh}\right]f_{1}^{h}\\ &\qquad+\left[P_{12}^{he}-P_{12}^{ee}\right]f_{2}^{e}+\left[P_{12}^{hh}-P_{12}^{eh}\right]f_{2}^{h}\bigg{\}}\,,\end{split} \tag{16}\]

and

\[\begin{split}I_{2}&=\frac{e}{h}\int_{0}^{+\infty}dE\bigg{\{}\left[N_{2}^{e}-P_{22}^{ee}+P_{22}^{he}\right]f_{2}^{e}+\left[-N_{2}^{h}-P_{22}^{eh}+P_{22}^{hh}\right]f_{2}^{h}\\ &\qquad+\left[P_{21}^{he}-P_{21}^{ee}\right]f_{1}^{e}+\left[P_{21}^{hh}-P_{21}^{eh}\right]f_{1}^{h}\bigg{\}}\,.\end{split} \tag{17}\]

We assume that the bias is asymmetrically distributed as \(V_{1}=\alpha V\) and \(V_{2}=-\beta V\) with \(0\leq\alpha\leq 1\) and \(\beta=1-\alpha\), such that the total voltage drop across the junction is fixed, \(V_{1}-V_{2}=V\), and we recall that in the zero-temperature limit the Fermi functions reduce to step functions,

\[\begin{split}f_{1}^{e}&=\frac{1}{1+e^{(E-e\alpha V)/k_{B}T}}\xrightarrow[T\to 0]{}\Theta(\alpha eV-E)\,,\\ f_{1}^{h}&=\frac{1}{1+e^{(E+e\alpha V)/k_{B}T}}\xrightarrow[T\to 0]{}\Theta(-\alpha eV-E)\,,\end{split} \tag{18}\]

for the left terminal of the junction and

\[\begin{split}f_{2}^{e}&=\frac{1}{1+e^{(E+e\beta V)/k_{B}T}}\xrightarrow[T\to 0]{}\Theta(-\beta eV-E)\,,\\ f_{2}^{h}&=\frac{1}{1+e^{(E-e\beta V)/k_{B}T}}\xrightarrow[T\to 0]{}\Theta(\beta eV-E)\,,\end{split} \tag{19}\]

for the right one. For \(V>0\) and \(E>0\), the \(f_{1}^{h}\) and \(f_{2}^{e}\) terms therefore vanish, and the expressions of the currents in the two terminals simplify to

\[\begin{split}I_{1}&=\frac{e}{h}\int_{0}^{+\infty}dE\bigg{\{}\big{[}N_{1}^{e}-P_{11}^{ee}+P_{11}^{he}\big{]}\Theta(\alpha eV-E)+\left[P_{12}^{hh}-P_{12}^{eh}\right]\Theta(\beta eV-E)\bigg{\}}\\ &=\frac{e}{h}\left\{\int_{0}^{\alpha eV}dE\big{[}N_{1}^{e}-P_{11}^{ee}+P_{11}^{he}\big{]}+\int_{0}^{\beta eV}dE\big{[}P_{12}^{hh}-P_{12}^{eh}\big{]}\right\}\,,\end{split} \tag{20}\]

and

\[\begin{split}I_{2}&=\frac{e}{h}\int_{0}^{+\infty}dE\bigg{\{}\big{[}-N_{2}^{h}-P_{22}^{eh}+P_{22}^{hh}\big{]}\Theta(\beta eV-E)+\left[P_{21}^{he}-P_{21}^{ee}\right]\Theta(\alpha eV-E)\bigg{\}}\\ &=\frac{e}{h}\left\{\int_{0}^{\beta eV}dE\big{[}-N_{2}^{h}-P_{22}^{eh}+P_{22}^{hh}\big{]}+\int_{0}^{\alpha eV}dE\big{[}P_{21}^{he}-P_{21}^{ee}\big{]}\right\}\,,\end{split} \tag{21}\]

and the differential conductances can be computed as the derivatives of Eqs. (20)-(21) with respect to the total bias \(V\) across the junction, leading to

\[G_{1}(V)=\frac{\partial I_{1}}{\partial V}=\alpha\frac{e^{2}}{h}\left[N_{1}^{e}(\alpha V)-P_{11}^{ee}(\alpha V)+P_{11}^{he}(\alpha V)\right]+\beta\frac{e^{2}}{h}\left[P_{12}^{hh}(\beta V)-P_{12}^{eh}(\beta V)\right]\,, \tag{22}\]

and

\[G_{2}(V)=\frac{\partial I_{2}}{\partial V}=\beta\frac{e^{2}}{h}\left[-N_{2}^{h}(\beta V)-P_{22}^{eh}(\beta V)+P_{22}^{hh}(\beta V)\right]+\alpha\frac{e^{2}}{h}\left[P_{21}^{he}(\alpha V)-P_{21}^{ee}(\alpha V)\right]. \tag{23}\]

The left-terminal conductance, Eq. (22), is given by the number of injected electrons \(N_{1}^{e}\), the normal \(P_{11}^{ee}\) and Andreev \(P_{11}^{he}\) reflection amplitudes for electrons injected in lead 1, and the normal \(P_{12}^{hh}\) and Andreev \(P_{12}^{eh}\) transmission amplitudes for holes injected in lead 2. Similarly, the right-terminal conductance, Eq. (23), is given by the number of injected holes \(N_{2}^{h}\), the normal \(P_{22}^{hh}\) and Andreev \(P_{22}^{eh}\) reflection amplitudes for holes injected in lead 2, and the normal \(P_{21}^{ee}\) and Andreev \(P_{21}^{he}\) transmission amplitudes for electrons injected in lead 1. The numbers of injected quasiparticles \(N_{i}^{a}\) and the values of the transmission amplitudes \(P_{ij}^{ab}\) in the low-bias scenario described in the main article are given in Tabs. 2-3 for all the topological phases of the superconducting sector. Tab. 4 summarizes the corresponding values of the conductances \(G_{1},G_{2}\) and their sum \(G_{t}=G_{1}+G_{2}\).

\begin{table}
\begin{tabular}{l|c|c|c|c|c|c} \hline \hline S-Phase & \(N_{1}^{e}\) & \(P_{11}^{ee}\) & \(P_{11}^{he}\) & \(P_{12}^{hh}\) & \(P_{12}^{eh}\) & \(G_{1}\) \\ \hline \(N_{BDI}=0\) & \(1\) & \(1\) & \(0\) & \(0\) & \(0\) & \(0\) \\ \(N_{BDI}=1\) (MBS) & \(1\) & \(0\) & \(1\) & \(0\) & \(0\) & \(2\alpha e^{2}/h\) \\ \(\mathcal{N}=0\) & \(1\) & \(1\) & \(0\) & \(0\) & \(0\) & \(0\) \\ \(\mathcal{N}=1\) (MCPS) & \(1\) & \(0.25\) & \(0.25\) & \(0.25\) & \(0.25\) & \(\alpha e^{2}/h\) \\ \(\mathcal{N}=2\) (QAH) & \(1\) & \(0\) & \(0\) & \(1\) & \(0\) & \(e^{2}/h\) \\ \hline \hline \end{tabular}
\end{table} Table 2: Transmission amplitudes and number of electronic modes required to compute the conductance \(G_{1}\) through Eq. (22). The values are given for all the possible topological phases which can be found in the central superconducting sector of the NSN junction.

\begin{table}
\begin{tabular}{l|c|c|c|c|c|c} \hline \hline S-Phase & \(N_{2}^{h}\) & \(P_{22}^{eh}\) & \(P_{22}^{hh}\) & \(P_{21}^{he}\) & \(P_{21}^{ee}\) & \(G_{2}\) \\ \hline \(N_{BDI}=0\) & \(1\) & \(0\) & \(1\) & \(0\) & \(0\) & \(0\) \\ \(N_{BDI}=1\) (MBS) & \(1\) & \(1\) & \(0\) & \(0\) & \(0\) & \(2(\alpha-1)e^{2}/h\) \\ \(\mathcal{N}=0\) & \(1\) & \(0\) & \(1\) & \(0\) & \(0\) & \(0\) \\ \(\mathcal{N}=1\) (MCPS) & \(1\) & \(0.25\) & \(0.25\) & \(0.25\) & \(0.25\) & \((\alpha-1)e^{2}/h\) \\ \(\mathcal{N}=2\) (QAH) & \(1\) & \(0\) & \(0\) & \(0\) & \(1\) & \(-e^{2}/h\) \\ \hline \hline \end{tabular}
\end{table} Table 3: Transmission amplitudes and number of hole modes required to compute the conductance \(G_{2}\) through Eq. (23). The values are given for all the possible topological phases which can be found in the central superconducting sector of the NSN junction. The conductances are given making explicit \(\beta=1-\alpha\).

\begin{table}
\begin{tabular}{l|c|c|c} \hline \hline S-Phase & \(G_{1}\) & \(G_{2}\) & \(G_{t}\) \\ \hline \(N_{BDI}=0\) & \(0\) & \(0\) & \(0\) \\ \(N_{BDI}=1\) (MBS) & \(2\alpha e^{2}/h\) & \(2(\alpha-1)e^{2}/h\) & \(2(2\alpha-1)e^{2}/h\) \\ \(\mathcal{N}=0\) & \(0\) & \(0\) & \(0\) \\ \(\mathcal{N}=1\) (MCPS) & \(\alpha e^{2}/h\) & \((\alpha-1)e^{2}/h\) & \((2\alpha-1)e^{2}/h\) \\ \(\mathcal{N}=2\) (QAH) & \(e^{2}/h\) & \(-e^{2}/h\) & \(0\) \\ \hline \hline \end{tabular}
\end{table} Table 4: Conductances \(G_{1},G_{2}\) and their sum \(G_{t}=G_{1}+G_{2}\) computed through Eqs. (22)-(23) using the transmission probabilities given in Tabs. 2-3. The sum of the conductances on the two terminals is non-zero only in the presence of topologically-protected Majorana modes. Furthermore, the value of \(G_{t}\) discriminates between end-localized MBSs and dispersive MCPSs.

## Appendix B Role of an Interface Barrier

We consider in this section the role of an interface barrier between the central proximitized (S) sector and the right (N) lead. That is, an NSN'N structure where N' represents a slab of a normal MTI material without any propagating modes.
The presence of N' breaks the left-right symmetry with respect to the central sector and, depending on the barrier transparency, affects the electrical connection to the right side. A small barrier length mimics some interface disorder, while a large barrier length corresponds to complete electrical insulation. We show here that the presence of a barrier does _not_ change our conclusions about the breaking of the gauge invariance. Indeed, although the values of the conductances \(G_{1}\) and \(G_{2}\) may change, the total conductance keeps its meaning, with \(G_{t}\neq 0\) as long as Andreev processes take place in the junction.

The different cases can be clearly understood when a completely insulating barrier is introduced, for instance, on the right side of the system: as the right lead is electrically disconnected from the proximitized MTI, \(G_{2}=0\) regardless of the topological phase realized in the proximitized sector. In a \(N_{BDI}=1\) superconducting wire, perfect Andreev reflection occurs on the left interface of the junction due to the interaction with the MBS. In a \(\mathcal{N}=1\) TSC film, the electron is completely reflected, since the transmission to the right side is prevented by the barrier; normal and Andreev processes take place with equal probability. In an analogous way, in a \(\mathcal{N}=2\) TSC, the electrons are perfectly reflected from the barrier, and no Andreev processes take place in the junction. The conductances \(G_{1}\) and \(G_{2}\) can be easily computed through Eqs. (9)-(10). Their values, together with the transmission amplitudes for the left interface of the junction, are summarized in Tab. 5.

\begin{table}
\begin{tabular}{l|c|c|c|c|c||c|c|c} \hline \hline S-Phase & \(N_{1}^{e}\) & \(P_{11}^{ee}\) & \(P_{11}^{he}\) & \(P_{12}^{hh}\) & \(P_{12}^{eh}\) & \(G_{1}\) & \(G_{2}\) & \(G_{t}\) \\ \hline \(N_{BDI}=1\) (MBS) & 1 & 0 & 1 & 0 & 0 & \(2\alpha e^{2}/h\) & 0 & \(2\alpha e^{2}/h\) \\ \(\mathcal{N}=1\) (MCPS) & 1 & 0.5 & 0.5 & 0 & 0 & \(\alpha e^{2}/h\) & 0 & \(\alpha e^{2}/h\) \\ \(\mathcal{N}=2\) (QAH) & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ \hline \hline \end{tabular}
\end{table} Table 5: Transmission amplitudes for the scattering processes on the left interface, and conductances \(G_{1},G_{2}\) and \(G_{t}\), assuming an insulating barrier on the right side of the system. The values are given for all the topologically nontrivial cases.

A numerical simulation of the conductance in the NSN'N junction with \(\alpha=0.25\) is shown in Fig. 4 for (a) a wire geometry with a proximitized sector in the \(N_{BDI}=1\) state and (b) a film geometry with a \(\mathcal{N}=1\) TSC in the central sector. We focus on the dependence on \(L_{x}\), the length of the intermediate barrier N'. For a completely transparent barrier (\(L_{x}\approx 0\)), the total conductance for \(\alpha=0.25\) is \(G_{t}=-e^{2}/h\) in the presence of MBSs and \(G_{t}=-e^{2}/2h\) in the presence of MCPSs. An increasingly opaque barrier (\(L_{x}\to\infty\)) changes these values, keeping \(G_{t}\neq 0\) as long as Andreev processes occur in the proximitized MTI. For the case of MBSs, a barrier with \(L_{x}\gtrsim 50\,\mathrm{nm}\) is long enough to prevent the electric transmission to the right side, leading to \(G_{t}=e^{2}/2h\). Remarkably, Fig. 4 shows a very different decay length along \(x\) for MCPSs. Indeed, a larger barrier, \(L_{x}\gtrsim 5\,\mathrm{\mu m}\), is required to completely prevent the electric transmission in the presence of MCPSs, changing the total conductance to \(G_{t}=e^{2}/4h\).
Both limiting values are in agreement with Tab. 5 for the selected bias split parameter \(\alpha=0.25\). Focusing on the short barrier limit, which represents interface disorder effects, Fig. 4 suggests that the antisymmetric breaking of gauge invariance is robust for barriers with \(L_{x}\lesssim 5\) nm for the MBS and \(L_{x}\lesssim 0.5\,\mathrm{\mu m}\) for the MCPS, since \(G_{t}\) is almost unaffected by the barrier in these cases.
2307.04392
FODVid: Flow-guided Object Discovery in Videos
Segmentation of objects in a video is challenging due to the nuances such as motion blurring, parallax, occlusions, changes in illumination, etc. Instead of addressing these nuances separately, we focus on building a generalizable solution that avoids overfitting to the individual intricacies. Such a solution would also help us save enormous resources involved in human annotation of video corpora. To solve Video Object Segmentation (VOS) in an unsupervised setting, we propose a new pipeline (FODVid) based on the idea of guiding segmentation outputs using flow-guided graph-cut and temporal consistency. Basically, we design a segmentation model incorporating intra-frame appearance and flow similarities, and inter-frame temporal continuation of the objects under consideration. We perform an extensive experimental analysis of our straightforward methodology on the standard DAVIS16 video benchmark. Though simple, our approach produces results comparable (within a range of ~2 mIoU) to the existing top approaches in unsupervised VOS. The simplicity and effectiveness of our technique opens up new avenues for research in the video domain.
Silky Singh, Shripad Deshmukh, Mausoom Sarkar, Rishabh Jain, Mayur Hemani, Balaji Krishnamurthy
2023-07-10T07:55:42Z
http://arxiv.org/abs/2307.04392v1
# FODVid: Flow-guided Object Discovery in Videos

###### Abstract

Segmentation of objects in a video is challenging due to nuances such as motion blurring, parallax, occlusions, changes in illumination, etc. Instead of addressing these nuances separately, we focus on building a generalizable solution that avoids overfitting to the individual intricacies. Such a solution would also help us save the enormous resources involved in human annotation of video corpora. To solve Video Object Segmentation (VOS) in an unsupervised setting, we propose a new pipeline (**FODVid**) based on the idea of guiding segmentation outputs using flow-guided graph-cut and temporal consistency. Basically, we design a segmentation model incorporating intra-frame appearance and flow similarities, and inter-frame temporal continuation of the objects under consideration. We perform an extensive experimental analysis of our straightforward methodology on the standard DAVIS16 video benchmark. Though simple, our approach produces results comparable (within a range of \(\sim 2\) mIoU) to the existing top approaches in unsupervised VOS. The simplicity and effectiveness of our technique opens up new avenues for research in the video domain.

## 1 Introduction

Object segmentation, in its various forms, is a widely studied problem in computer vision [19]. The classic task finds critical applications across multiple domains, such as autonomous driving, augmented reality, human-computer interaction, video summarization, etc. The deep neural networks employed for this purpose are typically trained on large annotated datasets created through enormous human effort over several months of focused work. Moreover, training a segmentation model on one such dataset does not guarantee transferability to real-world data, since dataset-specific considerations in model design overfit the model to a particular use case. These issues call attention to developing generalizable solutions that can work with minimal human supervision.

To alleviate some of the problems with supervised segmentation, methods that work on weaker forms of human supervision were proposed. These methods function with weak supervision provided through scribbles [15, 30, 31, 33, 39], clicks [1], image-level tags [22, 31, 39], or even bounding boxes [5]. Further, semi-supervised segmentation techniques [5, 7, 9, 22, 25] were proposed that attempt segmentation in a setting where only a fraction of the image datasets are human-labelled. Nonetheless, both weakly-supervised and semi-supervised techniques still rely on substantial human supervision, directly or indirectly. Therefore, in the present work, we focus on segmenting objects in a video without relying on any form of external supervision.

The aim of Video Object Segmentation (VOS) is to localize the most salient object(s) in a given video frame [26]. In the literature [14, 21, 26, 43, 37], VOS is generally re-framed as a foreground-background separation problem. We propose an end-to-end pipeline as follows: for a given video, we encode its frames and their optical flows (in RGB format) using a self-supervised ViT like DINO [3]. The image and flow features are then used in a linear combination to form a similarity-based adjacency matrix between the frame patches. By performing graph-cut on this adjacency matrix, we obtain a preliminary set of segmentation masks for all the frames in a video. These masks are then used as pseudo-ground truths to train a segmentation network.
We devise a loss schedule that alternates between the graph-cut mask of the current frame and that of a nearby frame to enforce temporal consistency while training the segmentation network (refer to Sec. 2.2).

We summarize the contributions of this work below:

1. We present a new pipeline (FODVid) for unsupervised Video Object Segmentation (VOS), utilising both the appearance and motion information contained in videos.
2. We demonstrate the importance of perceptual motion cues for object discovery. In particular, we employ the Gestalt principle, "things that move together, belong together", to enrich frame patch similarity.
3. Our methodology is simple to implement and produces results comparable to existing top unsupervised VOS approaches. We achieve an mIoU score of **78.71** on the standard DAVIS16 benchmark. Additionally, the proposed temporal refinement provides an improvement of as much as **+9.88 mIoU** on certain video sequences.

## 2 Methodology

Our approach involves guiding segmentation through 1) graph-cut and 2) temporal warping via optical flow (refer to Fig. 1 for our proposed FODVid pipeline).

Figure 1: **FODVid: Our proposed pipeline for unsupervised video object segmentation.** An image and its corresponding optical flow RGB are featurized using DINO [3], and graph-cut is performed to produce a set of preliminary object segmentation masks. These masks are then used as pseudo ground-truths to train a segmentation network by enforcing temporal consistency between nearby frames.

### Graph-cut

Consider a video frame \(f\) whose objects we wish to segment out. We start by creating a fully-connected graph \(G=(V,E)\), where \(V\) denotes the set of vertices obtained by dividing \(f\) into square patches of size \(p_{s}\times p_{s}\), and \(E\) denotes the set of edges, such that each edge weight quantifies the similarity between the connected vertices (in our case, image patches). Formally, the adjacency matrix \(W\) underlying \(G\) is made of \(w_{ij}=S(v_{i},v_{j})\), where \(S(\cdot)\) denotes the similarity measure between two given vertices (patches). When compared to standalone images, video frames are special as they track information about a set of objects continuously in time. Since the main aim of VOS is to segment out such objects, we wish to incorporate perceptual nuances in the similarity measure \(S\). Based on the Gestalt principle of common fate, image patches having similar flow direction and magnitude most likely belong to the same object. We utilise this complementary motion information while defining \(W\). Formally, the overall similarity score between two patches from the same frame is defined as a linear combination of the similarity between standard patch embeddings (obtained using the DINO encoder [3]) and that between DINO embeddings of the RGB optical flow at the respective patches. In mathematical notation,

\[S(v_{i},v_{j})=\alpha\!\cdot\!S^{\prime}(\phi(v_{i}),\phi(v_{j}))\!+\!(1\!-\!\alpha)\!\cdot\!S^{\prime}(\phi(\psi(v_{i})),\phi(\psi(v_{j})))\]

where \(\alpha\in[0,1]\), \(\phi(\cdot)\) denotes the DINO encoder, \(\psi(\cdot)\) denotes the RGB-optical flow estimator, i.e., the model computing optical flow in 3-channel RGB image format, and \(S^{\prime}(\cdot)\) is the cosine similarity function, given by \(S^{\prime}(\vec{x},\vec{y})=\frac{\vec{x}\cdot\vec{y}}{\|\vec{x}\|_{2}\,\|\vec{y}\|_{2}}\). We obtain \(W\) using the \(S\) defined above. Further, in order to minimize the variance, we normalize the \(w_{ij}\)'s by thresholding them:

\[w_{ij}\leftarrow\begin{cases}1,&\text{if }w_{ij}\geq\tau\\ \epsilon,&\text{otherwise}\end{cases}\]

where \(\tau\) denotes the weight threshold hyper-parameter, and the value of \(\epsilon\) is set to \(10^{-5}(\neq 0)\) to ensure the full connectedness of \(G\).
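As an illustration, the following minimal Python sketch builds the thresholded adjacency matrix and extracts a foreground/background bipartition from random stand-in features (in place of actual DINO embeddings). We take the eigenvector associated with the second-smallest eigenvalue of the graph Laplacian \(D-W\), the standard spectral bipartition; using the Laplacian for the spectral step is our assumption, and the hyper-parameter values shown are illustrative.

```python
import numpy as np

# Stand-ins for per-patch DINO features of the frame and of its RGB flow.
rng = np.random.default_rng(0)
n_rows, n_cols, dim = 32, 64, 384          # e.g. 256x512 frame, 8x8 patches
feat_img = rng.normal(size=(n_rows * n_cols, dim))
feat_flow = rng.normal(size=(n_rows * n_cols, dim))
alpha, tau, eps = 0.6, 0.2, 1e-5           # illustrative hyper-parameters

def cosine_sim(F):
    """Pairwise cosine similarity S' between rows of F."""
    Fn = F / np.linalg.norm(F, axis=1, keepdims=True)
    return Fn @ Fn.T

# Linear combination of appearance and flow similarities (the S above)
W = alpha * cosine_sim(feat_img) + (1 - alpha) * cosine_sim(feat_flow)
W = np.where(W >= tau, 1.0, eps)           # edge thresholding, keeps G connected

# Spectral bipartition: second-smallest eigenvector, thresholded at its mean
L = np.diag(W.sum(axis=1)) - W
eigvals, eigvecs = np.linalg.eigh(L)
fiedler = eigvecs[:, 1]
mask = (fiedler > fiedler.mean()).reshape(n_rows, n_cols)  # preliminary fg/bg
```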
The solution to graph-cut is well studied in the literature [28, 35, 36]. We compute the eigenvector corresponding to the second-smallest eigenvalue of the matrix \(W\), and threshold it at its average value to create a bipartition of \(G\), with the two partitions representing foreground and background. We use these binary masks as pseudo-ground truths for training an encoder-decoder style segmentation network.

### Temporal warping via optical flow

The graph-cut approach described in the previous section relies only on information from the single frame under consideration. As a side note, we did not observe any gains in the performance of the segmentation network from direct distillation using graph-cut masks, which motivated us to devise this video-level temporal refinement scheme. Moreover, the graph-cut masks produced in Sec. 2.1 are noisy: parts of background objects are identified as foreground, or vice versa. As such, we intuit that the extra information needed to discern foreground from background can come from nearby frames in the temporal neighborhood of the current frame.

Let the frame under consideration be \(f_{1}\) and the graph-cut mask obtained per the procedure described in Sec. 2.1 be \(m_{1}\). We sample a frame \(f_{2}\) in the {-2, -1, +1, +2} temporal neighbourhood of \(f_{1}\). Let \(m_{2}\) denote the graph-cut mask for \(f_{2}\), and let the prediction of the segmentation network for \(f_{1}\) be \(\hat{m_{1}}\). We design a loss schedule for training the segmentation network such that, for 50% of the time, we use the segmentation loss between \(m_{1}\) and \(\hat{m_{1}}\). For the remaining 50% of the time, we temporally warp \(\hat{m_{1}}\) using the optical flow estimated between \(f_{1}\) and \(f_{2}\) to obtain a segmentation mask prediction for \(f_{2}\), and take the segmentation loss between this warped mask and \(m_{2}\). Under the assumption that all pseudo-ground truths are equally noisy, we give equal consideration to both branches (note the threshold of 0.5 for \(p\) in Alg. 1). The algorithmic description is provided in Alg. 1.

```
for epoch in {1, 2, ..., N} do
    p ~ U(0, 1)                       // sample from uniform(0, 1)
    if p < 0.5 then
        L = || m̂1 - m1 ||_1          // graph-cut guidance
    else
        L = || warp(m̂1) - m2 ||_1    // enforcing temporal consistency
    update weights based on the computed L
```

**Algorithm 1** Loss Schedule
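A PyTorch-style sketch of one training step under this schedule is shown below. `seg_net` and the flow input are placeholders, and the backward-warping convention via `grid_sample` is our assumption for the warp operator in Alg. 1 (the flow-direction convention may differ from the authors' implementation).

```python
import torch
import torch.nn.functional as F

def flow_warp(mask, flow):
    """Backward-warp a (B,1,H,W) mask with a (B,2,H,W) flow field (x,y order)."""
    B, _, H, W = mask.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().to(mask.device)   # (2,H,W)
    coords = grid.unsqueeze(0) + flow                             # sample coords
    coords[:, 0] = 2.0 * coords[:, 0] / (W - 1) - 1.0             # normalize x
    coords[:, 1] = 2.0 * coords[:, 1] / (H - 1) - 1.0             # normalize y
    return F.grid_sample(mask, coords.permute(0, 2, 3, 1), align_corners=True)

def train_step(seg_net, f1, m1, m2, flow_12):
    """One step of Alg. 1: f1 is a frame, m1/m2 graph-cut pseudo-masks."""
    m1_hat = seg_net(f1)                        # predicted mask for f1
    if torch.rand(1).item() < 0.5:
        loss = (m1_hat - m1).abs().mean()       # graph-cut guidance
    else:
        warped = flow_warp(m1_hat, flow_12)     # enforce temporal consistency
        loss = (warped - m2).abs().mean()
    return loss
```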
## 3 Experiments and Results

### Experimental Setup

We train the segmentation network using 4 NVIDIA A100 80GB GPUs for 200 epochs with a batch size of 16. We employ the Adam optimizer [11] with a learning rate of \(10^{-4}\) and momentum terms set to \(\beta_{1}=0.9\) and \(\beta_{2}=0.999\). The training is performed at an image resolution of \(256\times 512\). We borrow the decoder architecture from SPADE [24] and keep DINO [3] as the encoder. We utilise optical flow obtained using pre-trained ARFlow [17] in the graph-cut stage (Sec. 2.1), and pre-trained GMFlow [38] for warping the segmentation masks in the second step (Sec. 2.2).

### Inference Setup

One of the key advantages of our method is that the resultant segmentation network does not require optical flow as input, unlike many VOS approaches. The segmentation network, while trained on videos, can work solely on images. As a result, we can infer the object segmentation for all the frames in a video independently.

### Results

Table 1 depicts the comparison between our method and the existing unsupervised VOS approaches. Further, in Tables 2 and 3, we depict the ablations on the DINO architecture and the important hyper-parameters \(\tau\) (used for edge thresholding) and \(\alpha\) (used for similarity combination). Moreover, we perform an ablation on the flow estimator used in the graph-cut experiments and present its results in Table 4. We also show the qualitative results of our method on examples from the DAVIS16 dataset in Figure 2. Additionally, we demonstrate the qualitative improvements observed because of the temporal refinement in Figure 3.

Figure 2: **Qualitative results of our method on the DAVIS16 dataset**. Top row: RGB video frame; middle row: our prediction; bottom row: ground truth. The last column shows a failure case where only a part of the occluded object is identified.
2302.01434
Inference in Non-stationary High-Dimensional VARs
In this paper we construct an inferential procedure for Granger causality in high-dimensional non-stationary vector autoregressive (VAR) models. Our method does not require knowledge of the order of integration of the time series under consideration. We augment the VAR with at least as many lags as the suspected maximum order of integration, an approach which has been proven to be robust against the presence of unit roots in low dimensions. We prove that we can restrict the augmentation to only the variables of interest for the testing, thereby making the approach suitable for high dimensions. We combine this lag augmentation with a post-double-selection procedure in which a set of initial penalized regressions is performed to select the relevant variables for both the Granger causing and caused variables. We then establish uniform asymptotic normality of a second-stage regression involving only the selected variables. Finite sample simulations show good performance; an application to investigate the (predictive) causes and effects of economic uncertainty illustrates the need to allow for unknown orders of integration.
Alain Hecq, Luca Margaritella, Stephan Smeekes
2023-02-02T21:56:36Z
http://arxiv.org/abs/2302.01434v2
# Inference in Non-stationary High-Dimensional VARs ###### Abstract In this paper we construct an inferential procedure for Granger causality in high-dimensional non-stationary vector autoregressive (VAR) models. Our method does not require knowledge of the order of integration of the time series under consideration. We augment the VAR with at least as many lags as the suspected maximum order of integration, an approach which has been proven to be robust against the presence of unit roots in low dimensions. We prove that we can restrict the augmentation to only the variables of interest for the testing, thereby making the approach suitable for high dimensions. We combine this lag augmentation with a post-double-selection procedure in which a set of initial penalized regressions is performed to select the relevant variables for both the Granger causing and caused variables. We then establish uniform asymptotic normality of a second-stage regression involving only the selected variables. Finite sample simulations show good performance; an application to investigate the (predictive) causes and effects of economic uncertainty illustrates the need to allow for unknown orders of integration. keywords: Granger causality, Non-stationarity, Post-double-selection, Vector autoregressive models, High-dimensional inference. _JEL codes:_ C55, C12, C32. ## 1 Introduction In this paper we construct an inferential procedure for Granger causality in high-dimensional non-stationary vector autoregressive (VAR) models. Investigating causes and effects in time series models has a long and rich history, dating back to the seminal work of Granger (1969). The statistical assessment of the directional predictability among two (or blocks of) time series can have important consequences for decision making processes. Applications of Granger causality range from macroeconomics and finance to network theory, climatology, and even neuroscience. For instance, the evidence of causality between money and gross domestic product is a long-debated issue in the macroeconomic literature. This was first discussed in Sims et al. (1990) and Stock and Watson (1989) and is still debated to this day; see, e.g., Miao et al. (2020). Financial applications of Granger causality include, among others, Billio et al. (2012), who study the connectedness among monthly returns of hedge funds, banks, broker/dealers, and insurance companies. Hecq et al. (2021) build Granger causality networks from high-dimensional vector autoregressive models, describing the dynamic volatility spillovers among a large set of stock returns. Many applications are also found in climate science, for instance in trying to understand and disentangle the causes of climate change. Among others, Stern and Kaufmann (2014) investigate causality between greenhouse gas emissions and temperature. In neuroscience, Granger causality is employed to understand mechanisms underlying complex brain function and cognition, with examples in the field of functional neuroimaging (see e.g., Seth et al., 2015, Friston et al., 2013, for some reviews). With the increased availability of larger and richer datasets, these causality concepts have recently been extended to a high-dimensional setting where they can benefit from the inclusion of many more series within the information set. In the original concept of Granger causality (Granger, 1969), conditioning on a given information set plays a central role.2 After all, Granger causality is a study of predictability.
Only by considering predictability given a specific conditioning set is it possible to attach some sort of causal meaning to the outcome (where the nature of that causality is still up for debate, of course). To identify direct 'truly causal' links between variables would require one to condition upon all possible variables that may be related to the variables of interest. Otherwise, any discovered relation may simply be an artifact of omitted variable bias, which would invalidate a causal interpretation. Indeed, Granger himself envisioned the information set as "all the knowledge in the universe available at that time" (Granger, 1980, p.330). While this concept cannot be operationalized with any finite amount of data, the availability of increasingly high-dimensional datasets, along with the econometric techniques to analyze them, provides a great opportunity for turning Granger causality from 'just predictability' into a concept to which at least some form of causal interpretation can be attached. Footnote 2: Throughout the paper we focus on Granger causality in mean. While there are clearly many other forms of causal or predictive relations possible, Granger causality in mean is the most prominent and therefore the focus of our attention. It should therefore be understood that whenever we mention Granger causality in the paper, we refer to Granger causality in mean. Recently, Hecq et al. (2021) (HMS henceforth) proposed a test for Granger causality in high-dimensional vector autoregressive models. Their approach combines dimensionality reduction techniques based on penalized regression, such as the lasso of Tibshirani (1996), with the post-double-selection procedure of Belloni et al. (2014), designed to guarantee uniform asymptotic validity of the post-selection least squares estimator. However, HMS assumed stationarity of all the time series considered. This is a typical assumption in much of the literature on regularization methods, in particular when inference is considered. While the literature on estimation and forecasting with high-dimensional non-stationary processes is growing (see e.g. Smeekes and Wijler, 2020, and the references therein), this is not the case for inference, due to the complexities arising with unit roots and cointegration, which already have severe effects in low-dimensional settings. Working with (data possibly transformed to yield) stationary time series avoids these complications in the asymptotic analysis and allows one to invoke standard (Gaussian) limit theory, thereby enabling the use of standard inferential procedures such as \(t\) and \(F\)-tests. On the other hand, it assumes prior knowledge of the order of integration of all the time series entering the model. This prior knowledge is usually acquired via unit root tests such as the augmented Dickey-Fuller (ADF) test (see Dickey and Fuller, 1979). However, these tests are sensitive to their specifications, such as the inclusion of a deterministic time trend in the regression equation and the choice of the lag length. Even performing what seems to be the same test in, for example, different R packages may actually lead to different outcomes (Smeekes and Wilms, 2020). That does not even take into account that there are many different unit root tests, none of which is clearly preferred to the others, nor how specific data features (and their treatment) such as seasonality (adjustments), structural breaks and outliers may affect these tests.
As such, the outcome of a unit root test is ambiguous at best; even more so if one also takes into account their well-documented low power (Cochrane, 1991) on the one hand, and the accumulation of the probability of obtaining false positives through multiple testing, especially in high dimensions, on the other hand. But even if one did know the true orders of integration, transformations such as differencing to achieve stationarity are not innocuous. In particular, information about long-run relations such as cointegration is deleted from the series. Yet the error correction mechanisms at play in the movement towards long-run equilibria may well induce (Granger) causal relations that are not apparent anymore in the transformed data. While this is not necessarily a problem if one is interested in the transformed data as such (for example, the growth rates of economic series may be objects of interest in themselves), in many applications we are interested in the levels of the series, and the transformations are only done to avoid issues with non-stationarity. It would therefore be very beneficial to practitioners to have high-dimensional inference methods available that are robust to (unknown) orders of (co)integration. In this paper we develop a method which allows for testing Granger causality in high-dimensional VAR models, irrespective of the (co)integration properties of the time series in the VAR. We avoid any bias coming from unit root and cointegration pre-testing and instead use the VAR in levels directly to perform inference on the (Granger causality) parameters of interest. The procedure we design builds on the work of Toda and Yamamoto (1995), who first considered a simple lag augmentation of the system, which they showed provides asymptotically normal estimators irrespective of potential unit roots and cointegration (see also Dolado and Lutkepohl, 1996). The same approach has also recently been used to develop inference on impulse response functions that is robust to unit roots (Inoue and Kilian, 2020, Montiel Olea and Plagborg-Moller, 2021). We modify the approach outlined above by confining lag augmentation to only the variable(s) tested as Granger causing, instead of adding lags of all variables. We show that this modification, which makes it suitable for application to high-dimensional VARs, does not affect the asymptotic properties. In addition, we modify the post-double-selection procedure of HMS to prevent the possibility of spurious regression, thereby extending its uniform validity to data with potentially (co)integrated time series. The remainder of the paper is organized as follows: Section 2 introduces the model, the Granger causality testing and our lag-augmented post-double-selection approach. The theoretical properties of our method are studied in Section 3. In Section 4 we investigate finite-sample performance through a simulation study, while Section 5 proposes a data-driven method to find a sensible upper bound for the lag length in the VAR. Section 6 uses the proposed testing framework to investigate the causes and effects of economic uncertainty in the context of the FRED-MD dataset. Section 7 concludes. In Appendix A the theory underlying the lag augmentation and the asymptotic properties is developed, and introductory lemmas and results complementary to the following two appendices are reported. Appendix B is devoted to the validation of high-level assumptions, while additional empirical results are presented in Appendix C. A few words on notation.
For any \(n\)-dimensional vector \(\mathbf{x}\), we let \(\left\|\mathbf{x}\right\|_{p}=\left(\sum_{i=1}^{n}|x_{i}|^{p}\right)^{1/p}\) denote the \(\ell_{p}\)-norm. For any index set \(S\subseteq\{1,\ldots,n\}\), let \(\mathbf{x}_{S}\) denote the sub-vector of \(\mathbf{x}\) containing only those elements \(x_{i}\) such that \(i\in S\). \(|S|\) denotes the cardinality of the set \(S\). We use \(\xrightarrow{p}\) and \(\xrightarrow{d}\) to denote convergence in probability and distribution, respectively. ## 2 Granger Causality Tests for Nonstationary High-Dimensional Data In this section we propose our model, our strategy for lag augmentation of the Granger causality tests, and the post-double-selection procedure needed to achieve uniformly valid inference. Section 2.1 first presents the model and the lag-augmented Granger causality test. Next, Section 2.2 sets out the post-double-selection procedure. ### Lag-Augmented Granger Causality Testing Let \(\mathbf{z}_{1},\ldots,\mathbf{z}_{T}\) be a \(K\)-dimensional multiple time series process, where \(\mathbf{z}_{t}=(y_{t},x_{t},\mathbf{w}_{t}^{\prime})^{\prime}\). Here \(y_{t}\) is the series we would like to test as being Granger caused, \(x_{t}\) is the potentially Granger causing series, and \(\mathbf{w}_{t}\) is a \((K-2)\)-dimensional vector of controls constituting the _information set_. We allow for \(K\) to be large, potentially larger than (and growing with) the sample size \(T\). We assume \(\mathbf{z}_{t}\) is generated by a VAR(\(p\)) process as \[\mathbf{z}_{t}=\mathbf{A}_{1}\mathbf{z}_{t-1}+\cdots+\mathbf{A}_{p}\mathbf{z}_{t-p}+\mathbf{u}_{t},\qquad t=p+1,\ldots,T, \tag{1}\] where \(\mathbf{A}_{1},\ldots,\mathbf{A}_{p}\) are \(K\times K\) parameter matrices and \(\mathbf{u}_{t}\) is a \(K\times 1\) vector of error terms. **Assumption 1**.: The VAR model in (1) satisfies: 1. \(\{\mathbf{u}_{t}\}_{t=1}^{T}\) is a martingale difference sequence (mds) with respect to the filtration \(\mathcal{F}_{t}=\sigma(\mathbf{z}_{t},\mathbf{z}_{t-1},\mathbf{z}_{t-2},\ldots)\), such that \(\mathbb{E}(\mathbf{u}_{t}|\mathcal{F}_{t-1})=\mathbf{0}\) for all \(t\); the \(K\times K\) covariance matrix \(\mathbf{\Sigma}_{u}=\mathbb{E}(\mathbf{u}_{t}\mathbf{u}_{t}^{\prime})\) is positive definite and \(\mathbb{E}\left\|\mathbf{u}_{t}\right\|^{2+\delta}<\infty\) for some \(\delta>0\). 2. The roots of \(\det(\mathbf{I}_{K}-\sum_{j=1}^{p}\mathbf{A}_{j}z^{j})\) lie on or outside the unit circle. Note that Assumption 1(b) allows for the time series to have unit roots and be cointegrated. We specifically allow elements of \(\mathbf{z}_{t}\) to be integrated of order \(d\): \(I(d)\) for \(d=0,1,2\) and possibly cointegrated of order \(d,b\): \(CI(d,b)\) with \(0<b\leq d\). We formulate more specific assumptions on (co)integration properties in Section 3 and Appendix B. We are interested in testing the null hypothesis of Granger non-causality in mean between the Granger causing series \(x_{t}\) and the Granger caused \(y_{t}\), conditional on all the series in \(\mathbf{w}_{t}\). For the moment we assume the lag-length \(p\) in (1) to be known; we shall further elaborate on data-driven ways to select \(p\) in Section 5. Also, in order to ease the notation, we omit both the intercept and any polynomial time trend from the model; the results we derive easily extend to those cases as well. It is convenient to introduce the following stacked notation.
Let \(\mathbf{X}_{-p}=(\mathbf{x}_{-1},\ldots,\mathbf{x}_{-p})\) denote the \(T\times p\) matrix containing the \(p\) lags of the Granger causing variable \(x_{t}\), where \(\mathbf{x}_{-j}\) is the vector containing the observations corresponding to its \(j\)-th lag.3 Similarly, we define \(\mathbf{Y}_{-p}\) containing the \(p\) lags of the Granger caused variable \(y_{t}\) and \(\mathbf{W}_{-p}\) for the conditioning set, such that \(\mathbf{W}_{-p}\) is a \(T\times(K-2)p\) matrix. Then, our equation of interest for the Granger causality testing can be stated as Footnote 3: To keep \(T\) observations for the lags, one can replace the missing values for \(t\leq 0\) with zeros. Alternatively, the vectors are shortened to contain \(T-p\) observations only. Both approaches are asymptotically equivalent, and for notational simplicity in the following we do not explicitly distinguish between them. \[\mathbf{y}=\mathbf{X}_{-p}\mathbf{\beta}+\mathbf{Y}_{-p}\mathbf{\delta}_{1}+\mathbf{W}_{-p}\mathbf{\delta}_{2}+\mathbf{u}=\mathbf{X}_{-p}\mathbf{\beta}+\mathbf{V}\mathbf{\delta}+\mathbf{u}, \tag{2}\] where for notational simplicity we define \(\mathbf{V}=(\mathbf{Y}_{-p},\mathbf{W}_{-p})\) as the matrix containing all control variables and \(\mathbf{\delta}=(\mathbf{\delta}^{\prime}_{1},\mathbf{\delta}^{\prime}_{2})^{\prime}\) its coefficients. Testing for no Granger causality is then equivalent to testing the following null hypothesis: \[H_{0}:\mathbf{\beta}=\mathbf{0}\quad\text{against}\quad H_{1}:\mathbf{\beta}\neq\mathbf{0}. \tag{3}\] To account for the potential unit roots in the system, we follow the approach pioneered by Toda and Yamamoto (1995) and Dolado and Lutkepohl (1996) of augmenting the regression of interest with redundant lags of the variables. However, in contrast to the existing approaches, we only augment the lags of the Granger causing series \(x_{t}\). That is, we consider the lag-augmented regression \[\mathbf{y}=\mathbf{X}_{-p}\mathbf{\beta}+\sum_{j=1}^{d}\beta_{p+j}\mathbf{x}_{-(p+j)}+\mathbf{V}\mathbf{\delta}+\mathbf{u}=\mathbf{X}_{-(p+d)}\mathbf{\beta}_{+}+\mathbf{V}\mathbf{\delta}+\mathbf{u}, \tag{4}\] where \(\mathbf{X}_{-(p+d)}=(\mathbf{X}_{-p},\mathbf{x}_{-(p+1)},\ldots,\mathbf{x}_{-(p+d)})\) contains the \(p+d\) lags of \(x_{t}\). Here \(d\) represents the maximum order of integration the series are suspected of having. Note that in fact \(\beta_{p+j}=0\) for all \(j\geq 1\), such that \(\mathbf{\beta}_{+}=(\mathbf{\beta}^{\prime},\mathbf{0}^{\prime}_{d})^{\prime}\), as we are adding redundant variables. It should be clear that by testing whether the first \(p\) elements of \(\mathbf{\beta}_{+}\), those corresponding to \(\mathbf{\beta}\) in (2), are zero, we can perform the same test of Granger non-causality as in the original setup. By adding "free" lags of \(x_{t}\), we in essence allow this variable to "difference itself" into the correct order to remove the unit roots. As a simple illustration, consider the following DGP, where \(x_{t}\) may be \(I(1)\): \[y_{t}=\beta x_{t-1}+u_{1,t},\qquad x_{t}=\rho x_{t-1}+u_{2,t},\quad|\rho|\leq 1. \tag{5}\] Testing whether \(\beta=0\) by estimating this regression directly is complicated, as the limit distributions of the least squares estimator and the corresponding test change depending on whether \(x_{t}\) is \(I(1)\) or \(I(0)\).
By adding a redundant lag we can write (5) as \[y_{t}=\beta x_{t-1}+0x_{t-2}+u_{1,t}=\beta u_{2,t-1}+\beta\rho x_{t-2}+u_{1,t}, \tag{6}\] suggesting that regressing on \(x_{t-1}\) and \(x_{t-2}\) is equivalent to regressing on \(u_{2,t-1}\) and \(x_{t-2}\). Moreover, if \(u_{2,t}\) were observed, we could test \(\beta=0\) directly using its regression coefficient. In Appendix A.1 we show formally that the equivalence in (6) not only continues to hold in the presence of higher order lags as well as control variables, but also that testing for Granger causality in the lag-augmented regression is in fact equivalent to testing on the appropriately transformed variables. This, in turn, allows us to retrieve the asymptotic normality of the least squares estimator regardless of the order of integration, provided that the lag order \(p\) of the VAR and the _maximum_ order of integration \(d\) are correctly specified. The main difference compared to the original results of Toda and Yamamoto (1995) is that we only augment the Granger causing series \(x_{t}\). While this difference is not of importance when \(K\) is small, it opens the door for high-dimensional applications where \(K\) is large, and may be even larger than \(T\). As we cannot estimate such regressions with least squares anymore, we combine lag-augmentation with the post-double-selection framework of HMS to construct Granger causality tests in high-dimensional models that are robust to unknown orders of integration. We describe the method in the next section. **Remark 1**.: For ease of exposition we confine our attention to the study of bivariate Granger causality relations, conditional on a large information set. At the cost of more involved notation and algebra, our approach can be extended to Granger causality between multiple variables. In that case, lag augmentation is needed for all Granger causing variables, which is only feasible if the block of Granger causing variables is not too large. HMS provide details about this in the stationary setting; the same approach can be adapted here. **Remark 2**.: One important aspect of the current framework is that the lag-length \(p\) of the VAR must be at least as large as the suspected maximum order of integration \(d\). It is therefore important to specify \(p\) correctly in practice. We return to this issue in Section 5. One might also worry that specifying \(d\) too high, or having mixed orders of integration among multiple Granger causing variables, would lead to over-differencing, i.e., moving average unit roots being introduced by differencing stationary time series (see e.g., Chang and Dickey, 1994). This, however, does not happen here, as the additional lags of the Granger causing and Granger caused variables are used "at convenience"; if they are not needed because the variables are already stationary, inclusion of these redundant lags will at most marginally decrease the power of the test, given the small over-specification of the lag length. **Remark 3**.: Even if the variables in a particular dataset are thought to be at most \(I(1)\), it may still pay off to take \(d=2\). The choice of \(d=2\) is supported by simulations reported later in Section 4. It is well known that when one or more roots of the characteristic polynomial are close to unity, the distribution of the least squares estimator becomes skewed, yielding estimators which tend to underestimate the true autoregressive parameters (see e.g. Fuller, 2009). This also causes difficulties in performing inference on these parameters. By augmenting with \(d=2\) lags, one can avoid any issues with near unit roots. The simulations reported in Section 4 confirm that in the presence of substantial autocorrelation beyond the first lag, augmenting with \(d=2\) lags is an easy way to improve the finite sample behaviour of the test. **Remark 4**.: If the \(I(d)\) series that are the object of the Granger causality test are also actually cointegrated, then the lag augmentation does not serve any purpose, as no spurious relation would be estimated (Dolado and Lutkepohl, 1996). This causes a slight loss in power which, however, is minimal in practice, as shown in Section 4. If, however, the series in question are not cointegrated, then the lag augmentation becomes paramount to render the series difference-stationary before estimation and testing. Note that all sorts of situations in between are also allowed: some variables may be cointegrated, others contain 'pure' stochastic trends, and others may be stationary. In fact, in our high-dimensional setup, where we apply the test to a large dataset containing many variables, such mixed properties seem likely.
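To see the effect of the augmentation in (5) and (6) numerically, the following self-contained numpy sketch simulates the DGP with \(x_{t}\) a random walk, correlated innovations, and \(\beta=0\). It is an illustration under these stated assumptions, not the authors' code, and `t_stat_first` is a hypothetical helper name.

```python
# Numerical illustration of the augmentation in (5) and (6): x_t is a
# random walk, the null beta = 0 holds, and the innovations of y and x
# are correlated. Purely illustrative; not the authors' implementation.
import numpy as np

rng = np.random.default_rng(1)

def t_stat_first(X, y):
    """OLS t-statistic on the first column of X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (len(y) - X.shape[1])
    return beta[0] / np.sqrt(s2 * np.linalg.inv(X.T @ X)[0, 0])

T, reps, c = 200, 2000, 0.8
t_plain, t_aug = [], []
for _ in range(reps):
    u2 = rng.standard_normal(T)
    u1 = c * u2 + np.sqrt(1 - c**2) * rng.standard_normal(T)
    x = np.cumsum(u2)                       # x_t = x_{t-1} + u_{2,t}
    y = u1                                  # beta = 0: the null holds
    yt, x1, x2 = y[2:], x[1:-1], x[:-2]     # align y_t, x_{t-1}, x_{t-2}
    t_plain.append(t_stat_first(x1[:, None], yt))
    t_aug.append(t_stat_first(np.column_stack((x1, x2)), yt))

# The plain t-statistic inherits a nonstandard, Dickey-Fuller-type
# component through the correlated innovations, while the lag-augmented
# one is approximately N(0, 1): its rejection rate at the nominal 5%
# level should be close to 0.05.
print(np.mean(np.abs(t_plain) > 1.96), np.mean(np.abs(t_aug) > 1.96))
```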
### Inference after selection by the lasso Assuming that \(\mathbf{\delta}\) is sparse, in the sense that many elements of the vector are equal to zero, one might consider estimating (4) with a technique that does variable selection, such as the lasso, and then re-estimating the equation including only the selected control variables. However, in doing so we run into the problems with inference after selection (Leeb and Potscher, 2005), where even if selection is done consistently, the (diminishing) probability of omitting a relevant variable causes a sufficiently large bias to prevent uniform convergence of the post-selection least squares estimator to a Gaussian limit distribution. To circumvent this problem Belloni et al. (2014) introduced the post-double-selection (PDS) method, which selects variables relevant not only for the outcome variable, but also for the treatment variable. This double selection reduces the probability of omitting relevant variables to such an extent that it does not affect the asymptotic distribution anymore. The PDS framework was used by HMS to develop high-dimensional tests for Granger causality in sparse VAR models. In this section we show how to adapt the post-double-selection framework of HMS to the unit root non-stationary framework with lag augmentation. In this framework we first perform a set of initial regressions, to be estimated with variable selection techniques such as the lasso, of the dependent variable plus the explanatory variables of interest (here the \(p\) lags of \(x_{t}\)) on all other variables. To properly account for potential unit roots and avoid spurious regression in these initial regressions, we need to slightly adapt the setup of HMS. Let \(\mathbf{X}_{-p,\setminus\{j\}}\) denote the matrix \(\mathbf{X}_{-p}\) from which the \(j\)-th column, corresponding to the \(j\)-th lag of \(x_{t}\), has been removed. Let \(\mathbf{Z}_{-p}=(\mathbf{X}_{-p},\mathbf{Y}_{-p},\mathbf{W}_{-p})\) denote the matrix of all (non-augmented) variables and \(\mathbf{Z}_{-p,\setminus\{j\}}=(\mathbf{X}_{-p,\setminus\{j\}},\mathbf{Y}_{-p},\mathbf{W}_{-p})\). Then we consider the following regressions in the first step: \[\mathbf{y}=\mathbf{Z}_{-p}\mathbf{\eta}_{0}+\mathbf{u},\qquad\mathbf{x}_{-j}=\mathbf{Z}_{-p,\setminus\{j\}}\mathbf{\eta}_{j}+\mathbf{e}_{j},\quad j=1,\ldots,p.
\tag{7}\] In contrast to HMS, who follow Belloni et al. (2014) by only regressing \(\mathbf{y}\) and \(\mathbf{x}_{-j}\) on the controls \(\mathbf{Y}_{-p}\) and \(\mathbf{W}_{-p}\), we also add all other lags of \(x_{t}\). This is done to prevent the error terms in (7) from containing unit roots, which would render the regressions spurious. **Remark 5**.: Note that in the situation where \(p=d\), there may still be a risk of spurious regression in the first step. E.g., consider \(p=d=1\), where the regression of \(x_{t-1}\) on the remaining variables will not contain any lags (or leads) of \(x_{t-1}\). This yields a spurious regression if \(x_{t}\) is a non-cointegrated unit root process. We therefore recommend always taking \(p\geq d+1\) in such cases. If this is not desirable, e.g. if the sample size is too small to sustain such a large \(p\), it is also possible to augment the first-stage regressions with an additional lag of \(x_{t}\) to eliminate the possibility of spurious regression. In the following we work under the condition that the error terms \(\mathbf{e}_{j}\) are \(I(0)\), thereby implicitly assuming that one of the two measures suggested above has been taken if needed. Although the concept is tricky with integrated variables, the coefficients \(\mathbf{\eta}_{j}\) in (7) can be thought of as best linear predictors. Indeed, as the error terms \(\mathbf{e}_{j}\) are \(I(0)\), these coefficients are ensured to exist and can be defined as the probability limits of the respective least squares estimators while keeping the dimension \(K\) fixed. As will be formalized in Assumption 2, we assume that the coefficients \(\mathbf{\eta}_{j}\) are sparse. Let \(\mathcal{S}_{j}=\{m>p:\eta_{m,j}\neq 0\}\), \(j=0,1,\ldots,p\), be the sets of active variables in (7).4 Then, we need the cardinality of these sets to be small; that is, smaller than \(T\) and at most growing at a slow rate relative to \(T\). In fact, the union over all these sets, \(\mathcal{S}:=\bigcup_{j=0}^{p}\mathcal{S}_{j}\), is our set of interest. This set represents all variables that need to be controlled for when regressing \(\mathbf{y}\) on \(\mathbf{X}_{-p}\), as they either have non-zero coefficients in (2) _or_ are correlated with \(x_{t}\). In fact, variables would need to have both properties to cause omitted variable bias if they were not included in the final regression. By aiming to recover \(\mathcal{S}\), we therefore have two opportunities to select relevant variables, which is sufficient to reduce the probability of missing them to an asymptotically negligible level. Let \(\mathbf{V}_{\mathcal{S}}\) denote the matrix consisting of only those columns of \(\mathbf{V}\) that correspond to the selected variables in \(\mathcal{S}\), and \(\mathbf{\delta}_{\mathcal{S}}\) the corresponding parameter vector. The goal of the PDS procedure is then to recover the regression \[\mathbf{y}=\mathbf{X}_{-(p+d)}\mathbf{\beta}_{+}+\mathbf{V}_{\mathcal{S}}\mathbf{\delta}_{\mathcal{S}}+\mathbf{u}, \tag{8}\] in the second stage, and base inference on this. To obtain an estimate of the set \(\mathcal{S}\), one can use the lasso or any of its relatives that similarly ensure variable selection.
The lasso (see Tibshirani, 1996) simultaneously performs variable selection and estimation of the parameters in (7) by solving the following minimization problems \[\hat{\mathbf{\eta}}_{0}=\underset{\mathbf{\eta}}{\arg\min}\bigg{(}T^{-1}\big{\|}\mathbf{y}-\mathbf{Z}_{-p}\mathbf{\eta}\big{\|}_{2}^{2}+\lambda\big{\|}\mathbf{\eta}\big{\|}_{1}\bigg{)}, \tag{9}\] \[\hat{\mathbf{\eta}}_{j}=\underset{\mathbf{\eta}}{\arg\min}\bigg{(}T^{-1}\big{\|}\mathbf{x}_{-j}-\mathbf{Z}_{-p,\setminus\{j\}}\mathbf{\eta}\big{\|}_{2}^{2}+\lambda\big{\|}\mathbf{\eta}\big{\|}_{1}\bigg{)},\quad j=1,\ldots,p,\] where \(\lambda\) is a non-negative tuning parameter determining the strength of the penalty. Here, we follow the framework of HMS and use the Bayesian information criterion (BIC) to select the tuning parameter, coupled with a penalty lower bound ensuring a maximum number of selected variables per estimated equation (see Remark 7 for details). Minimizing an information criterion (IC) in order to determine an appropriate data-driven \(\lambda\) is one way to deal with dependent data (see HMS for an overview of other methods and their finite sample behaviors). The first step of our PDS method is then to estimate each of the regressions in (7) using a penalized regression technique such as in (9). Let \(\hat{\mathcal{S}}_{j}=\{m>p:\hat{\eta}_{m,j}\neq 0\}\) represent the corresponding sets of active variables retained in each of these regressions, and let \(\hat{\mathcal{S}}=\bigcup_{j=0}^{p}\hat{\mathcal{S}}_{j}\) denote the set of all active variables. Then we base our inference on the second-stage regression \[\mathbf{y}=\mathbf{X}_{-(p+d)}\mathbf{\beta}_{+}+\mathbf{V}_{\hat{\mathcal{S}}}\mathbf{\delta}_{\hat{\mathcal{S}}}+\mathbf{u}, \tag{10}\] which may now be treated as if no selection took place, and can simply be estimated by least squares. As we will show in the next section, the OLS estimator converges uniformly to a normal distribution, which in turn makes standard tests such as the Wald test or the LM test applicable with their regular limiting distributions. Algorithm 1 describes the main steps of our post-double-selection lag-augmented (PDS-LA) Granger causality test, implemented through the Lagrange Multiplier (LM) or Wald test. **Remark 6**.: In Algorithm 1, the choice between Step (5a) and (5b) does not affect the finite sample results of the test whenever the sample size \(T\) is large enough. The small sample correction in (5b) (see Kiviet, 1986) has a wider practical applicability, since (5a) suffers from size distortion in small samples; therefore, in Section 4 we always use (5b) for the Monte-Carlo simulations of the PDS-LA-LM test. Unreported simulations show that the Wald test performs very similarly to the LM test, albeit with slightly bigger size distortions. **Remark 7**.: The algorithm designed in HMS employs a lower bound on the penalty to ensure that in each selection regression at most \(cT\) terms get selected, for some \(0<c<1\). Similarly, we also employ a \(c=0.5\) lower bound on the selected variables in Algorithm 1. This ensures that in each equation the lasso does not select too many variables, as this would render the union too large and hence infeasible for post-least-squares estimation. Our PDS procedure does not require consistent model selection but only estimation consistency (see Assumption 2(c) below). Mistakes are allowed to occur in the selection: variables might be incorrectly included, as long as the estimator remains sufficiently sparse and consistency is guaranteed. Note that even with the lower bound it remains possible that the lasso selects largely distinct variables at every selection step. Should such a case occur, where the number of selected variables \(|\hat{\mathcal{S}}|\) is larger than the sample size, post-OLS would be infeasible. One way to avoid this would be to impose an ad-hoc increase in the tightness of the bound on the selected variables. Alternatively, one could switch to a sparser estimator, such as the adaptive lasso. **Remark 8**.: Although it is not necessary for the theory to hold, we advocate always including the \(p\) lags of \(y_{t}\) in the second-stage regression. Erroneously omitting them might induce spurious regression, which is to be avoided. We achieve this by including them without penalty in the first-stage regressions, such that they are included in the active set by default. Additionally, we do not penalise the lags of \(x_{t}\) in the first-stage regressions, to further reduce the probability of spurious regression and improve finite sample behaviour.
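To fix ideas, here is a compact Python sketch of the selection and second-stage steps just described. It is a simplified reading of the procedure, not the authors' _HDGCvar_ implementation: the function names are invented, scikit-learn's lasso uses a slightly different penalty scaling than (9), and the unpenalized treatment of the lags of \(y_{t}\) and \(x_{t}\) from Remark 8 as well as the penalty lower bound of Remark 7 are omitted for brevity.

```python
# Simplified sketch of the PDS-LA selection steps (7)/(9) and the
# second-stage OLS (10); illustrative only, not the HDGCvar package.
import numpy as np
from sklearn.linear_model import Lasso

def bic_lasso_active(X, y, lambdas):
    """Lasso over a grid of penalties; pick lambda by BIC, return active set."""
    T = len(y)
    best_coef, best_bic = None, np.inf
    for lam in lambdas:
        coef = Lasso(alpha=lam, fit_intercept=False, max_iter=10_000).fit(X, y).coef_
        rss = np.sum((y - X @ coef) ** 2)
        bic = T * np.log(rss / T) + np.log(T) * np.sum(coef != 0)
        if bic < best_bic:
            best_coef, best_bic = coef, bic
    return np.flatnonzero(best_coef)

def pds_select(y, X_lags, V, lambdas):
    """Union of selected controls over the p+1 first-stage regressions."""
    kv = V.shape[1]   # controls occupy columns 0..kv-1 in every design below
    sel = {i for i in bic_lasso_active(np.hstack((V, X_lags)), y, lambdas)
           if i < kv}
    for j in range(X_lags.shape[1]):
        Z = np.hstack((V, np.delete(X_lags, j, axis=1)))
        sel |= {i for i in bic_lasso_active(Z, X_lags[:, j], lambdas) if i < kv}
    return sorted(sel)

def second_stage_ols(y, X_aug, V, sel):
    """OLS of y on the d-augmented lags of x and the selected controls."""
    W = np.hstack((X_aug, V[:, sel]))
    beta, *_ = np.linalg.lstsq(W, y, rcond=None)
    return beta   # a Wald/LM test on the first p entries then follows
```

A Wald or LM statistic on the first \(p\) coefficients of the second stage then delivers the Granger causality test, as in Algorithm 1.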
## 3 Theoretical Properties In this section we present the main theoretical result. We show that the post-selection, lag-augmented least squares estimator \(\hat{\mathbf{\beta}}_{p}\) is asymptotically Gaussian uniformly over the parameter space; tests for Granger causality are therefore asymptotically \(\chi^{2}\) distributed. For the PDS-LA method to deliver uniformly valid inference, a set of assumptions is necessary. Among others, sparsity needs to be assumed on the high-dimensional vector \(\mathbf{\delta}\). Also, a restricted (sparse) eigenvalue condition needs to be assumed on the (scaled) Gram matrix, bounding away its smallest eigenvalue over a subset (cone) of \(\mathbb{R}^{Kp}\). This condition essentially guarantees that over a sufficiently large subset of the parameter space the Gram matrix is well behaved and thus invertible. An empirical process bound is also required. This states that the (scaled) process \(\mathbf{Z}^{\prime}_{-p}\mathbf{u}\) uniformly concentrates around zero. These conditions are standard in the literature to prove the consistency of the lasso. However, when nonstationary processes are considered, more refined results are required. In particular, scaling becomes more complicated, as time series of different orders need to be scaled at different rates. In addition, when time series are cointegrated, we first need to separate the stochastic trends from the stationary components before the appropriate scaling can be applied. For example, suppose that \(\mathbf{z}_{t}\) can be written as the cointegrated system \[\Delta\mathbf{z}_{t}=\mathbf{A}\mathbf{B}^{\prime}\mathbf{z}_{t-1}+\mathbf{u}_{t},\] where \(\mathbf{u}_{t}\) is \(I(0)\) and \(\mathbf{A}\) and \(\mathbf{B}\) are \(n\times r\) matrices with \(r<n\). Then define \(\mathbf{s}_{t}=\mathbf{Q}\mathbf{z}_{t}\), where \(\mathbf{Q}=(\mathbf{B},\mathbf{A}_{\perp})^{\prime}\). Then we can write \[\Delta\mathbf{s}_{t}=\mathbf{Q}\mathbf{A}\mathbf{B}^{\prime}\mathbf{Q}^{-1}\mathbf{s}_{t-1}+\mathbf{Q}\mathbf{u}_{t}=\begin{bmatrix}\mathbf{B}^{\prime}\mathbf{A}&\mathbf{0}\\ \mathbf{0}&\mathbf{0}\end{bmatrix}\mathbf{s}_{t-1}+\mathbf{Q}\mathbf{u}_{t}.\] Now the first \(r\) variables in \(\mathbf{s}_{t}\) are \(I(0)\), while the remaining ones are \(I(1)\). Further details on constructing such a matrix \(\mathbf{Q}\) can be found in, e.g., Lutkepohl (2005, Chapter 7).
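As a toy numerical check of this rotation (all matrix values below are illustrative choices, not taken from the paper), one can simulate a small cointegrated system and verify that \(\mathbf{B}^{\prime}\mathbf{z}_{t}\) behaves like a stationary process while \(\mathbf{A}_{\perp}^{\prime}\mathbf{z}_{t}\) carries the common stochastic trend:

```python
# Toy check of the rotation s_t = Q z_t for a bivariate cointegrated
# system; A, B, A_perp are illustrative values only.
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[-0.5], [0.25]])       # adjustment (loading) matrix, n x r
B = np.array([[1.0], [-1.0]])        # cointegrating vector, n x r
A_perp = np.array([[0.25], [0.5]])   # orthogonal complement: A_perp' A = 0
Q = np.hstack((B, A_perp)).T         # Q = (B, A_perp)'

T = 5000
z = np.zeros((T, 2))
for t in range(1, T):
    z[t] = z[t - 1] + A @ B.T @ z[t - 1] + rng.standard_normal(2)

s = z @ Q.T                          # rows are s_t' = (Q z_t)'
print(np.var(s[:T // 2, 0]), np.var(s[T // 2:, 0]))  # stable: B'z_t is I(0)
print(np.var(s[:T // 2, 1]), np.var(s[T // 2:, 1]))  # grows: A_perp'z_t is I(1)
```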
Given the existence of such a matrix \(\mathbf{Q}\), we can without loss of generality assume that we may partition \(\mathbf{s}_{t}\) as \(\mathbf{s}_{t}=(\mathbf{s}_{0,t},\mathbf{s}_{1,t},\mathbf{s}_{2,t})^{\prime}\), where \(\mathbf{s}_{i,t}\) contains the \(I(i)\) variables. We then consider a scaling matrix \[\mathbf{D}_{T}:=\begin{bmatrix}T^{1/2}\mathbf{I}_{n_{0}}&\mathbf{0}&\mathbf{0}\\ \mathbf{0}&T\mathbf{I}_{n_{1}}&\mathbf{0}\\ \mathbf{0}&\mathbf{0}&T^{2}\mathbf{I}_{n_{2}}\end{bmatrix}, \tag{11}\] where \(n_{i}\) corresponds to the number of \(I(i)\) variables in \(\mathbf{s}_{t}\). Finally, we let \(\mathbf{G}_{T}:=\mathbf{Q}\mathbf{D}_{T}\) denote the final 'scaling and rotation' matrix. **Assumption 2**.: Let \(\delta_{T}\) and \(\Delta_{T}\) denote sequences such that \(\delta_{T},\Delta_{T}\to 0\) as \(T\to\infty\). For \(\mathbf{D}_{T}\) as defined in (11), assume there exists a \(Kp\times Kp\) matrix \(\mathbf{G}_{T,\mathbf{Z}}=\mathbf{Q}\mathbf{D}_{T}\), conformable with \(\mathbf{Z}_{-p}\), such that the following holds: 1. **Deviation Bound:** With probability at least \(1-\Delta_{T}\) we have that the empirical process satisfies \(\left\|\mathbf{G}_{T,\mathbf{Z}}^{-1}\mathbf{Z}_{-p}^{\prime}\mathbf{u}\right\|_{\infty}\leq\bar{\gamma}_{T}\), for the deterministic sequence \(\bar{\gamma}_{T}\). 2. **Boundedness:** Let \(\mathbf{\beta}\) in (2) be in the interior of a compact parameter space \(\mathbb{B}\subset\mathbb{R}^{K}\). 3. **Consistency:** With probability \(1-\Delta_{T}\), the following error bounds for \(\mathbf{\eta}_{j}\) and \(\hat{\mathbf{\eta}}_{j}\), defined in (7) and (9), hold: \[\left\|\mathbf{Z}_{-p}\left(\hat{\mathbf{\eta}}_{0}-\mathbf{\eta}_{0}\right)\right\|_{2}^{2}\leq\delta_{T}^{2}T^{1/2},\qquad\left\|\hat{\mathbf{\eta}}_{0}-\mathbf{\eta}_{0}\right\|_{\infty}\leq\delta_{T}T^{-1/4},\] \[\left\|\mathbf{Z}_{-p,\backslash\{j\}}\left(\hat{\mathbf{\eta}}_{j}-\mathbf{\eta}_{j}\right)\right\|_{2}^{2}\leq\delta_{T}^{2}T^{1/2},\quad j=1,\ldots,p,\] 4. **Sparsity:** Let \(s=|\mathcal{S}|\) and \(\hat{s}=|\hat{\mathcal{S}}|\) denote the number of active variables in population and sample, respectively. Then with probability at least \(1-\Delta_{T}\), we have that \(\max(s,\hat{s})\leq\bar{s}_{T}\) for the deterministic sequence \(\bar{s}_{T}\). 5. **Restricted Sparse Eigenvalues:** For any \(\mathbf{\eta}\in\mathbb{R}^{Kp}\) with \(\left\|\mathbf{\eta}\right\|_{0}\leq\bar{s}_{T}\), we have with probability at least \(1-\Delta_{T}\) \[\left\|\mathbf{\eta}\right\|_{1}\leq\sqrt{\bar{s}_{T}}\Big{\|}\mathbf{Z}_{-p}\mathbf{G}_{T,\mathbf{Z}}^{-1}\mathbf{\eta}\Big{\|}_{2}/\kappa_{T,\min},\] for the deterministic sequence \(\kappa_{T,\min}>0\). 6. **Rate Conditions:** the deterministic sequences bounding the sparsity (\(\bar{s}_{T}\)), the thickness of the empirical process tails (\(\bar{\gamma}_{T}\)) and the minimum eigenvalue (\(\kappa_{T,\min}\)) must satisfy \[T\frac{\bar{s}_{T}\bar{\gamma}_{T}}{\kappa_{T,\min}}\leq\delta_{T}. \tag{12}\] Condition (a) bounds the empirical process with high probability. Such deviation bounds for non-stationary time series can be found in, among others, Smeekes and Wijler (2021, Lemma A.3), Mei and Shi (2022, Proposition 1) and Wijler (2022, Lemma 2). Condition (b) is standard and assumes compactness of the parameter space of the vector \(\mathbf{\beta}\), which in turn implies boundedness. Condition (c) supposes consistency of the first-stage estimator.
Consistency is thereby established using an error bound or oracle inequality. Such an inequality gives the rate of convergence of the estimator as a function of the tuning parameter \(\lambda\), the deviation bound, the cardinality \(s\) of the active set and the restricted (sparse) eigenvalue. As such, this condition is intimately related to (a), (d) and (e). For stationary time series many results exist under a variety of settings; see e.g. Masini et al. (2022, Theorem 1) for VAR models or Adamek et al. (2022, Corollary 1) for general time series models. Nonstationary time series are treated in Smeekes and Wijler (2021, Theorem 2), Mei and Shi (2022, Theorems 1 and 2) and Wijler (2022, Theorem 2). Condition (d) requires sparsity of the population parameters and the estimator. Sparsity of the first-stage estimator is needed in our framework as we perform OLS on the selected variables from the first-stage regressions. If the selected variables are not sparse enough, too many variables will be selected for OLS to be feasible. For simplicity we work under the assumption of exact sparsity; however, at the expense of more complicated notation and proofs, this can be relaxed to approximate sparsity following Belloni et al. (2014), where it is assumed that the exact sparse model is a (good) approximation to the true DGP, or weak sparsity following Adamek et al. (2022), where many non-zero but small coefficients are allowed. Condition (e) requires that for sufficiently sparse vectors, the eigenvalues of the subset of the Gram matrix corresponding to their nonzero support do not decrease to zero too fast. While this is a standard (and easily verifiable) assumption for stationary time series (see e.g., Adamek et al., 2022, Lemmas A.3 and A.5 and Masini et al., 2022, Lemma 3 and Proposition 2), time series with unit roots require more care. Indeed, Smeekes and Wijler (2021, Theorem B.2), Mei and Shi (2022, Lemma 2) and Wijler (2022, Theorem 1) show for unit root regressors that the standard rate of \(T^{-2}\) coming from the \(\mathbf{D}_{T}\) scaling still results in at least a factor of \(\bar{s}_{T}^{-1}\) in \(\kappa_{T,\min}\). As we allow \(\kappa_{T,\min}\) to depend on the sample size, this can be accommodated, as long as the interplay of the rates in condition (f) is satisfied. This condition links the sparsity \(\bar{s}_{T}\), the tail thickness of the empirical process \(\bar{\gamma}_{T}\) and the minimum eigenvalue \(\kappa_{T,\min}\). The exact rates allowed for are a compromise between the number of existing moments, the strength of the dependence, the growth rate of the dimension \(K\) and the sparsity of the parameters.5 Footnote 5: For an illustration of the complexities of these relations, we refer to Figure C.1 in Adamek et al. (2022), which visualises the feasible combinations with regard to lasso consistency. We can now state our first, and main, theoretical result. This establishes that the first-stage regressions allow for sufficiently accurate variable selection and estimation such that the second stage is not affected by the variable selection performed. **Theorem 1**.: Let \(\hat{\mathbf{\beta}}_{p}\) denote the OLS estimator of the first \(p\) elements of \(\mathbf{\beta}_{+}\) (equal to \(\mathbf{\beta}\)) in the feasible regression (10), and let \(\tilde{\mathbf{\beta}}_{p}\) denote the corresponding estimator in the infeasible regression (8).
Then uniformly over a parameter space \(\mathcal{B}\) for which Assumptions 1 and 2 hold for all elements in \(\mathcal{B}\), we have that \[\sqrt{T}(\hat{\mathbf{\beta}}_{p}-\mathbf{\beta})=\sqrt{T}(\tilde{\mathbf{\beta}}_{p}-\mathbf{\beta})+o_{p}(1).\] Theorem 1 establishes the asymptotic equivalence of the estimator in the feasible second-stage regression (10), based on the estimated active set \(\hat{\mathcal{S}}\), and the estimator in the infeasible second-stage regression (8), based on the true, unobserved, active set \(\mathcal{S}\). Note that the equivalence is only established for the estimators of the coefficients of the lags of the Granger causing variable (excluding the augmented lags); this is, however, all that is needed to establish the validity of the Granger causality tests. Before stating this second result, we need another assumption on the asymptotic behaviour of the relevant variables. For a finite number of relevant variables, this assumption follows directly from well-known results in the unit root and cointegration literature; see Appendix B. We state the assumption more generally, to also accommodate an increasing number of relevant variables. **Assumption 3**.: Let \(\{\mathbf{s}_{t}\}_{t=1}^{T}\) denote an \(n\)-dimensional process, where \(n=n_{T}\) may increase with \(T\). Let \(n_{0}\), \(n_{1}\) and \(n_{2}\) denote integers such that \(n=n_{0}+n_{1}+n_{2}\), partition \(\mathbf{s}_{t}=(\mathbf{s}_{0,t},\mathbf{s}_{1,t},\mathbf{s}_{2,t})^{\prime}\) conformably with \((n_{0},n_{1},n_{2})\), and assume that \(\mathbb{E}[\mathbf{s}_{t}u_{t}]=\mathbf{0}\) for all \(t=1,\ldots,T\). Define \[\mathbf{D}_{T}:=\begin{bmatrix}T^{1/2}\mathbf{I}_{n_{0}}&\mathbf{0}&\mathbf{0}\\ \mathbf{0}&T\mathbf{I}_{n_{1}}&\mathbf{0}\\ \mathbf{0}&\mathbf{0}&T^{2}\mathbf{I}_{n_{2}}\end{bmatrix}=:\begin{bmatrix}\mathbf{D}_{T,0}&\mathbf{0}&\mathbf{0}\\ \mathbf{0}&\mathbf{D}_{T,1}&\mathbf{0}\\ \mathbf{0}&\mathbf{0}&\mathbf{D}_{T,2}\end{bmatrix},\] and let \(\Delta_{T}\) denote a deterministic sequence such that \(\Delta_{T}\to 0\) as \(T\to\infty\). Then assume that with probability \(1-\Delta_{T}\) the following statements hold jointly: 1. \(\left\|\mathbf{D}_{T,i}^{-1}\sum_{t=1}^{T}\mathbf{s}_{i,t}\mathbf{s}_{j,t}^{\prime}\mathbf{D}_{T,j}^{-1}\right\|_{2}\leq\phi_{T,ij}\) for \(i,j=0,1,2\) and \(i<j\); 2. \(\lambda_{\min}\left(\mathbf{D}_{T,0}^{-1}\sum_{t=1}^{T}\mathbf{s}_{0,t}\mathbf{s}_{0,t}^{\prime}\mathbf{D}_{T,0}^{-1}\right)\geq\kappa_{T,0}\) and \(\lambda_{\min}\left(\mathbf{D}_{T,-0}^{-1}\sum_{t=1}^{T}\mathbf{s}_{-0,t}\mathbf{s}_{-0,t}^{\prime}\mathbf{D}_{T,-0}^{-1}\right)\geq\kappa_{T,12}\), where \(\mathbf{s}_{-0,t}=(\mathbf{s}_{1,t}^{\prime},\mathbf{s}_{2,t}^{\prime})^{\prime}\) and \(\mathbf{D}_{T,-0}=\text{diag}(\mathbf{D}_{T,1},\mathbf{D}_{T,2})\); 3. \(\left\|\mathbf{D}_{T,i}^{-1}\sum_{t=1}^{T}\mathbf{s}_{i,t}u_{t}\right\|_{2}\leq\gamma_{T,u,i}\) for \(i=0,1,2\); 4. \(\left\|T^{-1}\sum_{t=1}^{T}\left[\mathbf{s}_{0,t}\mathbf{s}_{0,t}^{\prime}-\mathbb{E}\left(\mathbf{s}_{0,t}\mathbf{s}_{0,t}^{\prime}\right)\right]\right\|_{2}\leq\delta_{T}\); where \(\delta_{T}\to 0\) as \(T\to\infty\) and the following holds regarding the rates: 1. \(\kappa_{T,12}-(\phi_{T,01}^{2}+\phi_{T,02}^{2})/\kappa_{T,0}\geq\kappa_{T,-0}\); 2. \(\phi_{T,01}+\phi_{T,02}+2(\phi_{T,01}^{2}+\phi_{T,02}^{2})/\kappa_{T,0}\leq\phi_{T,-0}\); 3. \(\gamma_{T,u,1}+\gamma_{T,u,2}+(\phi_{T,01}+\phi_{T,02})\gamma_{T,u,0}/\kappa_{T,0}\leq\gamma_{T,u}\); 4.
\(\phi_{T,-0}(\phi_{T,-0}+\gamma_{T,u})/\kappa_{T,-0}\leq\delta_{T}\). In addition, let \(\mathbf{R}_{m}\) denote a deterministic \(m\times n_{0}\) matrix, where \(m<\infty\) does not depend on \(T\). Then we have that \[T^{-1/2}\sum_{t=1}^{T}\mathbf{R}_{m}\mathbf{s}_{0,t}u_{t}\xrightarrow{d}N(\mathbf{0},\mathbf{\Omega}),\] where \(\mathbf{\Omega}=\lim_{T\to\infty}T^{-1}\sum_{t=1}^{T}\mathbf{R}_{m}\mathbb{E}(u_{t}^{2}\mathbf{s}_{0,t}\mathbf{s}_{0,t}^{\prime})\mathbf{R}_{m}^{\prime}=\sigma_{u}^{2}\mathbf{R}_{m}\mathbb{E}(\mathbf{s}_{0,t}\mathbf{s}_{0,t}^{\prime})\mathbf{R}_{m}^{\prime}\). We verify this assumption in Appendix B for a finite number of relevant variables (measured through \(s\)). One way to extend the result to a growing number of variables would be through a Gaussian approximation theorem; such an approach is considered in, e.g., Zhang et al. (2019, Remark 3.4) and Smeekes and Wijler (2021, Theorem B.3). This is a relatively crude approach that puts significant limitations on the allowed growth rate of \(s\). However, as the growth rate of \(s\) is anyway only allowed to be limited compared to the sample size \(T\), such an approach would be feasible here without imposing strong additional restrictions (unlike in the general case, such as for establishing a result like the deviation bound 2(a), where it would significantly restrict the allowed growth rate of \(K\)). With this assumption in place we can now state our second theoretical result, which establishes the limit distribution of the Granger causality tests. **Theorem 2**.: Let Wald and LM be as defined in Algorithm 1. Assume that Assumption 3 holds for \(\mathbf{s}_{t}=\mathbf{Q}_{\mathcal{S}}\mathbf{v}_{+,\mathcal{S},t}\), where \(\mathbf{v}_{+,\mathcal{S},t}=(x_{t-p-1},\ldots,x_{t-p-d},\mathbf{v}_{\mathcal{S},t})\). Then, uniformly over a parameter space \(\mathcal{B}\) on which Assumptions 1 and 2 hold for all elements in \(\mathcal{B}\), we have that \[\text{LM},\text{Wald}\xrightarrow{d}\chi_{p}^{2},\qquad\text{as }T\to\infty,\] under the null hypothesis that \(\mathbf{\beta}=\mathbf{0}\). The proofs of Theorems 1 and 2 are reported in Appendix A.2. **Remark 9**.: Though we focus on homoskedastic error terms for simplicity, heteroskedasticity-robust versions of the test can easily be constructed and shown to be valid, as this would only affect the low-dimensional part of our results in Theorem 2. Standard techniques can therefore be used for constructing heteroskedasticity-robust tests: for the Wald test, the OLS standard errors can be replaced by Eicker-White standard errors, while the LM test can be modified as in Wooldridge (1987). We refer to HMS, Algorithm 2, for a full treatment. ## 4 Monte-Carlo Simulations We now evaluate the finite-sample performance of our proposed PDS-LA-LM Granger causality test. We consider the following Data Generating Processes (DGPs) in first differences, inspired by Kock and Callot (2015): \[\text{DGP1:}\quad\Delta\mathbf{z}_{t}=\begin{bmatrix}0.5&0&\dots&0\\ 0&0.5&\dots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\dots&0.5\end{bmatrix}\Delta\mathbf{z}_{t-1}+\mathbf{u}_{t},\] \[\text{DGP2:}\quad\Delta\mathbf{z}_{t}=\mathbf{\Phi}\,\Delta\mathbf{z}_{t-1}+\mathbf{u}_{t},\qquad\Phi_{i,j}=(-1)^{|i-j|}a^{|i-j|+1},\] with \(a=0.3\). The diagonal VAR(1) for DGP1 allows the sparsity assumption to be met.
Instead, for DGP2 the coefficients decrease at an exponential pace moving away from the main diagonal; hence, although the farthest coefficients are small, the exact sparsity assumption is not met. We report simulations for Granger causality tests from the first variable to the second variable. Therefore, also when cumulated to nonstationary levels, DGP1 automatically satisfies the null of no Granger causality from unit 2 to 1; DGP2, however, does not. Therefore, for the power analysis of both DGP1 and DGP2 we set the coefficient in position \((2,1)\) equal to \(0.2\); for the size analysis of DGP2 we set the same coefficient equal to zero. Simulations are reported for different types of covariance matrices of the error terms. We employ a Toeplitz covariance matrix \(\Sigma_{i,j}=\rho^{|i-j|}\), where \((i,j)\) refers to row \(i\), column \(j\) of the matrix \(\Sigma_{u}\). We cover two correlation scenarios: \(\rho=(0,0.7)\). The lag length is fixed to \(p=2\), while we employ a double (\(d=2\)) augmentation of the dependent and the Granger causing variable. Having \(p\geq 2\) guarantees that no spurious results occur in the selection steps. Following the recommendation in HMS, we employ the BIC to select the tuning parameter \(\lambda\) for the lasso. Table 1 reports the size and power of the PDS-LA-LM test over 1000 replications. We use different combinations of the time series length \(T=(50,100,200,500,1000)\) and the number of variables in the system \(K=(10,20,50,100)\). All rejection frequencies are reported using a burn-in period of fifty observations. Our PDS-LA-LM test shows good performance in terms of size and (unadjusted) power for all DGPs considered. The setting of no correlation is handled remarkably well for all DGPs, and only moderate size distortion is visible in large systems for small samples. Whenever high correlation of the errors is present, sizes are still in the vicinity of \(5\%\) for DGP1, where the sparsity assumption is met. For DGP2, for which the sparsity assumption is not met, we notice that some residual size distortion remains visible even in large systems. The power of the test, however, is always increasing with the sample size \(T\) for all the considered cases. **Remark 10**.: As mentioned in Remark 7, in order to obtain the results for the size and power when \(T\leq Kp\), we need to impose a lower bound on the lasso penalty \(\lambda\) which guarantees that at most \(c\,T\) variables are selected in each relevant equation of the VAR, for some \(0<c<1\). The bound should be set as strictly as the system requires, and often there is no universal constant \(c\) that works in all settings; this choice therefore needs to be adaptive. For instance, if the lag length is \(p=2\), this implies 3 selection steps plus \(d\) augmented lags of the Granger causing variable to be added. In some cases this might lead to too many variables being selected in order to perform least squares in the second step. In these cases we tighten the bound using either \(c=0.33\) or \(c=0.25\). \begin{table} \begin{tabular}{c c c c c c c c c c c c c} \hline \hline & & \(T\) & 50 & 100 & 200 & 500 & 1000 & 50 & 100 & 200 & 500 & 1000 \\ \cline{3-13} DGP & \(\rho\) & \(K\) & & & Size & & & & & Power & & \\ \hline 1 & 0 & 10 & 10.4 & 7.2 & 5.5 & 4.2 & 5.5 & 23.3 & 41.0 & 74.2 & 99.4 & 100 \\ & & 20 & 12.5 & 9.5 & 7.3 & 6.3 & 5.9 & 19.7 & 40.1 & 74.6 & 99.5 & 100 \\ & & 50 & 10.7 & 7.1 & 7.2 & 5.4 & 4.7 & 17.7 & 38.9 & 66.5 & 99.2 & 100 \\ & & 100 & 9.9 & 8.4 & 6.8 & 6.1 & 5.3 & 16.0 & 34.1 & 68.1 & 98.5 & 100 \\ \hline 2 & & 10 & 9.0 & 6.2 & 4.7 & 4.2 & 5.4 & 19.7 & 36.0 & 69.3 & 99.2 & 100 \\ & & 20 & 13.3 & 8.1 & 5.6 & 5.5 & 6.1 & 17.3 & 36.9 & 69.4 & 98.5 & 100 \\ & & 50 & 8.6 & 7.3 & 6.9 & 5.9 & 4.7 & 18.2 & 34.4 & 63.9 & 98.8 & 100 \\ & & 100 & 8.6 & 8.2 & 7.5 & 6.4 & 5.4 & 13.9 & 31.4 & 62.4 & 98.0 & 100 \\ \hline 1 &.7 & 10 & 13.6 & 9.4 & 6.2 & 5.0 & 5.5 & 20.9 & 20.9 & 44.1 & 87.5 & 99.3 \\ & & 20 & 12.8 & 11.4 & 9.1 & 6.2 & 5.8 & 17.8 & 23.7 & 41.2 & 84.9 & 99.4 \\ & & 50 & 11.7 & 9.6 & 10.3 & 7.2 & 5.2 & 15.5 & 22.6 & 39.8 & 80.5 & 99.2 \\ & & 100 & 12.6 & 12.4 & 7.2 & 10.1 & 6.3 & 13.3 & 22.2 & 37.6 & 72.7 & 98.7 \\ \hline 2 & & 10 & 12.1 & 9.0 & 6.0 & 4.8 & 6.1 & 17.3 & 19.7 & 40.1 & 84.1 & 98.3 \\ & & 20 & 12.4 & 8.8 & 8.0 & 6.4 & 5.9 & 15.9 & 19.9 & 39.0 & 81.6 & 98.8 \\ & & 50 & 11.7 & 9.6 & 9.3 & 7.4 & 5.6 & 14.6 & 19.4 & 34.0 & 74.9 & 98.2 \\ & & 100 & 11.1 & 10.1 & 9.1 & 9.5 & 5.7 & 12.8 & 21.0 & 33.3 & 66.0 & 97.9 \\ \hline \hline \end{tabular} Notes: Size and power for the different DGPs are reported over 1000 replications. \(T=(50,100,200,500,1000)\) is the time series length and \(K=(10,20,50,100)\) the number of variables in the system; the lag length is fixed to \(p=2\) and BIC is used to select the tuning parameter for the lasso. \(\rho\) indicates the correlation employed to simulate the time series with the Toeplitz covariance matrix. \end{table} Table 1: Simulation results for the PDS-LA-LM Granger causality test
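For reference, a short numpy sketch of this simulation design as we read it (function names are illustrative, not the authors' simulation code): DGP2's coefficient matrix is built entrywise, the errors use the Toeplitz covariance, and the nonstationary levels are obtained by cumulating the process simulated in first differences.

```python
# Sketch of the Monte-Carlo design: DGP2 coefficient matrix, Toeplitz
# errors, and cumulation to I(1) levels; illustrative only.
import numpy as np

def dgp2_matrix(K, a=0.3):
    i, j = np.indices((K, K))
    return ((-1.0) ** np.abs(i - j)) * a ** (np.abs(i - j) + 1)

def simulate_levels(K, T, rho=0.0, burn=50, seed=0, power=False):
    rng = np.random.default_rng(seed)
    Phi = dgp2_matrix(K)
    Phi[1, 0] = 0.2 if power else 0.0   # entry (2,1): 0.2 for power, 0 for size
    idx = np.arange(K)
    Sigma = rho ** np.abs(np.subtract.outer(idx, idx))  # Toeplitz covariance
    dz = np.zeros((T + burn, K))
    for t in range(1, T + burn):
        dz[t] = Phi @ dz[t - 1] + rng.multivariate_normal(np.zeros(K), Sigma)
    return np.cumsum(dz[burn:], axis=0)   # z_t in levels, each series I(1)
```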
## 5 Lag Length Selection Up until this point, we considered the lag length \(p\) as given. In reality, this is typically not the case. In this section we propose a simple, data-driven method to estimate \(p\). Standard techniques for tuning the lag length, such as information criteria or sequential testing, fail when applied directly to the high-dimensional VAR. The standard approach with penalized regression methods would be to set \(p\) as a generous upper bound, and let the method decide which lags are needed. This, however, creates complications for our approach, as a large \(p\) means many first-stage regressions need to be performed, with the potential of selecting too many variables. On top of that, we need to augment the second stage with \(d\) additional lags. It is therefore important for our approach to have a reasonable data-driven selection of the lag length. We are essentially looking for an 'informative upper bound' on the lag length; while mild over-specification is not a problem, under-specification breaks the lag augmentation and must be avoided. We therefore base selection on univariate autoregressions for all time series in our dataset, applying an information criterion to those. This approach is motivated through the final equations representation of a VAR model, which implies that a VAR\((p)\) with \(K\) variables generates individual ARMA models with maximal orders \((Kp,(K-1)p)\). As such, lag length selection based on individual autoregressions is likely to yield lag lengths larger than \(p\) (Zellner and Palm, 1974; Cubadda et al., 2009). On the other hand, Cubadda et al. (2009) observed that individual autoregressions often need much smaller lag lengths than the large orders implied by the final equations representation, thus making it plausible that the orders found are not overly conservative. We implement this method as follows.
First, we estimate the autoregressions \[z_{i,t}=\sum_{j=1}^{p}\gamma_{j,p}z_{i,t-j}+\varepsilon_{i,p,t},\qquad i=1,\ldots,K,\] by OLS. Then, letting \(\hat{\omega}_{i}=\frac{1}{T}\sum_{t=1}^{T}\hat{\varepsilon}_{i,p,t}^{2}\), we set \(\hat{\mathbf{\Omega}}_{p}=\text{diag}(\hat{\omega}_{1},\ldots,\hat{\omega}_{K})\) and choose the \(p\) that minimizes \[\text{IC}^{*}(p)=\ln\left(\det\hat{\mathbf{\Omega}}_{p}\right)+C_{T}\frac{pK}{T}=\sum_{i=1}^{K}\log\hat{\omega}_{i}+C_{T}\frac{pK}{T},\] where \(C_{T}\) takes the standard values for well-known information criteria, e.g. \(C_{T}=\ln T\) yields BIC\({}^{*}\), while \(C_{T}=2\) yields AIC\({}^{*}\). Note that next to the dimensionality reduction offered by running individual autoregressions instead of a VAR, further reduction is achieved by letting \(\hat{\mathbf{\Omega}}_{p}\) be diagonal. We investigate the performance of this method in a small simulation study. We simulate VAR models using DGP2 with \(\rho=0\) as in Section 4 for the combinations of \(K=(10,20,50,100)\) and \(T=(50,100,200,500,1000)\).6 Knowing the true value of \(p=2\), we apply the model selection procedure on a grid of 10 values for \(p\). Footnote 6: We investigated DGP1 and \(\rho=0.7\) as well. The method showed similar or better performance there. Results are available on request. We show the results in Figure 1 for AIC\({}^{*}\) and BIC\({}^{*}\). Both criteria almost never underestimate the lag length. Moreover, BIC\({}^{*}\) is quite accurate, overestimating the lag length by more than one only when \(T=1000\), where larger values of \(p\) are not problematic. AIC\({}^{*}\) is more liberal but still performs well in smaller samples, where it matters most. This simple way of selecting the lag length thus works remarkably well in finite samples, and is therefore recommended in combination with the PDS-LA method. Figure 1: Lag length selection frequencies for AIC\({}^{*}\) and BIC\({}^{*}\)
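A compact numpy sketch of this selection rule (the function name and grid handling are our illustrative choices, not the authors' code):

```python
# Sketch of the IC*-based lag-length selection from univariate ARs;
# illustrative only. BIC* uses C_T = log(T), AIC* uses C_T = 2.
import numpy as np

def ic_star_lag_length(Z, p_max, C_T):
    """Pick the p in 1..p_max minimizing IC*(p); Z is a (T, K) array."""
    T, K = Z.shape
    best_p, best_ic = 1, np.inf
    for p in range(1, p_max + 1):
        log_omega_sum = 0.0
        for i in range(K):
            y = Z[p:, i]
            X = np.column_stack([Z[p - j:T - j, i] for j in range(1, p + 1)])
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            resid = y - X @ beta
            log_omega_sum += np.log(np.mean(resid ** 2))
        ic = log_omega_sum + C_T * p * K / T   # IC*(p) as in the text
        if ic < best_ic:
            best_p, best_ic = p, ic
    return best_p
```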
## 6 Empirical Application: Causes and Effects of Economic Uncertainty The role of economic uncertainty is a hot topic in macroeconomics. In particular, whether economic uncertainty should be seen as an exogenous shock, causing economic conditions such as business cycle fluctuations, or as an endogenous response to economic fundamentals, is much debated (Ludvigson et al., 2021). A complicating issue is that there is no universal definition or measurement of uncertainty. In this application we address both issues: we investigate whether uncertainty (Granger) causes or is caused by other variables in the economy, and whether different measurements contain the same information. Since the seminal paper of Bloom (2009), economic uncertainty has been at the forefront of the macroeconomic debate. Bloom (2009) constructed a measure of uncertainty from the Chicago Board Options Exchange (CBOE) S&P 100 Volatility Index (VXO), arguing that stock market volatility expectations are a good proxy for overall economic uncertainty. The FRED-MD dataset of McCracken and Ng (2016) introduced the series VXOCLSx in its September 2015 vintage. This series is constructed by splicing a synthetic historical VXO series obtained from Nicholas Bloom's website and VXOCLS from FRED.7 Footnote 7: In its December 2021 vintage, FRED-MD removed VXOCLSx and replaced it with VIXCLSx, as the former has been discontinued at the source. VIX is a similar measure of implied volatility based on the S&P 500. While our main analysis focuses on a pre-Covid vintage incorporating VXOCLSx, in Appendix C we extend the analysis to a recent post-Covid vintage with VIXCLSx included. An alternative, very popular measure of uncertainty was proposed by Baker et al. (2016), who constructed the Economic Policy Uncertainty (EPU) index based on newspaper coverage frequency. That is, it accounts for the frequency with which certain strings of keywords related to economic uncertainty appear in the 10 leading U.S. newspapers. FRED issues a Global Economic Policy Uncertainty Index (GEPUCURRENT), which is a GDP-weighted average of national EPU indices for 20 countries. However, FRED-MD does not contain any EPU uncertainty index. This naturally raises the question of whether EPU can serve as a better measure of economic uncertainty within FRED-MD, or whether VXOCLSx is "good enough". If EPU and VXOCLSx capture the same information, one would expect that adding EPU to FRED-MD would yield little predictive power for EPU once VXOCLSx is accounted for in the information set. Our PDS-LA method can thus be used in this context with a twofold purpose. First, by estimating Granger causal relationships between FRED macroeconomic variables and the uncertainty index and counting the number of significant outgoing (index\(\rightarrow\)FRED) and incoming (FRED\(\rightarrow\)index) links, we contribute to the understanding of whether such indices are, respectively, exogenous sources of business cycle fluctuations (index\(\rightarrow\)FRED) or rather endogenous responses to economic fundamentals (FRED\(\rightarrow\)index). Second, by investigating the links between a second index (EPU) and the FRED variables _conditional_ on VXOCLSx in the dataset, we can investigate whether EPU measures information not contained in VXOCLSx. In our analysis we pay particular attention to the effect of how potential nonstationarity in the data is treated. The FRED-MD dataset comes with a detailed appendix (McCracken and Ng, 2016) and Matlab/R routines which allow one not only to clean the data from missing values and outliers, but also to take the appropriate transformations to render all the time series stationary. This presents the practitioner with an easy and popular solution to deal with non-stationarity; however, there are two potential issues with this approach. First, not every time series can be clearly categorized regarding the order of integration, and there are several variables for which a case can be made for two different orders. Indeed, as illustrated by Smeekes and Wijler (2020) and Smeekes and Wilms (2020), unit root tests may give ambiguous results and, depending on the type of test employed, a different classification may arise. Second, even with a correct classification, differenced time series lose long-run information on any cointegrating relations, which may change the Granger causal relations. Our method provides an alternative way of performing the tests directly on the levels of the data, thereby avoiding the issue of transformations altogether.8 Footnote 8: To run these analyses we used the authors' R package _HDGCvar_, available at [https://github.com/Marga8/HDGCvar](https://github.com/Marga8/HDGCvar) For the Economic Policy Uncertainty index we gather the data directly from Baker et al. (2016).9 Specifically, we use the three-component index which combines (i) news coverage about policy-related economic uncertainty, (ii) tax code expiration data and (iii) economic forecaster disagreement.10 For details on how the index is computed we refer to the given reference. 
As the EPU index is only available starting January 1985, we accordingly use the FRED-MD data from January 1985. We split the analysis considering two different endpoints of the sample. First, we consider the series until November 2019, thus intentionally excluding from the sample both the Covid pandemic and the war in Ukraine, but including the 2008 financial crisis. This makes for a total of 117 variables (when EPU is included) and 419 observations. In Appendix C we extend the analysis to a recent vintage which includes the aforementioned crises and spans until September 2022. First, we do not yet add US-EPU to the information set; in Figures 2 and 3 we instead investigate the predictive relations to and from VXOCLSx with all the macroeconomic series of FRED-MD using PDS-LA-LM. At significance level \(\alpha=0.01\), eight macroeconomic series are found to Granger cause VXOCLSx, while in the other direction VXOCLSx Granger-causes 41 macroeconomic series. Before adding US-EPU to the dataset and investigating the changes, we repeat the analysis, this time applying the recommended stationary transformations (ST) from FRED-MD and using the PDS-LM test of HMS instead of PDS-LA-LM to investigate the same relations. Figure 4: PDS-LM, Causes of VXOCLSx, \(\alpha=0.01\) Figure 5: PDS-LM, Caused by VXOCLSx, \(\alpha=0.01\) The difference between Figures 2–3 and Figures 4–5 is quite striking, especially for the case VXOCLSx\(\rightarrow\)FRED. In fact, the number of significant connections drops dramatically from 39 to just two, and some differences can be found in the connections for FRED\(\rightarrow\)VXOCLSx too. To zoom in on the connections and the difference between the lag-augmented (LA) and stationary-transformed (ST) cases, in Figure 6 we loosen the \(p\)-value threshold up to 10% and group the variables by their FRED-MD sector classification. The bars now indicate the \(p\)-values of the test, with a full bar equaling a \(p\)-value of 0, and an empty bar indicating a \(p\)-value above 10%. The results again suggest how stationary-transforming variables can have a profound impact on the inference performed, leading to very different pictures of Granger causality. It is particularly noticeable that sectors thought to be highly affected by uncertainty, such as the Output sector, barely have any significant connections left when testing for Granger causality from VXOCLSx. Similarly, the absence of causality from the Stocks category to VXOCLSx is surprising, given the latter's construction. Next we add US-EPU to the dataset and repeat the analysis, where now the focus is on the connections from and to US-EPU with all the macroeconomic series of FRED-MD using PDS-LA-LM. Importantly, the results are now conditioned on VXOCLSx, which remains in the information set. In Figure 7 we find only five connections when it comes to Granger causality relations from FRED-MD series to US-EPU. Vice versa, in Figure 8 there are 13 causal paths from US-EPU to the other macroeconomic series. The results are qualitatively similar to FRED\(\leftrightarrow\)VXOCLSx in Figures 2 and 3, though less pronounced. Both US-EPU and VXOCLSx are Granger caused by the S&P500 and S&P:indust, but the remaining connections are not equal. Importantly, VXOCLSx is found to be Granger causal for US-EPU. 
Given that the network US-EPU\(\rightarrow\)FRED is conditional on VXOCLSx, the non-overlapping connections found in Figure 8 that do not occur in Figure 3 are new connections that could make US-EPU an interesting uncertainty index to add to FRED-MD. A cross-check reveals that only UMCSENTx is new; all other connections were already uncovered using VXOCLSx. This would seem to give credit to FRED-MD in using VXOCLSx as its main index of economic uncertainty, without the need to further include US-EPU. On the other hand, finding the connections from US-EPU despite the presence of VXOCLSx in the information set indicates that US-EPU may still add predictive information about the variables predictable by VXOCLSx, suggesting that it could still be worth including in an empirical analysis. As before, in Figures 9 and 10 we repeat the analysis; this time, however, we apply the recommended stationary transformations (ST) from FRED-MD and use the PDS-LM test of Hecq et al. (2021) to investigate the same relations. Again, a decrease in the number of connections in both directions is evident from the network results. In Figure 11 we again zoom in on the \(p\)-values and group the results by sector. The red bar refers to the VXOCLSx series, which is found to be Granger causal for EPU in the analysis in levels. Interestingly, we find the opposite result in the stationarity-transformed setting. Again, the differences in the relations found indicate that transforming variables to stationarity may delete useful information about predictability. Our analysis suggests that uncertainty accompanying a wide variety of global events, if measured in terms of expected volatility on the financial market, is primarily a _cause_ rather than an effect of variations in economic activity. This is in line with the recent work of Ludvigson et al. (2021), who find uncertainty about financial markets to be a source of output fluctuations. In addition, while the different measures of uncertainty are clearly related, it may still be beneficial in empirical applications to consider multiple measurements. ## 7 Conclusion We propose an inferential procedure for Granger causality testing in high-dimensional non-stationary VAR models which avoids any knowledge of, or pre-tests for, integration or cointegration in the data. To do so, we adapt the Toda and Yamamoto (1995) approach of augmenting the lag length of the system, and we show that by reducing this augmentation to only the variables of interest for the test, we are able to minimize parameter proliferation in high dimensions. We develop a post-double selection LM test, based on penalized least squares estimators, to partial out those variables having no influence on the variables tested on, while safeguarding against omitted variable bias using a double-selection mechanism. We prove that the augmentation of the Granger-causing variables has no effect on the null hypothesis tested, yet provides an automatic differencing mechanism that lets the OLS estimator attain standard asymptotic results. Also, we extend the relevant assumptions needed for the post-double selection estimator to work in the context of potential unit roots. We derive the asymptotics of the post-selection augmented estimator, showing it attains standard asymptotic normality, hence allowing for a valid test with a standard \(\chi^{2}\) limiting distribution. Our proposed test shows good finite sample properties over different DGPs. 
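To make the procedure summarized above concrete, the following schematic Python sketch illustrates the testing steps. It is our own simplified illustration, not the implementation used in the paper (the authors' R package _HDGCvar_ is linked above): we tune the lasso by cross-validation rather than BIC, penalize all controls including the own lags of the dependent variable, and use a homoskedastic Wald form in place of the LM statistic.

```python
import numpy as np
from scipy import stats
from sklearn.linear_model import LassoCV

def lagmat(z, ks, m):
    """Columns z_{t-k} for k in ks, rows t = m, ..., len(z)-1."""
    return np.column_stack([z[m - k:len(z) - k] for k in ks])

def pds_la_granger(y, x, others, p=2, d=2, seed=0):
    """Schematic PDS-LA Granger test of H0: lags 1..p of x do not
    Granger-cause y, conditional on the series in `others`.
    A sketch under the simplifications stated in the text above."""
    m = p + d                                  # lag length + augmentation
    D = lagmat(x, range(1, p + 1), m)          # tested lags of x
    D_aug = lagmat(x, range(p + 1, m + 1), m)  # augmented lags, not tested
    ctrl = [lagmat(s, range(1, m + 1), m) for s in [y] + list(others)]
    W = np.column_stack(ctrl)                  # high-dimensional controls
    y_t = y[m:]
    # double selection: lasso of y on W, then of each tested lag on W
    sel = set(np.flatnonzero(LassoCV(cv=5, random_state=seed).fit(W, y_t).coef_))
    for col in D.T:
        sel |= set(np.flatnonzero(LassoCV(cv=5, random_state=seed).fit(W, col).coef_))
    # low-dimensional second stage by OLS, augmented with the extra d lags
    X = np.column_stack([D, D_aug, W[:, sorted(sel)]]) if sel \
        else np.column_stack([D, D_aug])
    beta, *_ = np.linalg.lstsq(X, y_t, rcond=None)
    resid = y_t - X @ beta
    s2 = resid @ resid / (len(y_t) - X.shape[1])
    V = s2 * np.linalg.inv(X.T @ X)            # classical covariance
    wald = beta[:p] @ np.linalg.solve(V[:p, :p], beta[:p])
    return wald, stats.chi2.sf(wald, df=p)     # chi^2(p) limiting law
```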
We also give practical recommendations on both the optimal augmentation \(d\) and on how to estimate the lag length \(p\). We argue that \(d=2\) lags is the optimal augmentation in order to take into account possible I(2) as well as near-I(2) variables that could compromise the size of the test. In order to estimate the lag length \(p\), we propose to reduce the original VAR to a diagonal VAR. This reduces the high-dimensional system to a sequence of low-dimensional autoregressions, to which an information criterion is applied in order to select the correct lag length. Finally, we investigate how our test performs in practice by analysing the causes and effects of economic uncertainty. Using the FRED-MD dataset directly, without needing to apply the recommended stationary transformations, we compare two different ways of measuring economic uncertainty (VXO and EPU) and their relationship with all macroeconomic variables within the dataset. We also compare the analysis with the stationary transformation case, highlighting how such transformations can profoundly impact the results and give a very different picture of the causal structure. Our results suggest that uncertainty is primarily a cause rather than an effect of variations in economic activity.
2303.09341
Accreting Primordial Black Holes as Dark Matter Constituents
We show how magnetic accretion of positronium (electron-positron) plasma by primordial black holes might significantly contribute to the mass of dark matter in the present Universe. Assuming that background gamma radiation is primordial black hole Hawking radiation rules out Bondi accretion, while magnetic accretion known from studies of active galactic nuclei could explain the abundance of dark matter. Various accretion scenarios are discussed.
T. Kenneth Fowler, Richard Anantua
2023-03-16T14:21:45Z
http://arxiv.org/abs/2303.09341v3
# Accreting Primordial Black Holes as Dark Matter Constituents ###### Abstract We show how magnetic accretion of positronium (electron-positron) plasma by primordial black holes might significantly contribute to the mass of dark matter in the present Universe. Assuming that background gamma radiation is primordial black hole Hawking radiation rules out Bondi accretion, while magnetic accretion known from studies of active galactic nuclei could explain the abundance of dark matter. Various accretion scenarios are discussed. Dark Matter, Positronium, MRI ## 1 Introduction As reviewed in [1], many authors have pursued Hawking's early suggestion that primordial black holes might have contributed significantly to dark matter. Here we focus on the period 0.01 s to 14 s into the Big Bang, when relativistic positronium plasma was a dominant constituent of the mass, with \(10^{9}\) electrons and positrons per proton and neutron [2]. Then accreting only a tiny fraction of primordial positronium preserved as black holes could account for the dark matter mass \(M_{DM}\). This paper makes three main points: (a) assuming background gamma radiation is Hawking radiation from dark matter fixes \(M\approx 10^{15-18}\) g as the range of black hole masses; (b) this mass range is accessible via accretion by magnetic turbulence, similar to that identified in general relativistic MHD simulations of active galactic nuclei (AGNs) [3; 4; 5]; and (c) transport by magnetic turbulence can explain the dark matter abundance. ## 2 Hawking Radiation and Accretion Scenarios Primordial black holes are theorized to be surrounded by a thermal bath of radiated particles whose temperature is inversely proportional to the black hole mass [6]. Such particles range from electromagnetic rays across the spectrum to gravitons [7]. Among the radiations not fully understood is the gamma ray background, extending from keV energies to 10 MeV and beyond, cited as possibly being of primordial black hole origin [1]. In notes below, we obtain a mass distribution f(M) fitting Hawking radiation to background gammas. We set \(\int_{M_{1}}^{M_{2}}dM\,Mf(M)=M_{DM}=6M_{0}\) (with \(M_{0}\) the mass of ordinary matter; current best estimate). We obtain \(M_{2}\approx 10^{18}\) g, while \(M_{1}\approx 10^{15}\) g is the smallest mass not yet evaporated [1]. This mass range \(10^{15-18}\) g places strong constraints on accretion scenarios. Standard Bondi accretion [8][Sect. 2.5] yields masses greater than the Sun's [1]. Only accretion sweeping up mass over a large radius can give smaller masses \(M\approx M_{BONDI}\,(\varrho_{amb}/\varrho)\) for ambient density \(\varrho_{amb}\) and density \(\varrho\) piling up at the black hole. Examples are Shakura-Sunyaev viscous flow [8][Chap. 5] and, more importantly, the magneto-rotational instability (MRI) [9], which, it turns out, may explain the dark matter abundance. ## 3 Dark Matter Abundance Consider a poloidal magnetic seed field \((B_{\phi}=0)\). MRI accretion geometry is determined by the evolving magnetic field, giving \(V_{z}/V_{r}=B_{z}/B_{\phi}\) [10][Eqs. 12-14], which in turn gives spherical flow \(V_{z}/V_{r}\geq 1\) while \(B_{\phi}\) is growing, true in the time available in the expanding primordial plasma. A toroidal seed field behaves similarly [4; 9]. Using this argument, we consider accretion by an isolated black hole of mass M with accretion velocity \(V_{r}=V_{MRI}\) everywhere on a sphere of expanding radius \(r=R_{0}(t)\). Let \(f^{*}\) be the fraction of mass accreted inside this sphere. 
We obtain: \[dM/dt\approx M/t\approx 4\pi\rho_{amb}R_{0}^{2}|V_{r}| \tag{1a}\] \[f^{*}\approx[4\pi\rho_{amb}R_{0}^{2}|V_{r}|t/((4\pi/3)\rho_{amb}R_{0}^{3})]\approx(3|V_{r}|t/R_{0})\] (1b) \[|V_{r}|\approx(f^{*}/3)(R_{0}/t)\approx(\xi/r)^{2}(MG/R_{0})^{1/2}\] (1c) \[R_{0}(t)\approx[0.021/(\xi/r)^{4/3}]M(t)^{1/3}t^{2/3}\] (1d) \[M_{DM}/M_{0}\approx[10^{9}/((m_{p}+m_{n})/m_{e})](3|V_{r}|t/R_{0}) \tag{1e}\] Equation (1b) uses Equation (1a), where here and hereafter t serves both as dynamical elapsed time and as primordial time in ambient density \(\rho_{amb}\approx 4\times 10^{5}/t^{2}\) and temperature \(T=T_{keV}=1000/\sqrt{t}\) [2][pp. 4,5]. Equation (1c) equates \(V_{r}\) from Equation (1b) to MRI on the far right-hand side (in terms of the magnetic field line fluctuation \((\xi/r)\)). Combining Equations (1a, 1c) gives \(R_{0}\) in Equation (1d). Combining Equations (1c, 1d, 1e) gives the ratio of dark matter \(M_{DM}\) to ordinary matter \(M_{0}\), using the \(10^{9}\) electrons per proton and neutron cited above. Typically \((\xi/r)\approx 0.1\) [11][Sect. 6.1], yielding the current best estimate \(M_{DM}/M_{0}\approx 6\). The assumption that M is isolated is marginally satisfied. The average spacing between black holes \(\Delta_{BH}\approx(1/n_{BH})^{1/3}=(<M>/f^{*}\rho_{amb})^{1/3}\approx R_{0}\) for \(<M>\approx 10^{16}\) g at t = 1 s. Acceleration between black holes with this spacing is small: \((<M>G/\Delta_{BH}^{2})t\approx 10^{-7}(\Delta_{BH}/t)\). Primordial conditions were right for MRI. The primordial magnetic field \(\approx\) 1 gauss is sufficient to initiate MRI [12]; and the Peebles mechanism could create the required rotation [13]. Differences in pressure and scale compared to AGNs, and the active role of pair creation, turn out not to alter this process. The simplest theoretical derivation of MRI assumes constant pressure [9][Eqs. (106)-(110)]; and, while positronium particles disappear and regenerate in flight, overall momentum exchange between radiation and particles preserves gravitational flow. Viscous flow and MRI are additive. We interpret \(R_{0}\) as the radius where MRI first dominates, giving \(V_{MRI}=0.01(MG/R_{0})^{1/2}\approx(\nu/R_{0})\) [9][Sect. IV.D], with collisional viscosity \(\nu=(C_{s}^{2}\gamma_{c}/(\gamma_{c}^{2}+\omega_{c}^{2}))\leq(C_{s}^{2}/\gamma_{c})\) for relativistic collision frequency \(\gamma_{c}\) and cyclotron frequency \(\omega_{c}\). This requires \(M>10^{14}\) g, consistent with the mass range fitting background gamma radiation; here \(C_{s}^{2}=(T/m\gamma_{l})\) and \(\gamma_{c}=(\rho/m\gamma_{l}^{3}4\times 10^{8}T_{keV}^{3/2})\). 
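As a quick numerical check of Equations (1a)-(1e) — our own arithmetic, in cgs units, with a representative mass \(M=10^{16}\) g at \(t=1\) s (values not singled out by the derivation itself) — a few lines of Python reproduce the quoted abundance ratio to within order unity:

```python
import numpy as np

# Numerical check of Eqs. (1b)-(1e) in cgs units (our own illustration;
# M and t below are representative values, not fixed by the paper).
G = 6.674e-8          # gravitational constant [cm^3 g^-1 s^-2]
xi_over_r = 0.1       # magnetic fluctuation level, Sect. 6.1 of [11]
M, t = 1e16, 1.0      # black hole mass [g], primordial time [s]

R0 = 0.021 / xi_over_r**(4/3) * M**(1/3) * t**(2/3)   # Eq. (1d)
v_r = xi_over_r**2 * np.sqrt(M * G / R0)              # Eq. (1c)
f_star = 3 * v_r * t / R0                             # Eq. (1b)
mass_per_pair = 1836.2 + 1838.7                       # (m_p + m_n)/m_e
ratio = 1e9 / mass_per_pair * f_star                  # Eq. (1e)
print(f"R0 = {R0:.2e} cm, f* = {f_star:.2e}, M_DM/M_0 = {ratio:.1f}")
# -> M_DM/M_0 of order 6-7, consistent with the quoted estimate of ~6
```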
## 4 Gravitational Collapse We note that steady flow \(\nabla\cdot(n\mathbf{v})=0\), assumed in Equations (1a, 1b), is only approximate. The condition to drop \(\partial n/\partial t\) in \((\partial n/\partial t+\nabla\cdot(n\mathbf{v}))\) is \((-\upsilon_{r}t>r)\). That this already fails near \(R_{0}\) for pure MRI follows from \((-\upsilon_{MRI}t)=0.01\upsilon_{K}t\approx 10^{-5}R_{0}<<R_{0}\) by Equation (1c). The resolution is gravitational collapse at \(r<R_{0}\) [14]. Collapse can be separated into two stages - steady flow at constant average n and T, followed by spherical collapse onto shells of mass \(\delta M\) self-confined by their own gravity (\(k_{r}\delta MG\)) but still attracted to the black hole. Both stages can be treated as a gravitational quasi-spherical wave (spherical radius r) coupled to cylindrical MRI, as follows: \[\partial n/\partial t+\nabla\cdot(n\textbf{v})=-n\gamma_{A} \tag{2a}\] \[m^{*}\partial/\partial t(n\textbf{v})=-nm^{*}\nabla(\Phi_{G}+v^{2})-\nabla p;\quad m^{*}=m\gamma_{l}\] (2b) \[m^{*}\partial/\partial t(\nabla\cdot n\textbf{v})=-m^{*}\partial/\partial t(\partial n/\partial t+n\gamma_{A})=\nabla\cdot[-nm^{*}\nabla(\Phi_{G}+v^{2})-\nabla p]\] (2c) \[-\partial^{2}n/\partial^{2}t\approx n\partial^{2}/\partial^{2}r[(MG/r)+v^{2}]-\partial^{2}(p/m^{*})/\partial^{2}r-\partial n\gamma_{A}/\partial t\] (2d) \[(\omega+\frac{1}{2}i\gamma_{A})^{2}\approx(v_{A}^{2}k_{z}^{2}-3\Omega^{2})+(k_{r}^{2}c_{s}^{2}-\frac{1}{2}k_{r}^{3}c^{2}R_{s})+(i\gamma_{A}/2)^{2}\] (2e) \[|v_{r}|\approx\Sigma_{k}a_{k}(\omega/k)(k\xi)^{2}\approx|v_{r-MRI}|+(k_{r}^{2}\xi_{r}^{2})c_{s}\] (2f) \[\textbf{E}+c^{-1}\textbf{v}\times\textbf{B}=-<c^{-1}\textbf{v}_{1}\times\textbf{B}_{1}>+\eta\textbf{j}\] (2g) \[\partial\textbf{B}/\partial t=-c\nabla\times\textbf{E}=\nabla\times[\textbf{v}\times\textbf{B}+<\textbf{v}_{1}\times\textbf{B}_{1}>-c\eta\textbf{j}] \tag{2h}\] Here **x**, t are dynamical variables; \(\Phi_{G}\) is the spherical gravitational potential; and \(\gamma_{A}\) represents net annihilation (\(\gamma_{A}=0\) for balanced pair creation and annihilation in the primordial medium at t<14 s [2]). As already noted, exchange of momentum between radiation and particles conserves average gravitational flow. Equation (2c) applies the divergence operator to Equation (2b), then uses Equation (2a) to obtain Equation (2d). Equation (2e) approximates Equation (2d) by a representative frequency \(\omega\), cylindrical wave number \(k_{z}\) and spherical \(k_{r}\) in a wave packet \(\propto\Sigma_{k}\exp\ i(k_{r}r+k_{z}z-\omega t)\), yielding the flow in Equation (2f), derived from Ohm's Law in Equation (2g) with velocity and magnetic perturbations \(\textbf{v}_{1}\) and \(\textbf{B}_{1}\). Equation (2e) includes both pure MRI driven by rotation \(\Omega\) [9] and MRI Alfven waves coupled to Jeans gravity-driven sound waves [14]. This spectrum of modes is included in the wave packet of Equation (2f). We omit magnetic interchange, observed in [4] but weakly additive to pressure here. Waves give \(v_{r}=(\omega/k_{r})\), derived from Ohm's Law. Ohm's Law also gives Equation (2h), showing why MRI can initiate accretion at \(R_{0}>>R_{s}=(MG/c^{2})\). This equation describes the outward propagation of the MRI dynamo magnetic field, whereby strong MRI near the jet radius propagates to drive current at \(R=R_{0}\) that serves as the O-point around which poloidal flux circulates - a phenomenon analogous to kink-driven current drive in laboratory spheromaks [10][App. B]. That Equation (2h) yields the scaling \(R_{0}\propto M^{1/3}t^{2/3}\) is shown in [11][Sect. 6.1]. We assume that annihilation is the controlling process, serving to adjust perturbations \(\xi_{r}\) to maintain \(v_{r}r^{2}\) constant in Equation (2f). Constant \(v_{r}r^{2}\) preserves \(\nabla\cdot(n\textbf{v})=0\), giving constant n and T at their primordial values; hence \(\gamma_{A}=0\) [2]. This condition finally fails when the accretion flow attains its maximum velocity \(v_{r}\to c_{s}\), occurring at a spherical radius \(r=r_{1}\) given by \(R_{0}^{2}v_{MRI}\approx r_{1}^{2}c_{s}\). At fixed flow \(v_{r}=c_{s}\) inside \(r_{1}\), spherical convergence forces n to grow in competition with radiation and annihilation. 
It is this that gives a wave structure \(k_{r}r>>1\) representing collapse into thin shells of mass \(\delta M\), too thin to annihilate, with \(\gamma_{A}\approx A^{*}n\sigma_{t}c(3/16)(\gamma_{l}^{-2}\ln\tau\gamma_{l})\) with \(A^{*}\leq 1\) [15; 16][Thomson \(\sigma_{t}\)]. For large \(\gamma_{A}\) we obtain: \[[\omega+i(\gamma_{A}/2)]^{2}\approx-k_{r}^{3}c^{2}R_{s}-(\gamma_{A}/2)^{2} \tag{3a}\] \[\omega=-i(\gamma_{A}/2)+i(\gamma_{A}/2)[1+(k_{r}^{3}c^{2}R_{s}/(\gamma_{A}/2)^{2})]^{1/2}\approx i(k_{r}^{3}c^{2}R_{s}/\gamma_{A})\] (3b) \[(\omega t/k_{r})>r\to k_{r}R_{s}>[\gamma_{A}rR_{s}/c^{2}t]^{1/2} \tag{3c}\] As in Equation (2e), \(\omega\) and \(k_{r}\) are representative values in a Fourier analysis of the shell structure described above. Equation (3a) has two solutions: \(\omega=-i\gamma_{A}\) if annihilation dominates, or Equation (3b), giving Equation (3c) as the condition on \(k_{r}\) for gravity to compete with annihilation, \(k_{r}R_{s}>10^{-4}\). Details do not matter as long as an accretion solution persists to the black hole. That some \(k_{r}\) can ensure this follows from Equation (3c). The required \(k_{r}\) is largest where \(\gamma_{A}r\) is maximum. The limits on \(k_{r}\) are mode coupling (equipartition) and the magnetic field. The field is unimportant at short wavelengths where \((k_{r})^{-1}<<\) the Larmor radius. Only coupling to small \(k_{r}\) modes (flattening gradients) would defeat Equation (3c), which is not likely here in the time available, due to competition from faster growth at larger \(k_{r}\). Examples of fast turbulence overcoming dynamo equipartition appear in simulations [17]. Competing with growth is quasi-linear saturation that acts to reduce the growth rate [18], the net result of growth and mode coupling giving \(k_{r}\) hovering around the instability threshold at \(k_{r}\delta MG\to c_{s}^{2}\). Absent mode coupling to larger wavelengths, wave pressure does not impede flow. Pressure is balanced locally inside wave peaks. In fact, instability driving \(V_{r}=\omega/k_{r}\) requires that gravity exceed pressure, any excess gravitational force being balanced by \(\partial^{2}n/\partial^{2}t\), as we assumed in deriving instability Equation (3a). How \(v=(\omega/k)\) is obtained from Ohm's Law, Equation (2g), follows from \(\mathbf{v}=-c[\langle c^{-1}\mathbf{v}_{1}\times\mathbf{B}_{1}\rangle\times\mathbf{B}/B^{2}]\approx(kv_{1}^{2}/\omega)\approx\omega/k\). That this v transports mass in the time available follows from \(vt>r\) in Equation (3c). That \(v\leq c\) follows from \(r_{1}/t<c\), and energy is conserved. So no physics is violated. That v is independent of T means that accretion proceeds with or without severe cooling, the maximum annihilation rate being \(\approx n\sigma_{t}c\), also independent of T [15][Eq. (8.47)]. ## 5 Summary We have found a possible path for MRI-driven accretion creating black holes that could explain the dark matter abundance. The regime of interest is defined by matching background gamma radiation to Hawking radiation, yielding \(M=10^{15-18}\) g as the mass range for dark matter black holes. A rotating plasma suppressing Bondi accretion and stimulating MRI can produce masses in this range. As already noted, accretion by MRI is an extension of gravitational collapse creating a seed mass. The main role of MRI is to propagate the accretion radius far from the black hole. 
That MRI does this around AGNs is inferred from evidence for strong magnetic fields near the black hole [19] and other evidence for magnetic fields far from the jet axis [11]. Theoretically, both for AGNs and the primordial plasma, propagation of the magnetic field to its O-point is accounted for by turbulence in Ohm's law, independent of pressure. It is slow MRI flow at the O-point that produces pressure gradients relieved by collapse. Collapse occurs in two stages. The first stage is a perturbation on quasi-steady incompressible flow that does not much disturb the primordial medium, from \(r=R_{0}\) down to the radius \(r_{1}\approx 10^{-5}R_{0}\) mentioned above. The other critical dimension is the MRI jet radius \(a\approx(C^{2}\eta t/4\pi)^{1/2}\approx 0.1\,t^{7/8}<10^{-5}R_{0}\), determined by collisional resistivity \(\eta\). The second stage of collapse joins steady flow at \(r=a\) to Jeans gravitational instability, self-adjusted to outrun annihilation if the temperature cools along accretion paths. In both stages accretion flow is sustained by unstable accretion waves. As noted above, the fact that strong instability is required in the final stage is a natural extension of Jeans gravitational collapse creating the seed mass [14]. ## 6 Concluding points Gravity couples MRI Alfven waves to Jeans sound waves \(\propto\exp\,\mathrm{i}(k_{r}r)\). The full dispersion relation then has the form \(\omega^{4}-C\omega^{2}+D=0\) [9][Eq. (111)], with additional terms \(\propto i\omega^{3}\), \(i\omega\) to account for resistivity and viscosity [9][Eqs. (135-138)]. Adding gravity adds \(H^{*}\) to both C and D, where \(H^{*}=(k_{r}^{2}c_{s}^{2}-\frac{1}{2}k_{r}^{3}c^{2}R_{s})\), which appears in the approximate Equation (2e). Primordial MRI accretion is due to dynamo space charge, \(\sigma=\nabla\cdot(\mathbf{E}/4\pi)\), giving as the jet momentum equation \(d\mathbf{p}/dt=c^{-1}\mathbf{J}^{*}\times\mathbf{B}=c^{-1}\mathbf{j}\times\mathbf{B}+\sigma\mathbf{E}\) [20][Eq. (38b)]. Here \(\mathbf{J}^{*}\) is a short-circuit current circulating near the black hole [20][Fig. 4]. It is this short circuit that drives spherical accretion in Equation (1a). Numbers in Equation (1a) are: density \(\rho=nm\gamma_{l}\propto T^{4}\) [21][p. 158] with ambient \(T_{amb}=10^{10}\ ^{\circ}K/\sqrt{t}\) fitting the range \(T=10^{9-11}\ ^{\circ}K\) [2][pp. 4,5] and Lorentz factor \(\gamma_{l}\approx[1+(T/mc^{2})]\). Then \(\rho=(X\times 10^{5}/t^{2})\) g/cc, with X = 4 at t = 0.01 s [2][p.4] down to X = 1.22 after annihilation [2][p.159], giving \(f^{*}\) independent of M for all M's within our mass range above. Magnetic jets are reviewed in [21]. For the highly collisional primordial plasma, a crucial point is that MRI does not depend on cyclotron orbit closure. What does matter is resistive and viscous diffusion in competition with instability growth rates [9][Sect. IV D], which we take into account in criteria such as requiring viscous \(k_{r}^{2}\nu<\Omega\), and the dynamo failure when \(\nu/R>v_{MRI}\). Another issue is turbulence fluctuation levels. As for AGNs, we assume that fluctuation levels in Equation (2f) always adjust to sustain accretion, since otherwise there could be no MRI power source. We found that a fluctuation level \(\xi/R\approx 0.1\) explained the dark matter abundance. A demonstration of these turbulence levels awaits simulations of the primordial plasma on general relativistic codes like that in [4], or special relativistic codes treating the black hole as a conducting sphere [22]. 
The run times for GRMHD codes, sometimes too short to produce a fully-developed jet tower [20][App. A], may be better suited to the primordial regime. The primordial magnetic field of order 1 gauss could have been created by Biermann battery action due to pressure fluctuations in the ordinary matter content of the primordial plasma (though positronium contributions tend to cancel) [23]. The only source of rotation is the Peebles torque coupling neighboring black hole domains, giving \(\int dx\ \partial/\partial t\,(\rho R(MG/R)^{1/2})=\alpha(M^{2}G/R)\) for elliptic distortion \(\alpha\) [13]. Other terms (MRI jet, viscosity) only recycle angular momentum [9][Eq. (29)]. The Peebles source is only essential near peak MRI activity around the jet radius R = a, sufficient for \(M>(\int dx\ \partial/\partial t\ [\varrho R(G/R)^{1/2}/\alpha G])^{2/3}\approx 10^{6}(a^{7/3}/\alpha^{2/3})\). As already noted, MRI current drive propagates from R = a to R = \(R_{0}\). Fitting our mass range \(M=10^{15-18}\) g to observed background gamma rays comes from: \[Eg(E)=\Sigma_{i}N_{i}M_{i}(R_{si}/<R>)^{2}(\sigma_{SB}T_{Hi}^{3}/kM_{i})P_{Planck} \tag{4a}\] \[=\int_{M_{1}}^{A_{2}/E}dM(f(M)/M_{DM})(A_{1}/M)[(EM/A_{2})-(EM/A_{2})^{2}]\] (4b) \[\int_{M_{1}}^{M_{2}}dMMf(M)\approx(M_{2}/0.5\times 10^{18})^{3}M_{DM} \tag{4c}\] where \(M_{1}=10^{15}\) g and \(g(E)=0.1\,(10\,\mathrm{keV}/E)^{2}\) fits the data in [24]. Equation (4a) describes Hawking radiation (temperature \(T_{H}\propto 1/M\), Planck distribution) from \(N_{i}\) black holes each of mass \(M_{i}\). The result is about the same for universal and local signals, giving \(\Sigma_{i}N_{i}M_{i}(R_{si}/<R>)^{2}=M_{DM}/<R>^{2}=0.03-0.05\) for average radius \(<R>=3\times 10^{23}\) cm and \(M_{DM}=3\times 10^{45}\) g for the Milky Way, and \(<R>=4.4\times 10^{28}\) cm and \(M_{DM}=10^{56}\) g for the visible Universe. Approximating the sum by a mass distribution f(M) gives Equation (4b) as an integral equation for f(M) with approximate solution \(f(M)=(20/4\pi A_{1}A_{2})M_{DM}M\) with \(A_{1}=5.4\times 10^{32}\) and \(A_{2}=7.5\times 10^{19}\). The magnitude of f(M) is determined, giving Equation (4c), showing that the total mass of black holes approximately equals the total mass of dark matter, for \(M_{2}\approx 10^{18}\). **Acknowledgements** TKF thanks Christopher McKee and Hui Li for many enlightening discussions. TKF and RA also thank Nathan Ngata for timely aid in editing the manuscript.
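For completeness, the normalization quoted after Equation (4a) can be checked directly; the following few lines are our own arithmetic, assuming \(<R>\) in cm and \(M_{DM}\) in g as above:

```python
# Cross-check of Sigma_i N_i M_i (R_si/<R>)^2 = M_DM/<R>^2 quoted after
# Eq. (4a); our own arithmetic, with <R> in cm and M_DM in g as assumed.
for label, M_DM, R in [("Milky Way", 3e45, 3e23),
                       ("visible Universe", 1e56, 4.4e28)]:
    print(f"{label}: M_DM/<R>^2 = {M_DM / R**2:.3f} g/cm^2")
# -> roughly 0.033 and 0.052, matching the quoted range 0.03-0.05
```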
2310.09279
Control of Vehicle Platoons with Collision Avoidance Using Noncooperative Differential Games
This paper considers a differential game approach to the predecessor-following vehicle platoon control problem without and with collision avoidance. In this approach, each vehicle tries to minimize the performance index (PI) of its control objective, which is reaching consensual velocity with the predecessor vehicle while maintaining a small inter-vehicle distance from it. Two differential games were formulated. The differential game problem for platoon control without collision avoidance is solved for the open-loop Nash equilibrium and its associated state trajectories. The second differential game problem for platoon control with collision avoidance has a non-quadratic PI, which poses a greater challenge to obtaining its open-loop Nash equilibrium. Since the exact solution is unavailable, we propose an estimated Nash strategy approach that is greatly simplified for implementation. An illustrative example of a vehicle platoon control problem was solved under both the without and with collision avoidance scenarios. The results showed the effectiveness of the models and their solutions for both scenarios.
Hossein B. Jond
2023-10-13T17:42:59Z
http://arxiv.org/abs/2310.09279v1
# Control of Vehicle Platoons with Collision Avoidance Using Noncooperative Differential Games* ###### Abstract This paper considers a differential game approach to the predecessor-following vehicle platoon control problem without and with collision avoidance. In this approach, each vehicle tries to minimize the performance index (PI) of its control objective, which is reaching consensual velocity with the predecessor vehicle while maintaining a small inter-vehicle distance from it. Two differential games were formulated. The differential game problem for platoon control without collision avoidance is solved for the open-loop Nash equilibrium and its associated state trajectories. The second differential game problem for platoon control with collision avoidance has a non-quadratic PI, which poses a greater challenge to obtaining its open-loop Nash equilibrium. Since the exact solution is unavailable, we propose an estimated Nash strategy approach that is greatly simplified for implementation. An illustrative example of a vehicle platoon control problem was solved under both the without and with collision avoidance scenarios. The results showed the effectiveness of the models and their solutions for both scenarios. collision avoidance, differential game, Nash equilibrium, vehicle platoon ## I Introduction Convoy and platoon group driving are the salient collective behaviors of connected and automated vehicles on the road [1, 2]. Vehicles in a platoon or convoy drive at a consensual speed in the direction of the flow of traffic while maintaining a small inter-vehicle distance from their adjacent vehicles. In a platoon, we are concerned only with the longitudinally coordinated control of vehicles moving in the same lane of the road or highway [3, 4]. In a convoy, both longitudinal and lateral coordination of vehicles over different lanes is necessary [5, 6]. Platooning is the most studied group behavior of connected and automated vehicles [7, 8]. Such coordination is achieved by exchanging local information among the vehicles [9]. Vehicle platoons offer remarkable benefits, as listed in [1, 8, 10]. Platooning boosts road capacity and decreases fuel consumption and emissions of pollutants due to the decrease in gaps between vehicles and the elimination of dispensable changes in speed and aerodynamic drag on the following vehicles, respectively. Besides, driving safety and passenger satisfaction are enhanced since detection and actuation times are shorter, and the small inter-vehicle gaps between vehicles prevent cutting by other vehicles. The most common platooning methods are the Leader-Follower approach [11], the Behavior-Based Approach [12, 13], and the Virtual Structure approach [14, 15]. Several other methods were also researched [16, 17, 18]. In the classical optimal control framework, vehicles attempt to acquire a platoon formation by optimizing a team objective [19]. This framework, however, is incompatible with automated and autonomous vehicles, which are supposed to make independent and selfish operational decisions without the need for human intervention. Game theory provides tools and concepts for determining the best strategy or action choices for each vehicle with self-interests and acting selfishly. A vehicle's interest in a platoon could be to penalize its relative displacement, velocity, and acceleration errors, taking its fuel amount into account [20]. The strategic interactions among vehicles acting independently and selfishly naturally portray a noncooperative game. 
Nash equilibrium allows for self-enforcing strategic interactions in a noncooperative game [21]. Platooning emerges as a result of a Nash equilibrium [4]. Game-theoretic platoon control has recently attracted increasing interest in the control community. Some recent reports include platooning at the hubs in a transportation network as a noncooperative coordination game [22], attacker-detector game for improving the security of platoons against cyber attacks [23], platoon formation as a coalitional game [24], and complete and incomplete information behavioral decision-making in a platoon using noncooperative game theory [25]. Differential games have been extensively used to address multi-robot systems formation control [26, 27, 28, 20]. A platoon is a line formation. However, only a few research studies have utilized differential games for platooning. In [4], differential games for platooning under the predecessor-following and two-predecessor-following topologies for platoon control problems with vehicles governed by single integrator dynamics were solved analytically, and the closed-form expressions for the open-loop Nash equilibrium were derived. As the main contribution with respect to [4], this paper considers \(i\)) a linearized dynamics model that approximates the longitudinal dynamics of car-like vehicles and \(ii\)), collision avoidance. It is shown that a closed-form solution for the platoon control problem without collision avoidance in the context of a noncooperative differential game exists. Realizing that a closed-form solution for the game problem with collision avoidance is not available, we propose an estimated Nash strategy that is greatly simplified for implementation under an open-loop information structure. Both solutions' effectiveness is shown by the simulation studies. The paper is organized as follows. Section II presents a differential game model of the platoon control problem without collision avoidance. In Section III, we derive the open-loop Nash equilibrium and its associated opinion trajectories. The platoon control problem with collision avoidance is studied in Section IV. In Section V, the results from previous sections are verified by simulations. Conclusions and future works are discussed in Section VI. ## II Differential Game Formulation We consider a homogeneous platoon of vehicles with the predecessor-following information topology as depicted in Fig. 1. Each vehicle follows its predecessor by maintaining a predefined fixed inter-vehicle distance using unidirectional information acquired directly from onboard sensors or via vehicle-to-vehicle connections in connected environments. The vehicles are equipped with cameras that detect their immediate preceding vehicle and laser scanners for measuring the distances. Suppose that there are \(N+1\) vehicles in the platoon, indexed by \(0\) through \(N\) where \(0\) corresponds to the lead vehicle and the rest to the following vehicles. The lead vehicle, or simply the leader, is at the front of the platoon and has a constant velocity. The following vehicles, or followers, adjust their control input to maintain their predefined distances from their predecessors. Vehicle dynamics is a nonlinear function of tire friction, rolling resistance, aerodynamic drag, gravitational force, the engine, the brake system, etc., fundamentally challenging theoretical analysis. Simplified nonlinear vehicle dynamics models have commonly been used to model vehicle longitudinal dynamics in a platoon. 
These nonlinear dynamics models govern the engine dynamics, brake system, and aerodynamic drag of each vehicle. By using the feedback linearization technique, as shown in [29, 30], these nonlinear dynamics become linearized, which eases further theoretical analysis. Let \(p_{i}(t)\), \(v_{i}(t)\), \(a_{i}(t)\), and \(u_{i}(t)\) denote the position, velocity, acceleration, and control input of the \(i\)th vehicle in the platoon, respectively. The engine time constant \(\tau\) (also called the inertial time-lag) encompassed by the feedback linearized dynamics is, in reality, different even for identical vehicles. However, in this work, we consider a homogeneous platoon of follower vehicles with identical \(\tau\). This assumption ensures that the platoon control problem that will be defined in this paper can be solved analytically with a closed-form solution. Each follower vehicle \(i\in\{1,\ldots,N\}\)'s feedback linearized dynamics is given by \[\left\{\begin{array}{l}\dot{p}_{i}(t)=v_{i}(t)\\ \dot{v}_{i}(t)=a_{i}(t)\\ \tau\dot{a}_{i}(t)+a_{i}(t)=u_{i}(t)\end{array}\right.\] or, equivalently, in the following state-space form \[\dot{x}_{i}(t)=Ax_{i}(t)+Bu_{i}(t) \tag{1}\] where \(x_{i}(t)=\begin{bmatrix}p_{i}(t)\\ v_{i}(t)\\ a_{i}(t)\end{bmatrix}\), \(A=\begin{bmatrix}0&1&0\\ 0&0&1\\ 0&0&-\frac{1}{\tau}\end{bmatrix}\), and \(B=\begin{bmatrix}0\\ 0\\ \frac{1}{\tau}\end{bmatrix}\). Note that the leader is supposed to move with constant velocity, i.e., \(x_{0}=[p_{0}(t),v_{0},0]^{\top}\), under the steady-state condition, i.e., \(u_{0}(t)=0\). **Theorem 1**.: _Consider a platoon of vehicles with the feedback linearized dynamics (1) and PIs (3). The platoon control problem as a noncooperative differential game admits a unique open-loop Nash equilibrium given by_ \[u_{i}(t)=-\sum_{j=1}^{i}\xi_{j}(t) \tag{4}\] _where_ \[\xi_{i}(t) =-\omega_{i}B^{\top}\mathrm{e}^{(T-t)A^{\top}}\left(I+\omega_{i}\Psi(T)\right)^{-1}\mathrm{e}^{TA}y_{i}(0), \tag{5}\] \[\Psi(t) =\int_{0}^{t}\mathrm{e}^{(t-s)A}BB^{\top}\mathrm{e}^{(t-s)A^{\top}}\mathrm{ds}. \tag{6}\] _The state trajectories associated with the equilibrium actions are given by_ \[x_{i}(t)=x_{0}(t)-\sum_{j=1}^{i}(y_{j}(t)+\hat{d}_{j}) \tag{7}\] _where_ \[y_{i}(t)=\left(\mathrm{e}^{tA}-\omega_{i}\Psi(t)\left(I+\omega_{i}\Psi(T)\right)^{-1}\mathrm{e}^{TA}\right)y_{i}(0). \tag{8}\] Proof.: Let \(y_{i}(t)=x_{i-1}(t)-x_{i}(t)-\hat{d}_{i}\) and \(\xi_{i}(t)=u_{i-1}(t)-u_{i}(t)\) for all \(i\in\{1,\dots,N\}\), where (7) and (4) are easily verified, respectively. Vehicle dynamics (1) is then expressed in terms of the new state vector \(y_{i}(t)\) and new control input \(\xi_{i}(t)\) as \[\dot{y}_{i}(t)=Ay_{i}(t)+B\xi_{i}(t). \tag{9}\] Therefore, the platoon control problem as the noncooperative differential game (1) and (3) reduces to the following optimization \[\min_{\xi_{i}}\mathcal{J}_{i}=\omega_{i}y_{i}^{\top}(T)y_{i}(T)+\int_{0}^{T}\xi_{i}^{2}(t)\ \mathrm{dt}\] subject to (9). Define the Hamiltonian for the above minimization \[H_{i}=\xi_{i}^{2}(t)+\lambda_{i}^{\top}(t)(Ay_{i}(t)+B\xi_{i}(t)) \tag{10}\] for all \(i\in\{1,\dots,N\}\), where \(\lambda_{i}(t)\) is the costate. According to Pontryagin's minimum principle, the necessary conditions for optimality are \(\frac{\partial H_{i}}{\partial\xi_{i}}=0\) and \(\dot{\lambda}_{i}(t)=-\frac{\partial H_{i}}{\partial y_{i}}\). 
Applying the necessary conditions to (10) yields \[\xi_{i}(t) =-B^{\top}\lambda_{i}(t), \tag{11}\] \[\dot{\lambda}_{i}(t) =-A^{\top}\lambda_{i}(t),\quad\lambda_{i}(T)=\omega_{i}y_{i}(T) \tag{12}\] for \(i\in\{1,\dots,N\}\). The solution of (12) is given by \[\lambda_{i}(t)=\mathrm{e}^{(T-t)A^{\top}}\lambda_{i}(T)=\omega_{i}\mathrm{e}^{(T-t)A^{\top}}y_{i}(T). \tag{13}\] Substituting (11) into (9) and using (13), we have \[\dot{y}_{i}(t) =Ay_{i}(t)-BB^{\top}\lambda_{i}(t)\] \[=Ay_{i}(t)-\omega_{i}BB^{\top}\mathrm{e}^{(T-t)A^{\top}}y_{i}(T)\] where its solution is given by \[y_{i}(t)=\mathrm{e}^{tA}y_{i}(0)-\omega_{i}\Psi(t)y_{i}(T) \tag{14}\] where \(\Psi(t)\) is defined in (6). Consider (14) at \(T\) as \[y_{i}(T)=\mathrm{e}^{TA}y_{i}(0)-\omega_{i}\Psi(T)y_{i}(T). \tag{15}\] Equation (15) can be rewritten as \[\left(I+\omega_{i}\Psi(T)\right)y_{i}(T)=\mathrm{e}^{TA}y_{i}(0)\] or \[y_{i}(T)=\left(I+\omega_{i}\Psi(T)\right)^{-1}\mathrm{e}^{TA}y_{i}(0). \tag{16}\] Note that \(y_{i}(T)\) exists for every initial condition \(y_{i}(0)\) iff \(\left(I+\omega_{i}\Psi(T)\right)^{-1}\) exists. In other words, the game has an open-loop Nash equilibrium for all initial states \(x_{0}(0),\cdots,x_{N}(0)\) iff (16) can be calculated for any arbitrary final state \(y_{i}(T)\) and, accordingly, \(x_{i}(T)\). If so, the equilibrium actions are unique and exist for all \(t\in[0,T]\). Otherwise, the game does not have a unique open-loop Nash equilibrium for all initial states \(x_{0}(0),\cdots,x_{N}(0)\). In the following, we show that the matrix \(I+\omega_{i}\Psi(T)\) is invertible. From (6), we have \[\mathrm{e}^{(t-s)A}BB^{\top}\mathrm{e}^{(t-s)A^{\top}}=\mathrm{e}^{(t-s)A}B(\mathrm{e}^{(t-s)A}B)^{\top}.\] The product of any matrix and its transpose is always symmetric; thus \(\omega_{i}\Psi(T)\) is symmetric. The matrix \(\mathrm{e}^{(t-s)A}\) is nonsingular and all its eigenvalues are positive. The matrix \(BB^{\top}\) is a nonnegative diagonal matrix, and thus the eigenvalues of the product \(\mathrm{e}^{(t-s)A}BB^{\top}\mathrm{e}^{(t-s)A^{\top}}\) still have nonnegative real parts. Therefore, all the eigenvalues of \(I+\omega_{i}\Psi(T)\) in (6) have positive real parts. Substituting (16) into (14) and rearranging it, we obtain (8). Similarly, substituting (13) into (11) and then (16) into it, we get (5). 
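The quantities appearing in Theorem 1 are straightforward to evaluate numerically. The sketch below is our own illustration (\(\tau\), \(\omega_{i}\), \(T\), and the initial error are example values): it computes \(\Psi(T)\) of (6) by quadrature, confirms that \(I+\omega_{i}\Psi(T)\) has strictly positive eigenvalues as shown above, and evaluates the control difference \(\xi_{i}(t)\) of (5):

```python
import numpy as np
from scipy.linalg import expm

# Our illustration of Theorem 1's ingredients; tau, omega, T and y0
# are example values, not prescribed by the theorem itself.
tau, omega, T = 0.5, 6.0, 10.0
A = np.array([[0, 1, 0], [0, 0, 1], [0, 0, -1 / tau]])
B = np.array([[0.0], [0.0], [1 / tau]])

def Psi(t, n=2000):
    """Gramian-type integral (6), approximated by the trapezoidal rule."""
    s = np.linspace(0.0, t, n)
    vals = np.array([expm((t - si) * A) @ B @ B.T @ expm((t - si) * A).T
                     for si in s])
    return np.trapz(vals, s, axis=0)

P = Psi(T)
M = np.eye(3) + omega * P
print("eigenvalues of I + omega*Psi(T):", np.linalg.eigvals(M).real)
# all strictly positive, so the inverse in (5) and (8) exists

# open-loop control difference xi_i(t) of Eq. (5) for an example y_i(0)
y0 = np.array([[5.0], [0.5], [0.2]])   # hypothetical initial spacing error
xi = lambda t: float(-omega * B.T @ expm((T - t) * A).T
                     @ np.linalg.solve(M, expm(T * A) @ y0))
print("xi_i(0) =", xi(0.0))
```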
## IV Collision Avoidance Estimated Nash Strategy The platoon control problem (1) with PIs (3) and its solution in _Theorem 1_ satisfy only the control objectives of maintaining a constant inter-vehicular spacing with the predecessor and maintaining the consensual velocities and accelerations of all followers with the leader. In addition, each following vehicle in the platoon has to ensure the crucial requirement of collision avoidance. Control designs that simultaneously guarantee the time-headway spacing and collision avoidance in platoons were the focus of a few reports [31, 32]. The differential game literature on collision avoidance comes from multi-robot systems [27, 33]. For the platoon control problem with collision avoidance, the PI for vehicle \(i\) is redefined as \[\hat{J}_{i}=J_{i}+\frac{1}{\mu_{i}\|x_{i-1}(T)-x_{i}(T)-\hat{r}_{i}\|^{2}+\epsilon} \tag{17}\] for all \(i\in\{1,\dots,N\}\), where \(\mu_{i}>0\) is a weighting parameter, \(\epsilon>0\) is a positive scalar to ensure a non-zero denominator, and \(\hat{r}_{i}=[r_{i},0,0]^{\top}\), where \(r_{i}\) is a safe distance from the predecessor for collision avoidance. If vehicle \(i\) gets closer to its predecessor than \(r_{i}\), a collision is unavoidable. The platoon control problem with collision avoidance in (1) and (17) is non-trivial and challenging to solve in closed form. We therefore construct an estimate of the exact solution that guarantees the collision-avoidance behavior of the followers. Define the following positive scalar function of \(y_{i}(t)\) \[f(y_{i}(t))=\frac{1}{\Big{(}\mu_{i}(y_{i}(t)+\hat{d}_{i}-\hat{r}_{i})^{\top}(y_{i}(t)+\hat{d}_{i}-\hat{r}_{i})+\epsilon\Big{)}^{2}}. \tag{18}\] Also, define \[z_{i}(t)= \Big{(}I+\big{(}\omega_{i}-\mu_{i}f(\mathrm{e}^{tA}y_{i}(0))\big{)}\Psi(t)\Big{)}^{-1}\] \[\Big{(}\mathrm{e}^{tA}y_{i}(0)-\mu_{i}f(\mathrm{e}^{tA}y_{i}(0))\Psi(t)(\hat{r}_{i}-\hat{d}_{i})\Big{)} \tag{19}\] where \(z_{i}(T)=\hat{y}_{i}(T)\). **Theorem 2**.: _Consider a platoon of vehicles with the feedback linearized dynamics given in (1) and PIs in (17). Suppose that every vehicle \(i\) utilizes the estimate \(\hat{y}_{i}(T)\) of its terminal state vector \(y_{i}(T)\) obtained from (19). For the platoon control problem with collision avoidance as a noncooperative differential game, the following estimates of the unique Nash equilibrium form the collision-avoidance control inputs_ \[\hat{u}_{i}(t)=-\sum_{j=1}^{i}\hat{\xi}_{j}(t) \tag{20}\] _where_ \[\hat{\xi}_{i}(t)=-B^{\top}\mathrm{e}^{(T-t)A^{\top}}\times\\ \Big{(}\big{(}\omega_{i}-\mu_{i}f(\mathrm{e}^{TA}y_{i}(0))\big{)}\hat{y}_{i}(T)+\mu_{i}f(\mathrm{e}^{TA}y_{i}(0))(\hat{r}_{i}-\hat{d}_{i})\Big{)}. \tag{21}\] _The state trajectories associated with the equilibrium actions are given by_ \[\hat{x}_{i}(t)=x_{0}(t)-\sum_{j=1}^{i}(\hat{y}_{j}(t)+\hat{d}_{j}) \tag{22}\] _where_ \[\hat{y}_{i}(t)=\mathrm{e}^{tA}y_{i}(0)-\Psi(t)\times\\ \Big{(}\big{(}\omega_{i}-\mu_{i}f(\mathrm{e}^{TA}y_{i}(0))\big{)}\hat{y}_{i}(T)+\mu_{i}f(\mathrm{e}^{TA}y_{i}(0))(\hat{r}_{i}-\hat{d}_{i})\Big{)}. \tag{23}\] Proof.: The platoon control problem in (1) and (17) in terms of the state vector \(y_{i}(t)\) and control input \(\xi_{i}(t)\) reduces to the minimization of the following optimization \[\min_{\xi_{i}}\hat{\mathcal{J}}_{i}=\mathcal{J}_{i}(\xi_{i}(t))+\\ \frac{1}{\mu_{i}(y_{i}(T)+\hat{d}_{i}-\hat{r}_{i})^{\top}(y_{i}(T)+\hat{d}_{i}-\hat{r}_{i})+\epsilon}\] subject to (9). Define the Hamiltonian (10) and, by using the necessary conditions for optimality, we obtain (11) and (12) with the following terminal condition \[\lambda_{i}(T)=\omega_{i}y_{i}(T)-\mu_{i}f(y_{i}(T))(y_{i}(T)+\hat{d}_{i}-\hat{r}_{i}) \tag{24}\] for \(i\in\{1,\ldots,N\}\). The solution of (12) using the terminal condition (24) is given by \[\lambda_{i}(t)=\mathrm{e}^{(T-t)A^{\top}}\times\\ \Big{(}\big{(}\omega_{i}-\mu_{i}f(y_{i}(T))\big{)}y_{i}(T)+\mu_{i}f(y_{i}(T))(\hat{r}_{i}-\hat{d}_{i})\Big{)}. \tag{25}\] Substituting (25), respectively, into (11) and then into (9), we have \[\xi_{i}(t)=-B^{\top}\mathrm{e}^{(T-t)A^{\top}}\times\\ \Big{(}\big{(}\omega_{i}-\mu_{i}f(y_{i}(T))\big{)}y_{i}(T)+\mu_{i}f(y_{i}(T))(\hat{r}_{i}-\hat{d}_{i})\Big{)}, \tag{26}\] \[\hat{y}_{i}(t)=Ay_{i}(t)-BB^{\top}\mathrm{e}^{(T-t)A^{\top}}\times\\ \Big{(}\big{(}\omega_{i}-\mu_{i}f(y_{i}(T))\big{)}y_{i}(T)+\mu_{i}f(y_{i}(T))(\hat{r}_{i}-\hat{d}_{i})\Big{)} \tag{27}\] for all \(i\in\{1,\ldots,N\}\). 
The solution of (27) is given by \[y_{i}(t)=\mathrm{e}^{tA}y_{i}(0)-\\ \Psi(t)\Big{(}\big{(}\omega_{i}-\mu_{i}f(y_{i})\big{)}y_{i}(T)+ \mu_{i}f(y_{i})(\hat{r}_{i}-\hat{d}_{i})\Big{)}\] where at \(T\) it can be rearranged as the following \[\Big{(}I+\big{(}\omega_{i}- \mu_{i}f(y_{i}(T))\big{)}\Psi(T)\Big{)}y_{i}(T)=\\ \mathrm{e}^{TA}y_{i}(0)-\mu_{i}f(y_{i}(T))(\hat{r}_{i}-\hat{d}_{i} )\Psi(T)\] or equivalently, \[y_{i}(T)=\Big{(}I+ \big{(}\omega_{i}-\mu_{i}f(y_{i}(T))\big{)}\Psi(T)\Big{)}^{-1}\] \[\Big{(}\mathrm{e}^{TA}y_{i}(0)-\mu_{i}f(y_{i}(T))(\hat{r}_{i}-\hat {d}_{i})\Psi(T)\Big{)}. \tag{28}\] It is obvious from (28) that every player \(i\) (i.e., every vehicle in the platoon) for all \(i\in\{1,\ldots,N\}\) requires the knowledge of \(f(y_{i}(T))\) for every possible terminal state vector \(y_{i}(T)\), which is too complex to acquire from the current expression. Therefore, control inputs \(\xi_{i}(t)\) and their associated state trajectories \(y_{i}(t)\) will not be available explicitly, and thus neither will the true Nash equilibrium \(u_{i}(t)\) and its associated state trajectories \(x_{i}(t)\). However, it is possible to obtain a simplified expression for \(\xi_{i}(t)\) and \(y_{i}(t)\) from (28) as follows. Assume that every player \(i\) utilizes \(y_{i}(T)=\mathrm{e}^{TA}y_{i}(0)\) to calculate \(f(y_{i}(T))\). Then we arrive at the terminal state estimation of \(\hat{y}_{i}(T)\) from (19). Substituting \(\hat{y}_{i}(T)\) into (26) and (27) we get the estimations of control inputs \(\hat{\xi}_{i}(t)\) in (21) and their associated state trajectories \(\hat{y}_{i}(t)\) in (23), respectively. Therefore, the estimations of the unique Nash equilibrium actions and their associated state trajectories are given by (20) and (22), respectively. The proposed estimated solution approach is incapable of dealing with collision avoidance since it is designed to implement the collision avoidance behavior only at the horizon time \(T\). To consider collision avoidance for all \(t\in[0,T]\), the PIs (17) must have the collision avoidance term inside the integration, which brings far more difficulty to designing an implementable solution. To implement the estimated Nash strategy design approach to include collision avoidance for \(t\in[0,T]\), we utilize the following solution \[\begin{split}&\hat{\xi}_{i}(t)=-B^{\top}\mathrm{e}^{(T-t)A^{\top}} \times\\ &\Big{(}\big{(}\omega_{i}-\mu_{i}f(\mathrm{e}^{tA}y_{i}(0)) \big{)}z_{i}(t)+\mu_{i}f(\mathrm{e}^{TA}y_{i}(0))(\hat{r}_{i}-\hat{d}_{i}) \Big{)}\end{split} \tag{29}\] and \[\begin{split}&\hat{y}_{i}(t)=\mathrm{e}^{tA}y_{i}(0)-\Psi(t) \times\\ &\Big{(}\big{(}\omega_{i}-\mu_{i}f(\mathrm{e}^{tA}y_{i}(0))\big{)} z_{i}(t)+\mu_{i}f(\mathrm{e}^{tA}y_{i}(0))(\hat{r}_{i}-\hat{d}_{i})\Big{)}.\end{split} \tag{30}\] Note that the solution above still has an open-loop information structure since it consists only of the initial state vector and time. As it is seen from its definition in (19), the vector \(z_{i}(t)\) is also a function of the initial state vector and time. ## V Simulation Results In this section, we provide simulation results to demonstrate the effectiveness of the proposed platoon control schemes in Section III and IV. Consider a homogeneous platoon of five vehicles, i.e., \(N=4\), of which the front vehicle is the leader and is not subject to control, and the rest are the followers with their control inputs to be designed. The inertial time-lag parameter is arbitrarily selected as \(\tau=0.5\). 
The initial states of the vehicles are given by \(x_{0}(0)=[23,2,0]^{\top}\), \(x_{1}(0)=[18,2.5,1]^{\top}\), \(x_{2}(0)=[11,3,1.5]^{\top}\), \(x_{3}(0)=[6,1.5,0.8]^{\top}\), \(x_{4}(0)=[1,2,1.2]^{\top}\). Suppose that vehicles in the desired platoon will be equally spaced by \(d_{1}=d_{2}=d_{3}=d_{4}=2\). Also, let \(r_{1}=r_{2}=r_{3}=r_{4}=1\), meaning that if vehicle \(i\in\{1,\ldots,4\}\) gets closer to its predecessor than \(1\), a collision occurs. In the PIs, \(\omega_{1}=6\), \(\omega_{2}=3\), \(\omega_{3}=8\), \(\omega_{4}=5\), \(\mu_{1}=12\), \(\mu_{2}=10\), \(\mu_{3}=1\), \(\mu_{4}=5\), and \(T=10\) for the time interval of the game. We first solve the platoon control problem without collision avoidance with the open-loop Nash strategy given by (4) and its associated state trajectory (7) in _Theorem 1_. Fig. 2 shows the following vehicles achieving the desired platoon at the horizon time \(T=10\). In addition, it also shows the longitudinal velocities and accelerations of the following vehicles reaching the lead vehicle's velocity and acceleration. However, a collision between the lead vehicle and vehicle 1 occurs approximately at \(t=5\). We then re-solve the platoon control problem, this time with collision avoidance, using the estimated Nash strategy solution (29) and (30). The results in Fig. 3 show the vehicles achieving essentially the same desired platoon at the horizon time \(T=10\) as in the previous problem. However, it follows from Fig. 3 that the collision between the lead vehicle and vehicle 1 is eliminated. Moreover, the vehicles achieve the desired platoon from approximately \(t=3\) onward, compared to the horizon time \(T=10\) in the previous problem. This demonstrates the effectiveness of the proposed estimated solution implementation (29) and (30) in coping with collision avoidance, acquiring the desired platoon early, and maintaining it. Note that the velocities of all follower vehicles should be equal to or greater than the leader's velocity (i.e., \(v_{i}(t)\geq v_{0}\) for all \(i\in\{1,\ldots,N\}\)) in order to avoid a collision risk [31]. While this requirement is not met in Fig. 2 for the platoon control problem without collision avoidance, it is fully satisfied in Fig. 3 for the platoon control problem with collision avoidance. A smaller velocity than the lead vehicle's velocity describes a braking maneuver for the following vehicle that creates a collision risk with its predecessor. Such maneuvers are seen for vehicles 3 and 4 in Fig. 2. Finally, the scalar function \(f(\mathrm{e}^{tA}y_{i}(0))\) has been plotted for \(t\in[0,T]\) in Fig. 4. This function reveals the collision risk for each following vehicle as a function of time and its initial state. It is seen that vehicles 1, 2, and 4 experience their peak collision risks at approximate times 4, 8, and 6, respectively, while vehicle 3 faces no collision risk the whole time. Fig. 4: Time histories of the scalar function \(f(\mathrm{e}^{tA}y_{i}(0))\). 
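For reference, the Theorem-1 trajectories for this example can be reconstructed in a few lines of Python. The sketch below is our own reconstruction using the parameters above (the quadrature scheme and printed time points are our choices, so values may differ slightly from the plotted figures):

```python
import numpy as np
from scipy.linalg import expm

# Reconstruction of the without-collision-avoidance trajectories of
# Theorem 1 with the stated parameters; our own sketch.
tau, T = 0.5, 10.0
A = np.array([[0, 1, 0], [0, 0, 1], [0, 0, -1 / tau]])
B = np.array([[0.0], [0.0], [1 / tau]])
omega = [6.0, 3.0, 8.0, 5.0]                 # omega_1..omega_4
x0 = [np.array([23, 2, 0.0]), np.array([18, 2.5, 1.0]),
      np.array([11, 3, 1.5]), np.array([6, 1.5, 0.8]),
      np.array([1, 2, 1.2])]
d_hat = np.array([2.0, 0, 0])                # equal desired spacing d_i = 2
y0 = [x0[i] - x0[i + 1] - d_hat for i in range(4)]   # y_i(0)

def Psi(t, n=400):                           # Eq. (6) by trapezoidal rule
    s = np.linspace(0, t, n)
    f = np.array([expm((t - u) * A) @ B @ B.T @ expm((t - u) * A).T for u in s])
    return np.trapz(f, s, axis=0)

PsiT = Psi(T)
def y(i, t):                                 # Eq. (8)
    corr = omega[i] * Psi(t) @ np.linalg.solve(np.eye(3) + omega[i] * PsiT,
                                               expm(T * A) @ y0[i])
    return expm(t * A) @ y0[i] - corr

# positions via Eq. (7); the leader moves at constant velocity v0 = 2
for t in (0.0, 5.0, 10.0):
    xs = [x0[0] + t * np.array([2.0, 0, 0])]
    for i in range(4):
        xs.append(xs[-1] - y(i, t) - d_hat)
    print(t, [round(float(v[0]), 2) for v in xs])
    # inspect the gap x_0 - x_1 near t = 5 (cf. the discussion of Fig. 2)
```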
The effectiveness of the proposed estimated Nash strategy design approach was observed in the simulation results. Future work will include extending the framework to more sophisticated platoon information topologies for connected environments, investigating the feedback Nash equilibrium under feedback-information differential games, and studying the string stability of the platoon.

Fig. 4: Time histories of the scalar function \(f(\mathrm{e}^{tA}y_{i}(0))\).

## Acknowledgment This work was supported by SGS, VSB - Technical University of Ostrava, Czech Republic, under grant No. SP2023/012 "Parallel processing of Big Data X".
2310.11436
Sadness, Anger, or Anxiety: Twitter Users' Emotional Responses to Toxicity in Public Conversations
Cyberbullying and online harassment have serious negative psychological and emotional consequences for the victims, such as decreased life satisfaction, suicidal ideation, self-harming behaviors, depression, anxiety, and others. Most of the prior works assessed people's emotional responses via questionnaires, while social media platforms contain data that could provide valuable insights into users' emotions in real online discussions. Therefore, this data-driven study investigates the effect of toxicity on Twitter users' emotions and other factors associated with expressing anger, anxiety, and sadness in terms of account identifiability, activity, conversation structure, and conversation topic. To achieve this goal, we identified toxic replies in the large dataset consisting of 79,799 random Twitter conversations and obtained the emotions expressed in these conversations. Then, we performed propensity score matching and analyzed causal associations between toxicity and users' emotions. In general, we found that users receiving toxic replies are more likely to express emotions of anger, sadness, and anxiety compared to users who did not receive toxic replies. Finally, analysis results indicate that the conversation topic and users' account characteristics are likely to affect their emotional responses to toxicity. Our findings provide a better understanding of toxic replies' consequences on users' emotional states, which can potentially lead to developing personalized moderation methods that will help users emotionally cope with toxicity on social media.
Ana Aleksandric, Hanani Pankaj, Gabriela Mustata Wilson, Shirin Nilizadeh
2023-10-17T17:46:16Z
http://arxiv.org/abs/2310.11436v1
# Sadness, Anger, or Anxiety: Twitter Users' Emotional Responses to Toxicity in Public Conversations ###### Abstract. Cyberbullying and online harassment have serious negative psychological and emotional consequences for the victims, such as decreased life satisfaction, suicidal ideation, self-harming behaviors, depression, anxiety, and others. Most of the prior works assessed people's emotional responses via questionnaires, while social media platforms contain data that could provide valuable insights into users' emotions in real online discussions. Therefore, this data-driven study investigates the effect of toxicity on Twitter users' emotions and other factors associated with expressing anger, anxiety, and sadness in terms of account identifiability, activity, conversation structure, and conversation topic. To achieve this goal, we identified toxic replies in the large dataset consisting of 79,799 random Twitter conversations and obtained the emotions expressed in these conversations. Then, we performed propensity score matching and analyzed causal associations between toxicity and users' emotions. In general, we found that users receiving toxic replies are more likely to express emotions of anger, sadness, and anxiety compared to users who did not receive toxic replies. Finally, analysis results indicate that the conversation topic and users' account characteristics are likely to affect their emotional responses to toxicity. Our findings provide a better understanding of toxic replies' consequences on users' emotional states, which can potentially lead to developing personalized moderation methods that will help users emotionally cope with toxicity on social media. social media, toxicity, emotional responses

We performed propensity score matching to find users with similar account characteristics. As a result, we obtained a balanced dataset containing two groups of similar users: a _treatment_ group and a _control_ group, where each pair of users has a similar propensity (probability) of receiving toxic replies, isolating the effect of toxicity. Finally, we performed appropriate statistical tests to find a causal association between receiving toxicity and users' emotions. Note that causal associations describe relationships between variables where a causal link is suggested, but they do not establish a definitive causal inference. The following hypotheses were formulated to get a better understanding of the _causal associations_ among our dependent and independent variables: **H1:** Users receiving toxic replies are more likely to express anxiety compared to users who did not receive any toxic replies.
**H2:** Users receiving toxic replies are more likely to express anger compared to users who did not receive any toxic replies. **H3:** Users receiving toxic replies are more likely to express sadness compared to users who did not receive any toxic replies. **H4:** A larger amount of toxicity will likely increase users' anxiety. **H5:** A larger amount of toxicity will likely increase users' anger. **H6:** A larger amount of toxicity will likely increase users' sadness. This observational study presents multiple relevant findings. We found that users who receive toxic replies are more likely to express all three emotions compared to users who do not receive toxic replies. Furthermore, our results indicate that the amount of toxicity does not play a significant role in changing the anger or anxiety of users who already received at least one toxic reply, while higher toxicity leads users to express more sadness in toxic conversations. Moreover, expressing emotions before the first toxic reply is likely to amplify such emotions in the rest of the conversation. Finally, conversation topics are important factors that contribute to the emotional structure of the conversation. To the best of our knowledge, this is the first large-scale data-driven study that examined the impact of toxicity on users' emotions on social media, with a particular focus on Twitter. The findings in our study can help develop models that predict emotional responses, which can be used to provide interventions that mitigate the negative emotional impact. ## 2. Related Work There are different types of antisocial online behaviors, such as toxicity (Zhou et al., 2017), racist attacks against minorities (Zhou et al., 2017; Li et al., 2017; Li et al., 2018; Li et al., 2019), misogynistic hatred (Zhou et al., 2017; Li et al., 2018; Li et al., 2019), toxic masculinity (Zhou et al., 2017), and others. Even though many recent studies detect antisocial behavior after such behavior has occurred (Zhou et al., 2017; Li et al., 2019), there are some studies using certain features to predict whether a conversation will develop in an antisocial manner (Li et al., 2017; Li et al., 2018; Li et al., 2018; Li et al., 2019). In this study, the focus is on the literature examining the psychological and emotional impact of antisocial behavior on the victims. **The Psychological Consequences of Online Harassment.** Previous literature shows that cyberbullying has adverse consequences for mental health, particularly for adolescents (Li et al., 2017). Moreover, victims are likely to engage in self-harm and suicide attempts (Li et al., 2017; Li et al., 2018; Li et al., 2019; Li et al., 2019; Li et al., 2019; Li et al., 2019) as well as suffer from psychological distress (Li et al., 2017; Li et al., 2018; Li et al., 2019; Li et al., 2019; Li et al., 2019), depression, anxiety, and lower self-esteem (Li et al., 2019; Li et al., 2019; Li et al., 2019). Also, females tend to report a higher prevalence of cyberbullying assaults, and they are more likely to report distress and suicidal ideation compared to males (Li et al., 2019). Other literature found a correlation between cyberbullying victimization and substance use (Li et al., 2019; Li et al., 2019), while victims of online harassment might also respond with acceptance and self-blame (Li et al., 2019; Li et al., 2019).
**The Emotional Consequences of Cyberbullying.** Prior works showed that emotional harm is one of the victims' common experiences after online abuse and cyberbullying (Li et al., 2019; Li et al., 2019; Li et al., 2019). Moreover, the literature found that problems with emotion regulation increase the likelihood of individuals cyberbullying others or becoming the victim of cyberbullying (Li et al., 2019; Li et al., 2019; Li et al., 2019). Furthermore, perpetrators and victims show different sets of emotions, where victims are likely to express passive emotions such as sadness, humiliation, and embarrassment (Li et al., 2019). Also, the emotion of anger received attention from researchers investigating cyberbullying and cybervictimisation (Li et al., 2017), where anger has been shown as the most common reaction to cyberbullying (Li et al., 2019; Li et al., 2019; Li et al., 2019), as well as sadness (Li et al., 2019). However, there are no prior data-driven studies that aimed to evaluate the impact of online attacks on individual emotions in the online setting. To the best of our knowledge, this is the first observational study analyzing social media data to examine the effect of toxicity on users' feelings of anger, anxiety, and sadness. The goal of the study is to analyze how users' emotions change after a toxic attack occurs, which could potentially lead to developing strategies to help users mitigate emotional reactivity (Li et al., 2019). ## 3. Data Collection The study framework is shown in Figure 1. The data collection process involved multiple steps, which are described in more detail in this section. First, we collected a Twitter dataset and detected topics and emotions, followed by the process of creating the ground-truth dataset used to choose the most accurate tool for detecting toxic replies. **Dataset**: The dataset analyzed originates from a recent study investigating users' reactions in online toxic conversations (Li et al., 2017). The data was collected using the Twitter API (Li et al., 2019) to obtain a daily random sample of tweets from Aug 14 - Sep 28, 2021. The dataset includes both main tweets and their replies, as well as their toxicity scores obtained by Google's Perspective API (Li et al., 2019). **Reply Trees**: Similarly to previous studies (Li et al., 2017; Li et al., 2019), conversations were represented as _reply trees_, where a tweet is a _child node_ of another tweet when it is a reply to that tweet. The _root_ of a reply tree is the initial tweet that receives replies. The authors of the root tweets are referred to as _root authors_. _Direct replies_ are located in the first layer of reply trees (replies to the root tweet), while _nested replies_ are located in layers of reply trees other than the first, i.e., replies to replies. Each _reply tree_ has the following traits: _Depth_, referring to the depth of the conversation's deepest node (the longest path from the root tweet to the last reply); _Width_, referring to the maximum number of tweets at any tree level. **Discovering Conversation Topics**: The dataset used in this study contains a random sample of Twitter conversations, which can potentially include a large number of topics. However, there might be some topics that provoke more emotional responses from the users involved in the discussion.
For example, users might get angrier if they receive a toxic reply concerning their political views, or they might get sad if the toxicity is directed at their health-related decisions. Thus, a topic classification model [7] used in previous studies [18; 35; 49; 79] was utilized to determine the main topic of the conversation by passing the text of the main tweet as the input. Note that this model has been fine-tuned for multi-label classification on 11,267 tweets, yielding 19 discussion topics such as _news & social concern_, _diaries & daily life_, _business & entrepreneurs_, and others. Scores obtained per topic are in the range from 0 to 1, where a higher score suggests that the text is more related to that topic. **Detecting Emotions**: We used LIWC-22 [14] to detect the emotions of each tweet in the dataset. This tool analyzes text to provide insights into the person's emotions, social and cognitive processes, etc. It has been widely used for psychological analysis of users online [21; 58; 52], and it has shown decent performance for detecting emotions in verbal expression [43]. It treats each tweet individually and provides scores for each post in the range 0-99, representing the percentage of words in the text related to a specific attribute. **Creating the Ground-truth Dataset**: The next step of data collection involved detecting toxic replies in the dataset. We first obtained tweets and their corresponding toxicity scores from the previous study [2]. However, it was not clear which threshold should be used to detect toxic replies in the dataset. To evaluate the scores, we manually extracted a random sample of 50 toxic conversations that contain at least one reply with a _Severe toxicity_ score higher than 0.5, and 50 with no reply with a _Severe toxicity_ score higher than 0.5. The total number of tweets included in the random sample was 943 (843 replies). Then, four annotators manually labeled each tweet as 1 if toxic and 0 if not, looking at the whole conversation to capture its context. Two labelers annotated 50 (25 toxic and 25 non-toxic) conversations, and the other two labelers annotated the rest. The computed Cohen's Kappa score [47] was 0.5, with a raw agreement of 93.5%. Finally, the ground-truth dataset consisted of 64 (6.8%) toxic tweets belonging to 26 conversations and 879 (93.2%) non-toxic tweets. **Detecting Toxic Replies**: As toxicity detection is a popular topic in the natural language processing (NLP) literature, many tools have been developed to accomplish the task with high accuracy. Recently, ChatGPT became a tool used for different NLP tasks, such as detecting offensive language and hate speech [40; 50], stance detection [88], detoxification [78], etc. On the other hand, Google's Perspective API has been widely used in the previous literature for toxicity detection on social media [5; 19; 61]. This API processes the given text input and provides output scores in the range from 0 to 1, such as _Severe toxicity_, _Toxicity_, _Profanity_, _Sexually explicit_, and others, where a score closer to 1 means a higher severity for a specific attribute. The evaluation of the Perspective API received a lot of attention in previous studies. There are some works classifying texts with a score greater than 0.5 as toxic [31; 60; 80] and others using a stricter threshold of 0.8 [39], while it is also possible to use both [73].
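As a concrete illustration of this thresholding step, the following minimal sketch sweeps candidate thresholds over precomputed toxicity scores and picks the most accurate one against manual labels; the array names and toy values are assumptions, not the study's data.

```python
# Sketch: sweep thresholds 0.1..0.9 over precomputed toxicity scores and
# report the threshold with the highest accuracy against manual labels.
# `scores` and `labels` are illustrative stand-ins for the real data.
import numpy as np

def best_threshold(scores, labels):
    best_t, best_acc = None, 0.0
    for t in np.round(np.arange(0.1, 1.0, 0.1), 1):
        preds = (scores >= t).astype(int)      # classify as toxic at/above t
        acc = float((preds == labels).mean())  # accuracy vs. ground truth
        if acc > best_acc:
            best_t, best_acc = float(t), acc
    return best_t, best_acc

scores = np.array([0.05, 0.72, 0.61, 0.10, 0.90, 0.30])  # toy API scores
labels = np.array([0, 1, 1, 0, 1, 0])                    # toy manual labels
print(best_threshold(scores, labels))
```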
However, it is not clear whether the proposed approaches perform better than the newer tools that emerged in the meantime, such as ChatGPT. Thus, we conducted an experiment to compare the performance of Google's Perspective API and the OpenAI API in the toxicity detection task. **Obtaining New Toxicity Scores**: Even though toxicity scores had been provided by the previous study [2], we still re-ran the Perspective API on the dataset, as a new version of the API had been released [48]. The goal was to find a threshold that reaches the highest accuracy compared to the manually labeled sample. For that, we used the _Severe toxicity_ and _Toxicity_ attributes. As the scores provided by the API are in the range from 0 to 1, we increased the testing value by 0.1 in each iteration to find the most accurate threshold for classification. **GPT Labels**: We used the OpenAI API _gpt-3.5-turbo-16k_ model to obtain the binary toxicity labels. We passed the following prompt to the API with each reply individually: _"Given the following post, determine if it is contextually toxic. Respond in an array format [toxic_value, explanation], where the first element is either '1' for yes or '0' for no, and the second element is the explanation for the value."_ However, there are two different parameters that could be changed when passing prompts to the OpenAI API: _temperature_ and _top_p_. According to the endpoint documentation, it is not suggested to change both parameters at the same time [63]. Thus, as the _temperature_ is defined as the randomness of the output, we decided to test how different temperatures affect the accuracy of the classification. Once again, we increased the temperature by 0.1 in each iteration to determine which temperature provided the best results. The obtained labels were compared with the manually labeled sample. **Evaluating the performance**: Finally, we were able to compare the labels obtained from the Perspective API and OpenAI to the manually labeled sample. Based on the results presented in Table 1, we found that the Toxicity attribute outperforms both Severe Toxicity and gpt-3.5-turbo-16k, with the highest accuracy of 0.95 (95%) at the threshold of 0.6. Therefore, we will use this threshold to detect toxic replies in the dataset. Any reply with a _Toxicity_ score equal to or higher than 0.6 is considered _toxic_.

Figure 1: Study framework.

In the dataset, there are 20,544 (3.9%) toxic tweets belonging to 13,172 (16.5%) conversations, where 11,784 (57.4%) of the toxic tweets were posted by users other than the root author. On the other hand, there are 507,497 tweets belonging to 79,580 conversations that are not considered toxic. ## 4. Independent, Dependent, and Control Variables Causal inference is used together with multivariate regression analysis to find causal associations between different levels of toxicity and users' emotions. In more detail, we examine how toxicity impacts users' emotions and what other factors contribute to triggering anger, anxiety, and sadness in Twitter users. **Dependent Variables** included in the analysis are users' emotions, for which we describe two sets of variables: the average emotions of root authors over whole conversations, and the average emotions of root authors after the first toxic reply occurs in a toxic conversation. The first set of variables was included as the goal is to compare the emotions expressed throughout the conversation by root authors who received toxic replies and root authors who did not.
The second set aims to clarify how emotions change after receiving the first toxic reply and whether the amount of toxicity plays a significant role in emotional reactions. The following variables were used in the models: (1) _anxiety_: a numeric variable representing the average root author's anxiety in a conversation. (2) _anger_: a numeric variable representing the average root author's anger in a conversation. (3) _sadness_: a numeric variable representing the average root author's sadness in a conversation. (4) _anxiety_after_: a numeric variable representing the average root author's anxiety after a toxic reply occurs in a conversation. (5) _anger_after_: a numeric variable representing the average root author's anger after a toxic reply occurs in a conversation. (6) _sadness_after_: a numeric variable representing the average root author's sadness after a toxic reply occurs in a conversation. Note that all the dependent variables were rounded up to the closest integer for analysis purposes. In addition, we are able to calculate the emotions after the toxic reply only in the conversations containing toxic replies. **Independent Variables** consist of computed percentages of toxic replies in the conversations. Note that we consider the location of the toxicity as well as the level of toxicity in the conversation reply tree. Thus, we considered the following independent variables: (1) _direct_toxicity_, a numeric variable representing the ratio of toxic direct replies to the total number of direct replies in the conversation. (2) _nested_toxicity_, a numeric variable representing the ratio of toxic nested replies to the total number of nested replies in the conversation. **Control Variables** While evaluating the effect of toxicity on Twitter users' emotions, we need to include certain variables as confounding factors. In more detail, we included features related to conversation structure, the emotions of the root tweet, the topic of the root tweet, and users' _activity_, _visibility_, and _identifiability_. **Users' online activity** contains _num_friends_, _num_tweets_ and _account_age_ (in years) as numeric variables. Such variables can affect how users emotionally respond to toxic content. For example, users who post a lot might not be so emotionally affected by toxic replies, while users whose accounts are younger might care more about their reputation and express anger, sadness, or anxiety more than older accounts. **Online visibility** contains _num_followers_ and _listed_counts_ as numeric variables and _verified_ as a binary variable. Previous literature (Han et al., 2017) found that there is a relationship between online visibility and receiving hate. Therefore, there might be a correlation between online visibility and how users emotionally respond to that hate. For example, verified accounts might not emotionally react to toxic content, but accounts that have fewer followers might express anger, anxiety, or sadness more in these situations. **Identifiability** consists of profile characteristics that help identify a user, such as profile _description_length_ (in characters) as a numeric variable, and _has_URL_ and _has_location_ as binary variables, indicating whether the profile contains URLs to other user-related websites and whether the user provided a location on their profile. A variable _has_image_ was also collected, but all users in our dataset had images provided.
Previous literature suggested that anonymous accounts show more abusive behavior compared to other, identifiable accounts (Han et al., 2017; Wang et al., 2018; Wang et al., 2018). Therefore, we believe that in this study, it is possible that anonymous accounts might show less severe emotions of anger, anxiety, and sadness when receiving toxic content compared to more identifiable users. **Emotions before a toxic reply**: Furthermore, we believe that if the user was already expressing a certain emotion before the toxic reply, it is important to acknowledge whether their emotions changed or not. For example, if the user is already angry and receives a toxic comment, the user would either become angrier or their emotions might remain the same. Therefore, we included the following control variables in the analysis to find the impact of toxicity on users' emotions: (1) _anger_before_: a numeric variable representing the average root author's anger before a toxic reply occurs in a conversation; (2) _anxiety_before_: a numeric variable representing the average root author's anxiety before a toxic reply occurs in a conversation; and (3) _sadness_before_: a numeric variable representing the average root author's sadness before a toxic reply occurs in a conversation. Once again, such variables are only computed for conversations with toxic replies. Also, they were computed by chronologically ordering the tweets within the conversation and calculating averages before the toxic reply occurred. **Conversation structure** consists of _width_ and _depth_ as numeric variables. We believe that users' emotions might be affected by how big the conversation is. For example, users might not feel the same if they received a couple of toxic comments in a very large online discussion, while they might express emotions more in smaller conversations. We also included **root_toxicity**, indicating whether the root tweet is toxic or not. Root authors who share already toxic tweets might receive more toxicity and might express anger more than others.

| Score threshold | Severe Toxicity (Perspective API) | Toxicity (Perspective API) | Temperature | gpt-3.5-turbo-16k (OpenAI API) |
| --- | --- | --- | --- | --- |
| 0.1 | 0.92 | 0.58 | 0.1 | 0.09 |
| 0.2 | 0.94 | 0.74 | 0.2 | 0.89 |
| 0.3 | 0.94 | 0.84 | 0.3 | 0.89 |
| 0.4 | 0.94 | 0.89 | 0.4 | 0.89 |
| 0.5 | 0.93 | 0.93 | 0.5 | 0.89 |
| 0.6 | 0.93 | 0.95 | 0.6 | 0.88 |
| 0.7 | 0.93 | 0.94 | 0.7 | 0.85 |
| 0.8 | 0.93 | 0.94 | 0.8 | 0.88 |
| 0.9 | 0.93 | 0.93 | 0.9 | 0.87 |

Table 1. Comparing the accuracy of each model and the corresponding thresholds.

Finally, all 19 conversation topic variables were also included as confounding factors, as the main topic of the conversation may affect how users emotionally respond to toxic content. For example, political or daily life topics might attract more emotional responses compared to other topics. ## 5. Descriptive Statistics of Variables This section provides detailed descriptive statistics on the emotions and topics expressed in the conversations. We compare the prevalence of emotions and topics discussed in conversations with and without toxic replies. Table 2 includes the minimum, median, mean, and maximum values for topics and average emotions of the root authors in conversations with and without toxic replies.
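Per-group summaries of this kind can be produced in a few lines; the sketch below assumes a pandas DataFrame with one row per conversation and illustrative column names, not the study's actual data.

```python
# Sketch: Table 2-style per-group summary statistics.  The DataFrame and
# its column names (`has_toxic_reply`, emotion columns) are illustrative.
import pandas as pd

df = pd.DataFrame({
    "has_toxic_reply": [True, True, False, False, False],
    "anxiety": [0.0, 0.5, 0.0, 0.2, 0.0],
    "anger":   [0.4, 0.0, 0.1, 0.0, 0.0],
    "sadness": [0.0, 0.3, 0.6, 0.0, 0.9],
})

summary = (df.groupby("has_toxic_reply")[["anxiety", "anger", "sadness"]]
             .agg(["min", "median", "mean", "max"]))
print(summary)
```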
Interestingly, we observed that the mean anxiety and anger were higher in conversations with toxic replies, at 0.15 vs. 0.13 and 0.28 vs. 0.18, respectively. On the other hand, the mean sadness expressed by the root authors is higher in conversations without toxic replies (0.52 vs. 0.48), which can potentially indicate that users who receive toxic replies emotionally respond in an anxious or angry manner. Note that the mean direct_toxicity is substantially higher than the mean nested_toxicity (0.4 vs. 0.08). Furthermore, the most prevalent topics discussed in our conversations are _diaries & daily life_, _news & social concern_, and _sports_ in both datasets, suggesting that these topics are the most widely discussed on the platform in general. We believe that certain topics from the list might naturally trigger more toxicity compared to others. Thus, we created the plot in Figure 2 to examine the relationship between the percentage of toxic replies and conversation topics. The topics that attracted the most toxic replies were news & social concern, fitness & health, and diaries & daily life. On the other hand, the topics that received the fewest toxic replies were business & entrepreneurs and science & technology, with only 1.41% and 1.52% of replies being toxic, respectively. This might indicate that social media users tend to attack others based on their opinions about social and everyday life, while discussions about professional development do not tend to develop in a toxic manner. ## 6. Analysis and Results This section describes the statistical analysis performed to test the formulated hypotheses. The first step of the analysis involved performing propensity score matching, a technique employed for balancing two datasets (Sandes et al., 2017), so that the conversation dataset without toxic replies includes the same number of conversations as the dataset with toxic replies. In more detail, the main purpose of propensity score matching is to balance the treatment and control groups by choosing observations with a similar propensity of receiving the treatment (in our case, receiving toxicity), in order to provide insights about the causal impact of the treatment (Sandes et al., 2017). The scores were calculated by using the users' characteristics as independent variables in a logistic regression model, where the dependent variable was a binary variable indicating whether the conversation received toxic replies or not. Therefore, the balanced dataset included the same number of conversations with and without toxic replies (7,205 each), for a total of 14,410 conversations. The first part of the analysis included three multivariate Poisson regression models in which we investigated causal associations between root authors' emotions and receiving toxicity (testing H1-H3). Therefore, the dependent variables in these models were the root authors' average emotions expressed in conversations (anger, anxiety, and sadness), while the independent variables were direct_toxicity and nested_toxicity. This analysis was performed on the balanced dataset described above in order to compare the emotions of root authors of conversations with and without toxic replies. The Poisson model was the most suitable for our dataset, as all the dependent variables were heavily right-skewed count variables. The second part involved only the analysis of toxic conversations, where conversations in which the root author did not comment after the first toxic reply were discarded.
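A minimal sketch of the matching step described above is given below: a logistic regression estimates each conversation's propensity of receiving toxic replies from account covariates, and treated units are greedily paired with the nearest-score controls. The covariates and data here are synthetic placeholders; the study's actual covariates are described in Section 4.

```python
# Sketch: propensity score matching with 1:1 nearest-neighbour pairing
# without replacement.  Covariates X and treatment labels are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))            # stand-in account covariates
treated = rng.integers(0, 2, size=200)   # 1 = received toxic replies

# Propensity = P(treated | covariates) from a logistic regression.
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

t_idx = np.where(treated == 1)[0]
c_idx = list(np.where(treated == 0)[0])
pairs = []
for i in t_idx:
    if not c_idx:                        # stop when controls run out
        break
    j = min(c_idx, key=lambda k: abs(ps[k] - ps[i]))  # nearest control
    pairs.append((i, j))
    c_idx.remove(j)                      # match without replacement
print(len(pairs), "matched pairs")
```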
Therefore, the number of conversations with toxic replies in this analysis was 3,408. The goal of this analysis is to find out how the amount of toxicity affects the emotions of users who received toxic replies, and whether the emotions they expressed before the toxic attack occurred play a significant role in their emotional responses (testing H4-H6). The dependent variables of the multivariate Poisson regression models were the average emotions after the first toxic comment took place (anxiety_after, anger_after, and sadness_after), where we also included the average emotions before the first toxic reply as control variables, besides the other variables discussed in Section 4. Furthermore, to address possible bias due to multiple hypothesis testing, we use the Bonferroni correction (Bohringer et al., 2017). Therefore, we divided the significance level of 0.05 by the total number of hypotheses, yielding a value of 0.008. Thus, \(p\)-values lower than 0.008 signify statistical significance in the analysis. Finally, even though propensity score matching has been used in the literature to isolate the effect of the treatment, we cannot claim with certainty that we identify causal inferences, as there may be other confounding factors that also impact users' emotions. Therefore, the results presented are not causations; they represent causal associations. **H1: Users receiving toxic replies are more likely to express anxiety compared to users who did not receive any toxic replies**. The results from the regression model are presented in Table 3 (M1). The model suggests that there is a positive, statistically significant relationship between direct_toxicity and the average anxiety of the root author of the conversation (\(p<0.001\)). In other words, a larger percentage of toxic replies among the direct responses to the main tweet is likely to increase the average anxiety expressed by the main user throughout the conversation. On the other hand, the relationship between nested_toxicity and anxiety is statistically significant and negative (\(p<0.0001\)), suggesting that a higher percentage of toxic nested replies is associated with lower anxiety of the main user. The reason may be that users do not feel personally attacked when a larger toxic thread occurs, while toxic replies directed at their main posts may feel more personal and thus make them more anxious. Moreover, nested replies might be less visible to the root authors compared to direct replies to their tweets. Furthermore, root authors who started the conversation with a toxic tweet are less likely to express anxiety compared to other users (\(p<0.001\)). Finally, users who provide longer profile descriptions (\(p<0.0001\)) and locations (\(p<0.001\)) tend to express more anxiety, indicating that more identifiable accounts might get more anxious about their content compared to more anonymous users. _Therefore, the results of M1 partially support H1, suggesting that root authors who receive more toxic direct replies are more likely to express anxiety_. Once again, it is important to note that the results obtained represent causal associations, not causations. **H2: Users receiving toxic replies are more likely to express anger compared to users who did not receive any toxic replies**. Model results presented in Table 3 (M2) demonstrate a positive, statistically significant correlation of both direct_toxicity and nested_toxicity with anger (\(p<0.0001\)).
In more detail, root authors receiving either toxic direct or nested replies tend to express more anger. Moreover, root authors who initiated a conversation with a toxic tweet are likely to express more anger compared to users who started a conversation with a non-toxic tweet (\(p<0.0001\)). Interestingly, the model suggests that verified accounts are more likely to convey anger compared to other users (\(p<0.0001\)). It could potentially mean that verified users care about the replies shared on their posts and get angry once their point of view is under attack, or that they feel more comfortable expressing their anger without worrying about its consequences. _In summary, M2 supports our hypothesis that users receiving toxic replies are more likely to express anger compared to other Twitter users_. **H3: Users receiving toxic replies are more likely to express sadness compared to users who did not receive any toxic replies**. Our findings (Table 3, M3) illustrate that the association between direct_toxicity and sadness is positive and statistically significant (\(p<0.001\)), meaning that users who receive direct toxic replies on their main post are likely to get sadder compared to users who did not receive toxic replies. In addition, users who are members of more public lists and users with younger accounts tend to express more sadness (\(p<0.0001\)), which can potentially be due to such accounts being more concerned about the opinions of their friends and followers and, therefore, expressing sadness when under attack. However, verified users tend to convey less sadness than others (\(p<0.0001\)). As shown in M2, verified accounts tend to communicate anger rather than sadness, which can potentially indicate that such users insist on their beliefs on social media. _Once again, our results partially support H3, suggesting that authors who receive toxic direct replies are more likely to express sadness compared to users who did not receive toxic replies_. **H4: A larger amount of toxicity will likely increase users' anxiety**. According to the results presented in Table 3 (M4), there exists a negative, significant relationship between nested_toxicity and anxiety_after (\(p<0.008\)), while the association between direct_toxicity and anxiety_after does not show statistical significance. Thus, we can conclude that even though users who receive toxic replies are likely to express more anxiety compared to users who did not receive toxic replies, the amount of toxicity itself does not increase the anxiety in the first group of users, _rejecting H4_. However, we observe that users who were already feeling anxious before the first toxic reply took place tend to express more anxiety after, as the correlation between anxiety_before and anxiety_after is positive and significant (\(p<0.0001\)).
\begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline & \multicolumn{4}{c}{Conversations with Toxic Replies} & \multicolumn{4}{c}{Conversations without Toxic Replies} \\ \hline Characteristics & Min & Median & Mean & Max & Min & Median & Mean & Max \\ \hline anxiety & 0 & 0 & 0.15 & 50 & 0 & 0 & 0.13 & 100 \\ sadness & 0 & 0 & 0.48 & 40 & 0 & 0 & 0.52 & 100 \\ anger & 0 & 0 & 0.28 & 50 & 0 & 0 & 0.18 & 100 \\ anxiety\_before & 0 & 0 & 0.15 & 50 & NA & NA & NA & NA \\ sadness\_before & 0 & 0 & 0.48 & 40 & NA & NA & NA & NA \\ anger\_before & 0 & 0 & 0.28 & 50 & NA & NA & NA & NA \\ anxiety\_after & 0 & 0 & 0.15 & 50 & NA & NA & NA & NA \\ sadness\_after & 0 & 0 & 0.48 & 40 & NA & NA & NA & NA \\ anger\_after & 0 & 0 & 0.28 & 50 & NA & NA & NA & NA \\ \hline direct\_toxicity & 0 & 0.25 & 0.4 & 1 & NA & NA & NA & NA \\ nested\_toxicity & 0 & 0 & 0.08 & 1 & NA & NA & NA & NA \\ \hline arts \& culture & 0.001 & 0.02 & 0.05 & 0.88 & 0.001 & 0.03 & 0.06 & 0.92 \\ business \& entrepreneurs & 0.001 & 0.01 & 0.04 & 0.97 & 0 & 0.01 & 0.05 & 0.98 \\ celebrity \& pop culture & 0.003 & & 0.12 & 0.99 & 0 & 0.03 & 0.12 & 0.99 \\ diaries \& daily life & 0.005 & 0.29 & 0.37 & 0.98 & 0 & 0.3 & 0.38 & 0.98 \\ family & 0.001 & 0.01 & 0.03 & 0.92 & 0 & 0.01 & 0.04 & 0.95 \\ fashion \& style & 0.0005 & 0.005 & 0.03 & 0.98 & 0 & 0.01 & 0.03 & 0.98 \\ film \& video & 0.002 & 0.02 & 0.1 & 0.99 & 0 & 0.02 & 0.1 & 0.99 \\ fitness \& health & 0.001 & 0.01 & 0.03 & 0.98 & 0 & 0.01 & 0.03 & 0.98 \\ food \& dining & 0.0004 & 0.004 & 0.04 & 0.97 & 0 & 0.004 & 0.04 & 0.97 \\ gaming & 0.001 & 0.01 & 0.04 & 0.95 & 0 & 0.01 & 0.03 & 0.96 \\ learning \& educational & 0.001 & 0.01 & 0.03 & 0.92 & 0 & 0.01 & 0.04 & 0.94 \\ music & 0.001 & 0.01 & 0.07 & 0.99 & 0 & 0.01 & 0.08 & 0.99 \\ news \& social concern & 0.001 & 0.07 & 0.25 & 0.99 & 0 & 0.05 & 0.18 & 0.99 \\ other hobbies & 0.003 & 0.05 & 0.09 & 0.8 & 0 & 0.05 & 0.1 & 0.84 \\ relationships & 0.001 & 0.02 & 0.07 & 0.9 & 0 & 0.02 & 0.09 & 0.93 \\ science \& technology & 0.001 & 0.01 & 0.03 & 0.96 & 0 & 0.01 & 0.03 & 0.96 \\ sports & 0.0004 & 0.01 & 0.13 & 0.99 & 0 & 0.01 & 0.13 & 0.99 \\ travel \& adventure & 0.001 & 0.01 & 0.02 & 0.91 & 0 & 0.01 & 0.03 & 0.93 \\ youth \& student life & 0.001 & 0.005 & 0.02 & 0.91 & 0 & 0.005 & 0.02 & 0.91 \\ \hline \# conversations & \multicolumn{4}{c}{7,205} & \multicolumn{4}{c}{72,594} \\ \hline \hline \end{tabular} \end{table} Table 2. Characteristics of conversations in our dataset. **H5: A larger amount of toxicity will likely increase users' anger.** As demonstrated in Table 3 (M5), the relationship between the toxicity variables and users' anger after the first toxic reply is not statistically significant (\(p>0.008\)), _rejecting H5_. We infer that users receiving toxic replies are likely to express more anger compared to others; however, the amount of toxicity they receive does not affect their anger. As in testing H4, we noticed that the association between the average anger of users before the first toxic reply and the average anger after the first toxic reply is positive and significant (\(p<0.0001\)), meaning that users who were already angry are likely to express this emotion more after receiving toxicity. **H6: A larger amount of toxicity will likely increase users' sadness.** As shown in Table 3 (M6), there is a statistically significant positive relationship between direct_toxicity and the sadness of users after the first toxic reply (\(p<0.0001\)).
In other words, if users receive more toxic replies on their main posts, they tend to express more sadness. Also, the model reveals that users showing more sadness before the first toxic reply tend to get even sadder after the first toxic reply occurs (\(p<0.0001\)). Such findings signify that the amount of toxicity is likely to impact the amount of sadness users convey, _partially supporting H6_. **Examining the impact of conversation topics on users' emotions.** Here, we compare the impact of the specific topics that emerged in our models as significant contributors to users' emotions in conversations. For example, in models M1, M2, and M3, we found that the conversation topic diaries_daily_life is significantly associated with elevated anxiety, anger, and sadness, indicating that users might get emotional when discussing daily life matters, which can be more personal. On the other hand, in the same models, food_dining shows a negative correlation with all three emotions, signifying that such discussions do not trigger users' emotions. Interestingly, we found that the topic of gaming increases anger while decreasing anxiety and sadness, meaning that users get angry rather than anxious or sad, which aligns with existing literature stating that aggression is perceived as more normal in online gaming than in an offline setting (Sadness et al., 2016). Some of the other topics, such as relationships, science_technology, other_hobbies, travel_adventure, and business_entrepreneurs, were also found likely to decrease all of the emotions discussed. Additional research is required to understand the reasoning behind this phenomenon. ## 7. Discussion This study examines the emotional responses of users to toxic replies they receive on their tweets. For that, we analyzed a large dataset consisting of 79,799 conversations, of which 7,205 conversations contained at least one toxic reply. Then, we detected toxic replies, emotions, and topics in these conversations and performed causal association analysis by leveraging propensity score matching to balance the two datasets. Our results contribute to the general understanding of the way toxicity impacts users' emotions. For example, we showed that users who received toxic replies on their tweets are more likely to express anger, anxiety, and sadness throughout the conversation compared to users who did not receive any toxic comments. However, problematic emotion regulation is associated with a higher likelihood of people becoming victims or cyberbullying others (Beng et al., 2016; Wang et al., 2017; Wang et al., 2017).

Figure 2. Percentage of replies being toxic per topic.

Furthermore, our results align with existing literature stating that victims of cyberbullying express emotions of anger (Han et al., 2016; Krawczyk et al., 2017; Krawczyk et al., 2018), sadness (Krawczyk et al., 2018; Krawczyk et al., 2018), and anxiety (Krawczyk et al., 2018; Krawczyk et al., 2018; Krawczyk et al., 2018). This way, we demonstrate that there is a need for more research to dive deeper into users' reactions to toxic content and establish a framework for the implementation of advanced moderation techniques to help users emotionally cope with toxicity. In addition, this is the first observational data-driven study investigating the way users' emotions change after receiving a toxic reply, where we showed that sadness after the first toxic reply is likely to increase as the amount of toxicity grows.
Moreover, we demonstrated that users who already expressed emotions of anger, sadness, and anxiety before receiving toxic replies might be more vulnerable to toxic attacks, as their emotions are likely to increase after the toxic reply occurs. At the same time, certain findings raise concerns, as previous studies showed that not coping successfully with anger can potentially lead to further cyberbullying behavior (Krawczyk et al., 2018). Despite the relevant findings presented in this study, there are obvious limitations that have to be mentioned. Firstly, the tools used for detecting topics, toxic replies, and emotions are not perfect and could potentially contain biases in their classifications. Also, the dataset used in the study contains posts originating from unique users, and we therefore believe that the nature of the data prevents us from analyzing the long-term emotional consequences of receiving toxicity. The study results thus indicate a causal association between receiving toxicity and expressing emotions of anger, anxiety, and sadness; however, further research is needed to establish a definitive causal relationship between these variables. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{6}{c}{_Dependent variable:_} \\ \cline{2-7} & anxiety & anger & sadness & anxiety\_after & anger\_after & sadness\_after \\ & M1 & M2 & M3 & M4 & M5 & M6 \\ \hline direct\_toxicity & \(0.227^{**}\) (\(0.064\)) & \(0.576^{**}\) (\(0.047\)) & \(0.114^{**}\) (\(0.033\)) & \(-0.096\) (\(0.159\)) & \(-0.001\) (\(0.102\)) & \(0.535^{***}\) (\(0.056\)) \\ nested\_toxicity & \(-1.136^{***}\) (\(0.254\)) & \(0.702^{**}\) (\(0.121\)) & \(-0.126\) (\(0.102\)) & \(-1.613^{*}\) (\(0.568\)) & \(0.685\) (\(0.268\)) & \(-0.450\) (\(0.177\)) \\ width & \(-0.006^{*}\) (\(0.002\)) & \(-0.006^{**}\) (\(0.002\)) & \(-0.016^{***}\) (\(0.002\)) & \(-0.012\) (\(0.006\)) & \(-0.025^{***}\) (\(0.006\)) & \(-0.002\) (\(0.002\)) \\ depth & \(-0.004\) (\(0.006\)) & \(0.001\) (\(0.004\)) & \(0.003\) (\(0.003\)) & \(0.004\) (\(0.006\)) & \(-0.006\) (\(0.007\)) & \(-0.012\) (\(0.005\)) \\ root\_toxicity True & \(-0.283^{**}\) (\(0.083\)) & \(0.545^{**}\) (\(0.048\)) & \(-0.090\) (\(0.039\)) & \(-1.553^{**}\) (\(0.265\)) & \(0.427^{**}\) (\(0.084\)) & \(0.249^{**}\) (\(0.051\)) \\ num\_followers & \(-0.0\) (\(0.0\)) & \(-0.0\) (\(0.0\)) & \(-0.09^{**}\) (\(0.0\)) & \(0.0\) (\(0.0\)) & \(0.0\) (\(0.0\)) & \(-0.0\) (\(0.0\)) \\ num\_friends & \(-0.0\) (\(0.0\)) & \(-0.0002^{**}\) (\(0.00001\)) & \(-0.0\) (\(0.0\)) & \(0.0001\) (\(0.0001\)) & \(-0.0001^{**}\) (\(0.0003\)) & \(-0.0004^{**}\) (\(0.0001\)) \\ num\_tweets & \(-0.0\) (\(0.0\)) & \(-0.0\) (\(0.0\)) & \(0.0\) (\(0.0\)) & \(0.0\) (\(0.0\)) & \(0.0\) (\(0.0\)) & \(0.0\) (\(0.0\)) \\ listed\_counts & \(-0.0\) (\(0.0003\)) & \(0.0001\) (\(0.0002\)) & \(0.0001^{**}\) (\(0.0001\)) & \(-0.001\) (\(0.001\)) & \(0.0004\) (\(0.002\)) & \(-0.001\) (\(0.0005\)) \\ description\_length & \(0.002^{**}\) (\(0.0005\)) & \(-0.0002\) (\(0.0004\)) & \(-0.006^{**}\) (\(0.0003\)) & \(-0.001\) (\(0.001\)) & \(0.003^{**}\) (\(0.001\)) & \(-0.005^{***}\) (\(0.0004\)) \\ verified True & \(-0.223\) (\(0.113\)) & \(0.402^{**}\) (\(0.081\)) & \(-0.539^{**}\) (\(0.085\)) & \(0.169\) (\(0.518\)) & \(-0.528\) (\(0.314\)) & \(0.010\) (\(0.254\)) \\
account\_age & \(0.002\) (\(0.006\)) & \(-0.006\) (\(0.005\)) & \(-0.034^{**}\) (\(0.003\)) & \(0.001\) (\(0.014\)) & \(-0.004\) (\(0.010\)) & \(-0.045^{**}\) (\(0.006\)) \\ has\_location True & \(0.195^{**}\) (\(0.057\)) & \(0.15\) (\(0.044\)) & \(0.307^{**}\) (\(0.030\)) & \(0.889^{**}\) (\(0.163\)) & \(0.307^{**}\) (\(0.089\)) & \(0.679^{**}\) (\(0.058\)) \\ has\_url True & \(-0.100\) (\(0.048\)) & \(-0.136^{**}\) (\(0.039\)) & \(-0.023\) (\(0.025\)) & \(-0.363^{*}\) (\(0.113\)) & \(-0.156\) (\(0.073\)) & \(0.108\) (\(0.041\)) \\ anxiety\_before & & & & \(0.140^{***}\) (\(0.013\)) & & \\ anger\_before & & & & & & \\ sadness\_before & & & & & & \(0.102^{***}\) (\(0.004\)) \\ arts\_culture & \(-0.571\) (\(0.311\)) & \(-0.413\) (\(0.260\)) & \(-0.465^{*}\) (\(0.164\)) & \(-0.275\) (\(0.720\)) & \(2.112^{***}\) (\(0.326\)) & \(-0.854^{*}\) (\(0.270\)) \\ business\_entrepreneurs & \(-0.681^{**}\) (\(0.249\)) & \(-1.406^{***}\) (\(0.252\)) & \(-1.340^{***}\) (\(0.186\)) & \(2.012^{***}\) (\(0.354\)) & \(-0.305^{*}\) (\(0.687\)) & \(-4.614^{*}\) (\(0.624\)) \\ celebrity\_pop\_culture & \(-0.613^{***}\) (\(0.143\)) & \(-0.405^{**}\) (\(0.121\)) & \(0.909\) (\(0.065\)) & \(0.173\) (\(0.349\)) & \(0.208\) (\(0.211\)) & \(0.168\) (\(0.106\)) \\ diaries\_daily\_life & \(0.517^{***}\) (\(0.122\)) & \(0.668^{***}\) (\(0.097\)) & \(0.554^{***}\) (\(0.065\)) & & & \\ \hline \hline \end{tabular} \end{table} Table 3. Poisson regression model results (M1-M6); coefficients with standard errors in parentheses. ## 8. Conclusion In conclusion, in this study, we formulated six hypotheses aiming to explore the emotional responses of Twitter users to the toxic replies they receive on their posts. Our preliminary findings show that receiving toxicity is likely, in certain cases, to significantly increase users' emotions of anger, anxiety, and sadness, whereas user characteristics and the conversation topic play a significant role in the ways users emotionally react to toxic comments. In summary, the presented findings provide a better understanding of the ways users' emotions change after receiving toxic replies, and they represent an initial step toward building a moderation framework that would help all social media users emotionally cope with toxic content.
2301.08626
Electroweak Sphaleron in a Magnetic field
Using lattice simulations we calculate the rate of baryon number violating processes, the sphaleron rate, in the Standard Model with an external (hyper)magnetic field for temperatures across the electroweak cross-over, focusing on the broken phase. Additionally, we compute the Higgs expectation value and the pseudocritical temperature. The electroweak cross-over shifts to lower temperatures with increasing external magnetic field, bringing the onset of the suppression of the baryon number violation with it. When the hypermagnetic field reaches the magnitude $B_Y \approx 2 T^2$, the cross-over temperature is reduced from $160$ GeV to $145$ GeV. In the broken phase for small magnetic fields the rate behaves quadratically as a function of the magnetic flux. For stronger magnetic fields the rate reaches a linear regime which lasts until the field gets strong enough to restore the electroweak symmetry, where the symmetric phase rate is reached.
Jaakko Annala, Kari Rummukainen
2023-01-20T15:18:12Z
http://arxiv.org/abs/2301.08626v3
# Electroweak Sphaleron in a Magnetic field ###### Abstract Using lattice simulations we calculate the rate of baryon number violating processes, the sphaleron rate, in the Standard Model with an external (hyper)magnetic field for temperatures across the electroweak cross-over, focusing on the broken phase. Additionally, we compute the Higgs expectation value and the pseudocritical temperature. The electroweak cross-over shifts to lower temperatures with increasing external magnetic field, bringing the onset of the suppression of the baryon number violation with it. When the hypermagnetic field reaches the magnitude \(B_{Y}\approx 2T^{2}\), the cross-over temperature is reduced from \(160\,\)GeV to \(145\,\)GeV. In the broken phase for small magnetic fields the rate behaves quadratically as a function of the magnetic flux. For stronger magnetic fields the rate reaches a linear regime, which lasts until the field gets strong enough to restore the electroweak symmetry, where the symmetric-phase rate is reached. ## I Introduction The results from the ATLAS and CMS experiments at the LHC are in complete agreement with the Standard Model of particle physics: a Higgs boson with a mass of \(\approx 125\,\)GeV has been discovered [1; 2], and no evidence of beyond-the-Standard-Model physics has been observed. If the electroweak-scale physics is fully described by the Standard Model, the electroweak symmetry breaking transition in the early Universe was a smooth cross-over from the symmetric phase at \(T>T_{c}\), where the expectation value of the Higgs field was approximately zero, to the broken phase at \(T<T_{c}\) where it is finite, reaching the value \(246/\sqrt{2}\,\)GeV at zero temperature. The infrared problems inherent in high-temperature gauge theories [3; 4] make the physics non-perturbative. The overall nature of the transition was resolved already in the 1990s using lattice simulations [5; 6; 7; 8], which indicated that the transition is first order for Higgs masses \(\lesssim 72\,\)GeV, and a cross-over otherwise. More recently, the precise thermodynamics of the cross-over at the physical Higgs mass was analysed in ref. [9] (see also [10]), and e.g. the cross-over temperature was determined to be \(T_{c}=159.6\pm 1.5\,\)GeV. The chiral anomaly of the electroweak interactions leads to the non-conservation of the baryon and lepton numbers [11]. In _electroweak baryogenesis_ scenarios [12; 13] the baryon number of the Universe arises through processes at a first order electroweak phase transition, and a smooth cross-over makes these ineffective. Thus, in electroweak baryogenesis the origin of the baryon asymmetry must be due to beyond-the-Standard-Model physics (for reviews, see e.g. [14; 15]). The Chern-Simons (CS) number for the weak SU(2) gauge field is defined as \[N_{\rm CS}^{W}(t)\equiv\frac{g^{2}}{32\pi^{2}}\int_{0}^{t}{\rm d}t\int{\rm d }^{3}x\epsilon_{\alpha\beta\gamma\delta}{\rm Tr}\,F^{\alpha\beta}F^{\gamma \delta}\, \tag{1}\] where \(F^{\alpha\beta}\) is the field strength tensor of SU(2), \(g\) is the SU(2) gauge coupling and \(\epsilon_{\alpha\beta\gamma\delta}\) is the totally antisymmetric tensor. Analogous to SU(2), the hypercharge U(1) CS number is given by \[N_{\rm CS}^{Y}(t)\equiv\frac{{g^{\prime}}^{2}}{32\pi^{2}}\int_{0}^{t}{\rm d}t \int{\rm d}^{3}x\epsilon_{\alpha\beta\gamma\delta}B^{\alpha\beta}B^{\gamma \delta}\, \tag{2}\] where \(g^{\prime}\) is the hypercharge gauge coupling.
The chiral anomaly couples the baryon and lepton numbers to the change in the Chern-Simons numbers as \[\Delta B=\Delta L=3\Delta N_{\rm CS}\, \tag{3}\] where \[N_{\rm CS}(t)\equiv N_{\rm CS}^{W}(t)-N_{\rm CS}^{Y}(t). \tag{4}\] For SU(2), the Chern-Simons number is topological, and there exist infinitely many classically equivalent but topologically distinct vacua that cannot be continuously transformed into one another without crossing an energy barrier. The _sphaleron_ is a finite-energy saddle-point solution of the classical field equations separating two topologically distinct vacua [16; 17]. The CS number is an integer for vacuum field configurations and a half integer \(N_{\rm CS}^{W}=\frac{1}{2}+n,\ n\in\mathbb{Z}\) for sphaleron configurations [17]. In contrast, the U(1) field has trivial topology, and without an external (hyper)magnetic field its CS number in vacuum vanishes. However, in an external magnetic field the vacuum is degenerate with respect to the U(1) CS number, which can take any value, in contrast to the SU(2) case where it is an integer [18; 19]. This can lead to baryon and lepton number change on its own [18; 19; 20; 21; 22; 23]. Close to thermal equilibrium the evolution of the CS number is diffusive and is described by a diffusion constant known as the sphaleron rate \[\Gamma=\lim_{V,t\to\infty}\frac{\langle N_{\rm CS}(t)^{2}\rangle}{Vt}. \tag{5}\] In the absence of hypermagnetic fields the contribution of the U(1) can be neglected, due to it having little effect on the form of the phase transition [5; 9; 24] and the sphaleron rate [25; 26; 17; 27]. In this framework the sphaleron rate has been studied extensively with analytical and numerical lattice methods. The general behavior is that in the broken phase the rate is suppressed by the energy of the sphaleron, \(\Gamma_{\rm brk}\sim\alpha_{W}^{4}T^{4}e^{-E_{sph}/T}\)[28; 29], and in the symmetric phase the rate is unsuppressed, behaving as \(\Gamma_{\rm sym}\sim\ln(1/\alpha_{W})\alpha_{W}^{5}T^{4}\)[30; 31; 32; 33; 34; 35], where \(\alpha_{W}=g^{2}/(4\pi)\). In the Standard Model with the physical Higgs mass the sphaleron rate was recently measured using lattice simulations across the cross-over, from the symmetric phase to deep in the broken phase [36]. The temperature where the transitions decouple, i.e. the baryon number freezes, was found to be \(\approx 132\,\)GeV, substantially below the cross-over temperature \(\approx 160\,\)GeV. The presence of a U(1) hypercharge magnetic field can affect both the thermodynamics of the cross-over and the sphaleron rate. Large-scale magnetic fields, which may have a primordial origin, exist in the Universe; see e.g. the reviews [37; 38]. Primordial magnetic fields could have been generated before the electroweak transition, corresponding to hypermagnetic fields before the transition, which turn into U(1)\({}_{\rm em}\) magnetic fields after the transition. The magnitude of such fields is largely unconstrained [39]. However, see [40] for recent stronger constraints at larger scales. When the U(1) is taken into account, the spherical symmetry of the sphaleron reduces to axial symmetry and the sphaleron has a magnetic dipole moment. (It has been shown to be formed from a magnetic monopole-antimonopole pair and a loop of electric current [41].) Thus the minimum energy of the sphaleron can be lowered by an external magnetic field. In a small external field, analytical estimates give a simple dipole interaction \(\Delta E_{sph}=-\vec{B}_{ext}\cdot\vec{\mu}_{sph}\)[42].
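As an illustration of the diffusive definition (5), the toy sketch below estimates the rate from a Chern-Simons number time series; here \(N_{\rm CS}(t)\) is modelled as a random walk with assumed volume and step parameters, whereas in practice it comes from the lattice trajectories themselves.

```python
# Sketch: diffusion-constant estimate Gamma = <N_CS(t)^2> / (V t), cf. (5).
# N_cs is a toy random walk; V, dt, and the step size are assumptions.
import numpy as np

rng = np.random.default_rng(1)
V = 32.0**3                 # spatial volume in lattice units (illustrative)
dt = 1.0                    # time between measurements (illustrative)
steps = 100_000
N_cs = np.cumsum(rng.normal(scale=0.05, size=steps))  # toy diffusive N_CS(t)

t = dt * np.arange(1, steps + 1)
gamma = N_cs**2 / (V * t)   # single-trajectory estimator of the rate
print(gamma[-1])            # late-time value; average over trajectories in practice
```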
In addition, the form of the phase transition is modified by an external magnetic field [43], which has an effect on the sphaleron rate through the transition. At zero temperature the classical sphaleron energy has been computed on the lattice for a wide range of magnetic field values [44]. The situation is complicated by the appearance of the Ambjorn-Olesen phase for large magnetic field values. At a critical field value \(B_{c1}=m_{W}^{2}/e\) the ground state becomes a non-trivial vortex structure, and at a second critical value \(B_{c2}=m_{H}^{2}/e\) the electroweak symmetry is restored [45; 46; 47; 48]. The sphaleron energy is found to decrease with increasing field, until at the second critical field value, where the symmetry is restored, the energy vanishes [44]. At finite temperature around the electroweak scale, previous studies have not been able to find the aforementioned vortex phase [43]. Elaborate methods have been developed to compute the sphaleron rate on the lattice accurately [49; 50; 51; 29]. We employ the dimensionally reduced effective theory of the Standard Model, perform the first dynamical simulations of the sphaleron rate that include the U(1) field, and compute the sphaleron rate for different magnitudes of the external hypermagnetic field across the electroweak cross-over, focusing on the behavior in the broken phase. The structure of the paper is as follows. In section II we describe the effective theory and its lattice formulation that we will use in our simulations. In section III the methods used to measure the sphaleron rate on the lattice are described. In section IV we present the results, and finally in section V we conclude. ## II Effective three-dimensional theory In our simulations we use the dimensionally reduced three-dimensional effective theory of the Standard Model. The method of dimensional reduction is made possible by the fact that at finite temperature the fields are naturally expressed in terms of three-dimensional Matsubara modes, with thermal masses around \(\pi T\). This, together with the fact that the Standard Model couplings are sufficiently small around the electroweak scale, gives rise to a parametric hierarchy of scales in the Euclidean path integral, \(\pi T\), \(gT\), \(g^{2}T\), called the superheavy, heavy and light (soft) scales, respectively. This allows us to integrate out the superheavy and heavy modes by well-defined perturbative methods. All the fermionic modes are integrated out, since their Matsubara frequencies \(\omega^{f}=(2n+1)\pi T\) are all proportional to \(\pi T\). In addition, all bosonic modes with non-zero Matsubara frequency, \(\omega^{b}=2\pi nT\), \(n\neq 0\), are also integrated out. Thus we are left with a purely bosonic 3d effective theory for the soft scale \(g^{2}T\). The soft scales have to be studied with non-perturbative methods in any case, due to the infrared problem in the thermodynamics of Yang-Mills fields [52]. The resulting (super-)renormalizable Lagrangian reads \[L= \frac{1}{4}{\rm Tr}\,F_{ij}F_{ij}+\frac{1}{4}B_{ij}B_{ij}\] \[+(D_{i}\phi)^{\dagger}D_{i}\phi+m_{3}^{2}\phi^{\dagger}\phi+\lambda_{3}(\phi^{\dagger}\phi)^{2}, \tag{6}\] where \[F_{ij} =\partial_{i}A_{j}-\partial_{j}A_{i}-g_{3}[A_{i},A_{j}],\quad A_{i}=\frac{1}{2}\sigma_{a}A_{i}^{a} \tag{7}\] \[B_{ij} =\partial_{i}B_{j}-\partial_{j}B_{i}\] \[D_{i} =\partial_{i}+ig_{3}A_{i}+ig_{3}^{\prime}B_{i}/2.\] Here \(A_{i}\), \(B_{i}\) are the 3d SU(2) and U(1) gauge fields; \(g_{3}\), \(g_{3}^{\prime}\) are the dimensionful SU(2) and U(1) couplings, and \(\phi\) is a complex scalar doublet.
The dimensionful parameters of the 3d theory, \(g_{3},g_{3}^{\prime},\lambda_{3},m_{3}^{2}\), are mapped to the Standard Model parameters \(\alpha_{S},G_{F},m_{H},m_{W},m_{Z},m_{t}\) and the temperature \(T\) via perturbatively computable functions. All the details of the construction of the effective 3d theory and the mapping of the parameters can be found in refs. [53; 54; 55]. The accuracy of the 3d effective theory has been estimated to be \(\sim 1\%\) [53; 54; 55; 56; 57]. We choose the SU(2) coupling \(g_{3}^{2}\) to set the scale and use a set of dimensionless couplings defined by \[x\equiv\frac{\lambda_{3}}{g_{3}^{2}},\qquad y\equiv\frac{m_{3}^{2}}{g_{3}^{4}},\qquad z\equiv\frac{g_{3}^{\prime 2}}{g_{3}^{2}}. \tag{8}\] The three parameters and the scale are plotted in Fig. 1 in the relevant temperature range (a code for computing these parameters can be found at Zenodo [58]). As seen from the plot, only the parameter \(y\) varies significantly with temperature, and it is the natural choice for the temperature variable of the system. In [9] the cross-over temperature (defined as the peak of the susceptibility of the Higgs condensate) was found to be a few GeV below the temperature where \(y=0\). We find \(y=0\) at \(T=162.9\,\)GeV, which is slightly different from [9] due to our using an updated value for the top mass. ### Lattice action The 3d effective theory is purely bosonic and straightforward to put on the lattice. For convenience we write the Higgs field as \[\Phi=\frac{1}{g_{3}}\left(\tilde{\phi}\;\;\phi\right)\equiv\frac{1}{g_{3}}\left(\begin{array}{cc}\phi_{2}^{*}&\phi_{1}\\ -\phi_{1}^{*}&\phi_{2}\end{array}\right), \tag{9}\] which transforms under the SU(2)\(\times\)U(1) gauge transformation as \[\Phi(x)\to G(x)\Phi e^{-i\theta(x)\sigma_{3}}, \tag{10}\] where \(\sigma_{3}\) is the third Pauli matrix and \(G(x)\) is an element of SU(2). Now the lattice action that corresponds to the continuum theory (6) can be written as \[S =\beta_{G}\sum_{x}\sum_{i<j}[1-\tfrac{1}{2}\text{Tr}\,P_{ij}]+\beta_{Y}\sum_{x}\sum_{i<j}\tfrac{1}{2}\alpha_{ij}^{2}\] \[-\beta_{H}\sum_{x}\sum_{i}\tfrac{1}{2}\text{Tr}\,\Phi^{\dagger}(x)U_{i}(x)\Phi(x+i)e^{-i\alpha_{i}(x)\sigma_{3}} \tag{11}\] \[+\beta_{2}\sum_{x}\tfrac{1}{2}\text{Tr}\,\Phi^{\dagger}(x)\Phi(x)+\beta_{4}\sum_{x}\big{[}\tfrac{1}{2}\text{Tr}\,\Phi^{\dagger}(x)\Phi(x)\big{]}^{2},\] where \[P_{ij}(x) =U_{i}(x)U_{j}(x+\hat{i})U_{i}^{\dagger}(x+\hat{j})U_{j}^{\dagger}(x), \tag{12}\] \[\alpha_{ij}(x) =\alpha_{i}(x)+\alpha_{j}(x+\hat{i})-\alpha_{i}(x+\hat{j})-\alpha_{j}(x). \tag{13}\] Here \(U_{i}(x),\alpha_{i}(x)\) are the SU(2) and non-compact U(1) link variables, respectively, and \(P_{ij},\alpha_{ij}\) are their corresponding plaquettes. The lattice parameters \(\beta_{G},\beta_{Y},\beta_{H},\beta_{2},\beta_{4}\) are related to the continuum parameters by perturbatively computable functions computed in [60]. In addition we employ the partial \(O(a)\) improvements on these relations [61; 62]. Notably, \(\beta_{G}=4/(g_{3}^{2}a)+0.6674...\), with \(a\) being the lattice spacing. The rest of the lengthy relations can be found in appendix A.
Now the lattice observable \(\langle\tfrac{1}{2}\text{Tr}\,\Phi^{\dagger}\Phi\rangle\) is related to the \(\overline{\text{MS}}\) renormalized 3d continuum value \(\langle\phi^{\dagger}\phi\rangle\) by [60]: \[\frac{\langle\phi^{\dagger}\phi\rangle}{g_{3}^{2}}= Z_{g}Z_{m}\Big{[}\langle\tfrac{1}{2}\text{Tr}\,\Phi^{\dagger}\Phi\rangle-\frac{\Sigma\beta_{G}}{8\pi}\] \[-\frac{3+\bar{z}}{16\pi^{2}}\Big{(}\log(3\beta_{G}/2)+0.6679...\Big{)}\Big{]}, \tag{14}\] where \(Z_{g},Z_{m}\) and \(\bar{z}\) are defined in appendix A. Finally, the 3d expectation value is related to the physical SM Higgs expectation value \(v\) as \[v^{2}/T^{2}=2\langle\phi^{\dagger}\phi\rangle/T. \tag{15}\] ### Hypermagnetic field on the lattice A flux of magnetic field perpendicular to the \(x_{3}\) axis, \[g_{3}^{\prime}\Phi_{B}=\int\text{d}x_{1}\text{d}x_{2}B_{12}(x), \tag{16}\] can be imposed on the lattice by modifying the periodic boundary conditions of the U(1) link variables \(\alpha_{i}\) [43]. Requiring the action to be periodic quantizes the total flux, \(g_{3}^{\prime}\Phi_{B}/2=2\pi n_{b}\), \(n_{b}\in\mathbb{N}\). Without this restriction there would be boundary defects and the translational invariance would be lost. One possible way to add a flux of magnitude \(g_{3}^{\prime}\Phi_{B}/2=2\pi n_{b}\) is by modifying the boundary conditions as \[\alpha_{1}(n_{1},0,n_{3})-\alpha_{1}(n_{1},L_{2},n_{3})=2\pi n_{b}\delta_{n_{1},1}\, \tag{17}\] for each \(n_{3}\) in a lattice with extent \(L_{1}L_{2}L_{3}\). We define a dimensionless parameter describing the average magnetic flux density as \[b\equiv\frac{g_{3}^{\prime}B_{Y}^{3d}}{g_{3}^{4}}=\frac{4\pi n_{b}}{L_{1}L_{2}}\left(\frac{1}{g_{3}^{2}a}\right)^{2}\, \tag{18}\] where \(B_{Y}^{3d}\equiv\Phi_{B}/(L_{1}L_{2})\) is the magnetic flux density, which is related to the four-dimensional density as \(B_{Y}^{3d}\simeq B_{Y}^{4d}/\sqrt{T}+\mathcal{O}(g^{\prime 2})\). The dimensionless parameter then relates to the \(4d\) flux approximately as \(B_{Y}^{4d}=(g^{\prime}/g^{4})bT^{2}+\mathcal{O}(g^{\prime 3})\). We cannot use the effective 3d theory to simulate arbitrarily large magnetic fields, because the external magnetic field affects higher-dimensional operators and invalidates the effective theory; we therefore require \(b\ll 2\pi^{2}\) [43]. ## III Measuring the Sphaleron rate ### Real time evolution In itself, the effective 3d theory (6) does not describe dynamical phenomena, such as the sphaleron process. As shown by Arnold, Son and Yaffe [30], the classical equations of motion suffer from ultraviolet divergences which prevent taking the continuum limit on the lattice. However, in SU(2) gauge theory the dynamics of the soft modes (\(k\lesssim g^{2}T\)), which are relevant for sphaleron transitions, is fully overdamped, and to leading logarithmic accuracy \(1/\ln(1/g)\) the evolution can be described by a Langevin equation with Gaussian noise \(\xi_{i}^{a}\) (in \(A_{0}=0\) gauge) [31; 32; 49; 63]: \[\partial_{t}A_{i}^{a} =-\frac{1}{\sigma_{el}}\frac{\partial H}{\partial A_{i}^{a}}+\xi_{i}^{a}\, \tag{19}\] \[\langle\xi_{i}^{a}(x,t)\xi_{j}^{b}(y,t^{\prime})\rangle =2\sigma_{el}T\delta_{ij}\delta^{ab}\delta^{3}(x-y)\delta(t-t^{\prime})\, \tag{20}\] where \(H/T=S\), with \(S\) defined in (11). Here \(\sigma_{el}\simeq 0.9239T\) [64] is the non-Abelian color conductivity of SU(2). It can be shown that any diffusive field update algorithm, for example the heat bath update, is equivalent to Langevin evolution [33].
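The flux quantization in Eq. (18) fixes which values of \(b\) are reachable on a given lattice. The short sketch below evaluates \(b=4\pi n_{b}/(L_{1}L_{2})\,(1/(g_{3}^{2}a))^{2}\) for the parameter combinations of table 1 in section IV; the value \(4/(ag_{3}^{2})=5.657\) (\(=4\sqrt{2}\)) is our assumption for the entry quoted there as 5.6, chosen so that all rows reproduce \(b=0.196\).

```python
import math

def b_flux(n_b: int, L1: int, L2: int, beta: float) -> float:
    """Dimensionless flux density b of Eq. (18); beta = 4/(a g3^2)."""
    inv_g2a = beta / 4.0                     # 1/(g3^2 a)
    return 4.0 * math.pi * n_b / (L1 * L2) * inv_g2a**2

# (4/(a g3^2), L, n_b) combinations of table 1 (5.657 is our assumption)
for beta, L, n_b in [(5.657, 16, 2), (8, 16, 1), (10, 20, 1), (12, 24, 1)]:
    print(f"4/(a g3^2)={beta:>6}, L={L}, n_b={n_b}: "
          f"b = {b_flux(n_b, L, L, beta):.3f}")
```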
This equivalence is advantageous because the heat bath update is computationally much more efficient. The Langevin time \(t\) for SU(2) can be related to performing \(n\) full random-order heat bath update sweeps as \(\Delta t=\frac{1}{4}\sigma_{el}a^{2}n\), and the leading corrections are observed to be small [33; 65]. The heat bath approach enables us to take a well-defined continuum limit on the lattice. The Higgs field evolves parametrically much faster than the SU(2) gauge field [65]. Thus, the Higgs field almost equilibrates in the background of the instantaneous SU(2) field. This can be achieved by updating the Higgs field much more often than the SU(2) field. We use a mixture of overrelaxation and heat bath updates, see [66] for details of the algorithms used. We increased the number of Higgs updates until the lattice observables of interest stayed constant, resulting in around 50 Higgs updates per gauge field update (similarly as in [67]). Finally, in the broken phase the U(1) field also evolves faster than the SU(2) gauge field for the wavelengths relevant for sphaleron transitions. The size of the sphalerons, \(\sim(g^{2}T)^{-1}\), is given by the SU(2) dynamics. Because the U(1) gauge coupling \(g^{\prime 2}\) is much smaller than the SU(2) coupling \(g^{2}\), the U(1) modes with wavelength \(\lambda\sim(g^{2}T)^{-1}\) behave as weakly coupled, non-damped modes evolving with time scale \(\tau\approx\lambda\). This is in contrast to the overdamped SU(2) evolution, with time scale \(\propto\lambda^{2}\). Thus, on the lattice the sphaleron rate should be independent of the U(1) update rate, provided it is frequent enough in comparison with the SU(2) updates. Indeed, we have tested this behavior with a few simulations with different heat bath update frequencies for the U(1) field and found no significant effect on the sphaleron rate, as seen from Fig. 2.

Figure 2: Comparing the obtained sphaleron rate with different U(1) update frequencies. On the y-axis, the ratio of U(1) updates per SU(2) update. The dependence is observed to be negligible within our statistical accuracy.

In our final analysis we use an equal update frequency for the SU(2) and U(1) fields. We note that in the broken phase the magnetic field remains unscreened, and very long-range magnetic fields evolve very slowly in comparison with the other fields (magnetohydrodynamics). These modes have a very small effect on the sphalerons, and indeed if there is no external (hyper)magnetic field the contribution from the U(1) sector is usually ignored [29; 36; 51]. In the symmetric phase the SU(2) and U(1) Chern-Simons numbers are effectively decoupled and evolve independently. The U(1) Chern-Simons number is not topological, and there is no characteristic length scale for its evolution when an external magnetic field is present. It is not clear how to accurately capture the full quantum dynamics in numerical lattice simulations in this case. However, this is not a problem for the analysis of the sphaleron rate in the broken phase, and, as will be discussed in section IV, the effect of the U(1) remains subleading in comparison with the SU(2) rate in the symmetric phase. ### Calibrated cooling Topology is not well defined on a discrete lattice, and a naive discretisation of the CS number leads to ultraviolet noise which ruins the sphaleron rate measurement. However, for sufficiently fine lattice spacing the sphaleron is large in lattice units, with a length scale of order \(1/(g^{2}T)\) [34].
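The bookkeeping relating update sweeps to Langevin time, \(\Delta t=\frac{1}{4}\sigma_{el}a^{2}n\), is simple enough to spell out. In the sketch below the lattice spacing in units of the temperature is obtained from \(g_{3}^{2}a=1/2\) using the leading-order matching \(g_{3}^{2}\approx g^{2}T\); this matching, and the sweep count, are illustrative rather than the values used in our analysis.

```python
sigma_el = 0.9239      # color conductivity in units of T [64]
g_sq = 0.4259          # g^2 = 4*pi*alpha_W with alpha_W ~ 0.03389
aT = 0.5 / g_sq        # from g3^2 a = 1/2, assuming g3^2 ~ g^2 T (leading order)

def langevin_time(n_sweeps: int) -> float:
    """Langevin time (in units of 1/T) for n full heat bath sweeps."""
    return 0.25 * sigma_el * aT**2 * n_sweeps

print(f"a T ~ {aT:.3f}")
print(f"2e6 sweeps correspond to t T ~ {langevin_time(2_000_000):.2e}")
```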
This large size makes it possible to use methods which filter out the ultraviolet noise and allow us to accurately integrate the CS number. One of these methods is calibrated cooling [68; 29], which we employ here with the modification that we use _gradient flow_ for all fields and integrate both the SU(2) and the U(1) CS numbers. Crucially, in the broken phase we track the difference of the CS numbers (4). Periodically we cool all the way to the vacuum, check that the vacuum-to-vacuum integration result is close to an integer, and remove any residuals in order to avoid the accumulation of errors. Parametrising the SU(2) links as \(U_{i}(x)=\exp[i\theta_{i}^{a}(x)\sigma^{a}/2]\), the gradient flow can be written as \[\frac{\partial U_{i}(x)}{\partial\tau} =-i\frac{\sigma^{a}}{2}U_{i}(x)\frac{\partial S}{\partial\theta_{i}^{a}(x)}\, \tag{21}\] \[\frac{\partial\alpha_{i}(x)}{\partial\tau} =-\frac{\partial S}{\partial\alpha_{i}(x)}\, \tag{22}\] \[\frac{\partial\Phi(x)}{\partial\tau} =-\frac{\partial S}{\partial\Phi(x)}\, \tag{23}\] where \(\tau\) is the flow time. Evolving the fields with the gradient flow equations removes ultraviolet fluctuations, smoothing the fields with a smoothing radius related to the flow time by \(r=\sqrt{6\tau}a\) in 3 dimensions [69]. With these methods we can integrate the CS number accurately from a real-time trajectory generated by heat bath updates. In the symmetric phase the SU(2) CS number diffuses rapidly between vacua. In the broken phase the SU(2) and U(1) gauge fields mix, and the diffusion of the difference of the Chern-Simons numbers, \(N_{\text{CS}}=N_{\text{CS}}^{W}-N_{\text{CS}}^{Y}\), slows down dramatically, jumping between integer values. This can be seen in Fig. 3, where the CS number is measured somewhat below the cross-over temperature. Interestingly, the SU(2) and U(1) Chern-Simons numbers are not suppressed individually, only their difference is. This is precisely the quantity which couples to the baryon and lepton number. Finally, from the real-time trajectory we can compute the sphaleron rate. We use the cosine transform method described in [70]. ### Multicanonical method Near the cross-over temperature we measure the sphaleron rate using the real-time simulation methods discussed above, but deep in the broken phase the rate gets strongly suppressed and normal methods become impractical. In any reasonable amount of simulation time only a few transitions take place, if any. Thus in the broken phase we have to use special multicanonical methods to compute the rate. Details of the method can be found in [65; 29; 51]. The computation consists of two parts. The multicanonical method is used to measure the probabilistic suppression of the sphaleron at the height of the potential barrier, i.e. the \(N_{\text{CS}}\) distribution between two integer vacua, \(P(N_{\text{CS}})\). In a non-zero magnetic field we need to use the \(N_{\text{CS}}\) given by (4), so that in a vacuum it is an integer. Dynamical simulations are performed to compute the rate of tunneling over the top of the barrier. The tunneling rate is computed by measuring \(|\Delta N_{\text{CS}}/\Delta t|\) from dynamical simulations when the trajectory crosses the sphaleron barrier \(N_{\text{CS}}=\frac{1}{2}\).
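A stripped-down illustration of the gradient flow (22) for the non-compact U(1) field alone: starting from random link angles, a forward-Euler integration of \(\partial\alpha_{i}/\partial\tau=-\partial S/\partial\alpha_{i}\) damps the ultraviolet fluctuations, with smoothing radius \(r=\sqrt{6\tau}a\). For simplicity \(\beta_{Y}\) is absorbed into the flow-time normalization, the flux twist of Eq. (17) is not included, and the step size and lattice size are arbitrary choices.

```python
import numpy as np

L, eps, n_steps = 16, 0.05, 100
rng = np.random.default_rng(0)
alpha = rng.normal(0.0, 0.3, size=(3, L, L, L))   # non-compact U(1) links

def plaquette(a, i, j):
    # alpha_ij(x) = alpha_i(x) + alpha_j(x+i) - alpha_i(x+j) - alpha_j(x)
    return a[i] + np.roll(a[j], -1, axis=i) - np.roll(a[i], -1, axis=j) - a[j]

def action(a):
    # S = sum_{x, i<j} (1/2) alpha_ij(x)^2  (beta_Y absorbed into flow time)
    return 0.5 * sum((plaquette(a, i, j) ** 2).sum()
                     for i in range(3) for j in range(i + 1, 3))

def flow_force(a):
    # -dS/dalpha_i(x) = -sum_{j != i} [alpha_ij(x) - alpha_ij(x - j)]
    f = np.zeros_like(a)
    for i in range(3):
        for j in range(3):
            if j != i:
                p = plaquette(a, i, j)
                f[i] -= p - np.roll(p, 1, axis=j)
    return f

print(f"action before flow: {action(alpha):.1f}")
for _ in range(n_steps):
    alpha += eps * flow_force(alpha)              # Euler step of Eq. (22)
tau = eps * n_steps
print(f"action after flow : {action(alpha):.1f}")
print(f"flow time tau = {tau:.1f}, smoothing radius r/a = {(6*tau)**0.5:.1f}")
```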
This measurement needs to be compensated by a dynamical prefactor \(\text{d}=\sum_{\text{traj}}\delta_{\text{tunnel}}/(N_{\text{cross}}N_{\text{traj}})\), where \(\delta_{\text{tunnel}}=0\) if the trajectory does not get to a new vacuum and \(\delta_{\text{tunnel}}=1\) if it does, and \(N_{\text{cross}}\) is the number of times \(N_{\text{CS}}\) crosses the barrier. This is needed because the dissipative update is noisy, which can result in multiple crossings of the barrier in a single trajectory. With these ingredients the sphaleron rate is given by \[\Gamma=\frac{P(|N_{\text{CS}}-\frac{1}{2}|<\frac{\epsilon}{2})}{\epsilon V}\left\langle\left|\frac{\Delta N_{\text{CS}}}{\Delta t}\right|\right\rangle\text{d}\, \tag{24}\] where \(\epsilon\ll 1\) (we used \(\epsilon=0.04\)).

Figure 3: Real-time CS trajectory in the broken phase at \(T=153\,\text{GeV}\) in an external magnetic field \(b=0.196\): SU(2) \(N_{\text{CS}}\) trajectory (yellow), U(1) \(N_{\text{CS}}\) trajectory (red) and their difference (blue). It is clear that the difference becomes frozen at low temperatures.

## IV Results We investigated the lattice spacing dependence of the sphaleron rate with an external hypermagnetic field for a few temperatures in the symmetric and broken phases. The parameters were chosen such that the external magnetic field had the same value, see table 1. In the symmetric phase and close to the cross-over in the broken phase, the lattice spacing dependence of the sphaleron rate is small, see the topmost plot in Fig. 4. This is similar to what was observed in previous studies without the U(1) [51]. In the broken phase, for large lattice size and small lattice spacing, even the multicanonical method becomes very inefficient, making the measurement of the Chern-Simons number evolution impractical on large lattices. This prevents us from obtaining a sufficient range of lattice spacings for a reliable continuum limit deep in the broken phase. Nevertheless, our limited results show only a mild lattice spacing dependence, as shown in Fig. 4. In the following, most of our results have been obtained at a single lattice spacing \(g_{3}^{2}a=1/2\). A similar inefficiency was noted in previous works where the U(1) field was omitted [51]. In our case the problem appears to be worse, presumably due to the additional noise of the combined SU(2) and U(1) Chern-Simons number observable. We investigated the finite volume effects on the sphaleron rate in an external magnetic field with \(b=0.196\) for a few different volumes \((La)^{3}\) with \(La=8/g_{3}^{2},13.9/g_{3}^{2},16/g_{3}^{2}\). The chosen temperatures were in the symmetric phase and in the broken phase near the cross-over, so that we could still use non-multicanonical simulations. Similarly to previous studies, we do not observe systematic volume dependence above \(La=8/g_{3}^{2}\). In pure SU(2) theory it was found that \(La=8/g_{3}^{2}\) is close to the smallest volume where the finite size effects are negligible [34]. Due to the small observed lattice spacing dependence and no significant finite size effects at \(La=8/g_{3}^{2}\), we present the results for the lattice parameters \(g_{3}^{2}a=1/2\), \(V=16^{3}a^{3}\) when deep in the broken phase, where we need to use the multicanonical simulations. With these parameters the lattice is still small enough for us to get reliable measurements of the CS number. This enables us to get good statistics with reasonable computational effort.
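The sketch below shows how the ingredients of Eq. (24) combine into the rate. The inputs (barrier probability, crossing speed, tunneling bookkeeping) would come from the multicanonical and dynamical runs; here they are placeholder numbers, and the per-trajectory average used for the prefactor \(\text{d}\) is one reading of the formula above.

```python
import numpy as np

def dynamical_prefactor(tunnels, crossings):
    # per-trajectory delta_tunnel / N_cross, averaged over trajectories
    # (one reading of d = sum_traj delta_tunnel / (N_cross N_traj))
    tunnels = np.asarray(tunnels, dtype=float)
    crossings = np.asarray(crossings, dtype=float)
    return np.mean(tunnels / crossings)

def sphaleron_rate(p_top, mean_speed, d, volume, eps=0.04):
    """Gamma = P(|N_CS - 1/2| < eps/2)/(eps V) * <|dN_CS/dt|> * d, Eq. (24)."""
    return p_top / (eps * volume) * mean_speed * d

# placeholder inputs, standing in for multicanonical/dynamical measurements
p_top = 2.1e-12   # reweighted probability mass at the top of the barrier
speed = 0.18      # <|Delta N_CS / Delta t|> at N_CS = 1/2
d = dynamical_prefactor(tunnels=[1, 0, 1, 1, 0], crossings=[1, 3, 1, 2, 1])
print(f"d = {d:.2f}, Gamma = {sphaleron_rate(p_top, speed, d, 16.0**3):.2e}")
```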
Due to the magnetic field flux being quantized as in (18), the flux quanta are quite large for small volumes, and we can only obtain a few different values of the magnetic flux for the multicanonical simulations. Thus, in addition, we present results for the lattice parameters \(g_{3}^{2}a=1/2\), \(V=32^{3}a^{3}\), using non-multicanonical simulations as deep as possible into the broken phase. For all non-multicanonical runs we simulated \(2\times 10^{6}\) time steps, and for all multicanonical simulations we generated \(12\times 10^{3}\) trajectories and \(\sim 3\times 10^{6}\) realizations to estimate the CS number distribution \(P(N_{CS})\). ### Zero magnetic field Let us first present the results for zero external magnetic field, since we find slightly different results from previous works. We measure the sphaleron rate from simulations with and without the dynamical hypercharge U(1) field. We do not observe any systematic difference between the results, see Fig. 5. This justifies the omission of the U(1) field when there is no external magnetic field, as done e.g. in [36]. Below we discuss results with the U(1) field included. The Higgs field expectation value is observed to be very close to the perturbative result [10; 66] even without taking a continuum limit, see the \(b=0\) points in Fig. 6. For the Higgs expectation value it is straightforward to check the continuum limit, because the Chern-Simons number measurement can be omitted. We measured the Higgs expectation value at lattice spacings \(g_{3}^{2}a=1/2\), \(1/3\) and \(1/4\) at a few temperature values and found the continuum limit to match the perturbative result.

\begin{table} \begin{tabular}{|c|c|c|c|} \hline \(4/(ag_{3}^{2})\) & \(V/a^{3}\) & \(n_{b}\) & \(b\) \\ \hline 5.6 & \(16^{3}\) & 2 & 0.196 \\ 8 & \(16^{3}\) & 1 & 0.196 \\ 10 & \(20^{3}\) & 1 & 0.196 \\ 12 & \(24^{3}\) & 1 & 0.196 \\ \hline \end{tabular} \end{table} Table 1: Lattice spacings, volumes, magnetic flux and magnitude of the external magnetic field used when investigating the lattice spacing dependence.

Figure 4: Sphaleron rate with a few different lattice spacings, \(4/(ag_{3}^{2})=5.6,8,10,12\), in an external magnetic field \(b=0.196\), see table 1. Topmost plot: at \(T=157\,\)GeV, close to the cross-over, using a normal simulation. Bottom plot: at \(T=150\,\)GeV, deep in the broken phase, with a multicanonical simulation. Both show linear and constant fits.

In the symmetric phase the measured sphaleron rate is approximately constant, with the value \[\Gamma_{\rm sym.}/T^{4}=(6.23\pm 0.05)\times 10^{-7}\approx(13.9\pm 0.1)\alpha_{W}^{5}\, \tag{25}\] with \(\alpha_{W}\approx 0.03389\) at the electroweak scale.1 In the broken phase the rate is well fitted by a pure exponential, and we obtain \[\ln(\Gamma_{\rm brk.}/T^{4})=(0.86\pm 0.01)T/{\rm GeV}-(153.1\pm 0.9). \tag{26}\] Footnote 1: The numerical factor in front of \(\alpha_{W}^{5}\) includes contributions from logarithmic factors \(\ln\alpha_{W}\) [31]. This form is presented for easier comparisons with earlier work. Using the linear fit we can estimate when the sphaleron processes freeze out. This happens when the Hubble rate \(H(T)\) becomes comparable to the sphaleron rate, \(\Gamma(T_{*})/T_{*}^{3}=\alpha(v/T_{*})H(T_{*})\). The function \(\alpha(v/T_{*})\) (where \(v\) is the Higgs expectation value) is well approximated by a constant, \(\alpha=0.1015\), in the relevant temperature range.
Furthermore, \(H(T)^{2}=g_{*}\pi^{2}T^{4}/(90M_{\rm Pl}^{2})\), where the effective number of degrees of freedom is well approximated by \(g_{*}=106.75\) over the electroweak scale. The Hubble rate is seen in Fig. 5 as the green line. With these we find the freeze-out temperature \(T_{*}=133.5\pm 0.97\,{\rm GeV}\). ### Non-zero magnetic field Let us now look at the results for a non-zero external magnetic field. We ran simulations with \(g_{3}^{2}a=1/2\) and volume \(V=16^{3}a^{3}\) with magnetic flux quanta \(n_{b}=0,1,2,3,4\), yielding \(b\) in the range from \(b=0\) to \(0.785\) with a step of \(\Delta b=0.196\). To get a smaller step size for \(b\), we additionally performed simulations with \(g_{3}^{2}a=1/2\) and volume \(V=32^{3}a^{3}\) (without multicanonical simulations, due to the problems discussed above) with magnetic flux quanta \(n_{b}=0,1,2,3,4\) and \(6,8,10,12,14,16,18,20,22,24\), yielding the range \(b=0\) to \(1.178\) with \(\Delta b=0.049\). The form of the electroweak cross-over is changed by the external magnetic field. This can be clearly seen by plotting the Higgs expectation value against the temperature for different magnitudes of the magnetic field, see Fig. 6. The cross-over temperature can be seen to shift to smaller temperatures. To get a better picture of the effect of the magnetic field on the cross-over, let us look at the susceptibility of the Higgs field. We define the cross-over or pseudocritical temperature \(T_{c}\) as the location of the maximum of the dimensionless susceptibility \[\chi_{\phi^{\dagger}\phi}(T)=VT\left\langle\left[(\phi^{\dagger}\phi)_{V}-\left\langle\phi^{\dagger}\phi\right\rangle\right]^{2}\right\rangle \tag{27}\] where \((\phi^{\dagger}\phi)_{V}=1/V\int_{V}\phi^{\dagger}\phi\) is the volume average. We use the interpolating function defined in [9] to estimate the location of the peak. The susceptibility for different magnitudes of the magnetic field is shown in Fig. 7. From this we clearly see that the pseudocritical temperature is shifted to smaller temperatures and that the cross-over region gets wider, in the sense that the peak of the susceptibility broadens. The pseudocritical temperature against the magnitude of the magnetic field can be seen in Fig. 8. With small field magnitudes it behaves quadratically, after which it quickly reaches a linear regime. At \(b=1\) (\(B_{Y}^{4d}\approx 2T^{2}\)) the cross-over temperature has decreased from \(160\,\mathrm{GeV}\) down to \(145\,\mathrm{GeV}\).

Figure 5: Sphaleron rate in the absence of the external magnetic field in theories with and without the U(1) field. Data points marked with squares are obtained using the multicanonical method. Fits are performed using the data that include the U(1). The perturbative line is from [71], with their non-perturbative correction removed.

Figure 6: Higgs expectation value for different values of the magnetic field, with \(V=32^{3}a^{3}\). The lines are added for clarity; they are not fits. The expectation value becomes negative in the symmetric phase due to additive renormalization factors, see (14). The grey contours are the zero magnetic field symmetric [10] and broken phase [66] perturbative results.

Let us finally look at how the sphaleron rate is affected by a non-zero external magnetic field. We measure the SU(2) and U(1) diffusion rates separately, and in Fig. 9 we show an example of the behaviour of the rates through the cross-over at \(b=0.884\). The U(1) diffusion rate \(\Gamma_{Y}/T^{4}\) is seen to stay constant through the transition.
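The freeze-out estimate quoted above can be reproduced from the broken-phase fit (26) and the Hubble rate alone. In the sketch below we take \(M_{\rm Pl}\) to be the reduced Planck mass, \(2.435\times 10^{18}\,\)GeV, which is our assumption; with it the root lands close to the quoted \(T_{*}\approx 133.5\,\)GeV.

```python
import math
from scipy.optimize import brentq

M_PL, G_STAR, ALPHA = 2.435e18, 106.75, 0.1015   # GeV, d.o.f., alpha(v/T)

def ln_rate(T):
    # ln(Gamma/T^4) from the broken-phase fit, Eq. (26)
    return 0.86 * T - 153.1

def ln_freeze_condition(T):
    # ln(alpha * H(T)/T), i.e. Gamma/T^3 = alpha H rewritten as Gamma/T^4
    H_over_T = math.sqrt(G_STAR * math.pi**2 / 90.0) * T / M_PL
    return math.log(ALPHA * H_over_T)

T_star = brentq(lambda T: ln_rate(T) - ln_freeze_condition(T), 120.0, 150.0)
print(f"T_* = {T_star:.1f} GeV")                  # ~133 GeV
```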
In the high-temperature symmetric phase \(N_{\mathrm{CS}}^{W}\) and \(N_{\mathrm{CS}}^{Y}\) evolve independently, and the evolution of \(N_{\mathrm{CS}}^{W}-N_{\mathrm{CS}}^{Y}\) (with rate \(\Gamma/T^{4}\)) is slightly faster than the evolution of each of the components alone. In the broken phase the Chern-Simons numbers are strongly correlated, and the pure SU(2) rate \(\Gamma_{W}\) is no longer strongly suppressed but reaches a plateau at small temperatures. Only the physically relevant combination \(N_{\mathrm{CS}}^{W}-N_{\mathrm{CS}}^{Y}\) becomes frozen. The dashed vertical line at \(T=145\,\mathrm{GeV}\) in Fig. 9 is the point where the measured rate matches the one from the pure SU(2) case. This is seen to happen systematically around \(2\,\mathrm{GeV}\) below the pseudocritical temperature, regardless of the magnitude of the magnetic flux \(b\). The full diffusion rate (4) is plotted against the temperature for different values of the magnetic field in Fig. 10. (For clarity, we do not plot all values of \(b\) that were simulated.) In the symmetric phase the SU(2) sphaleron rate is unaffected by the presence of the external magnetic field, and the data are compatible with the \(b=0\) case of Eq. (25). However, the U(1) rate increases with increasing \(b\), and so does the physically relevant \(\Gamma\). This is discussed in more detail below. In the broken phase, for small field values the slope at which the rate drops is compatible with the slope obtained from the \(b=0\) fit. For larger magnetic field values we do not have enough data to verify this with confidence, but as shown in Fig. 10 the rate continues to drop at approximately the same slope as at \(b=0\), only shifted to lower temperatures. The shift in temperature is roughly according to the shift in the pseudocritical temperature, as seen in Fig. 8. For the largest magnetic field we simulate, \(b=1.178\) (\(B_{Y}^{4d}/T^{2}\approx 2.3\)), the suppression of the sphaleron rate is shifted approximately \(22\,\mathrm{GeV}\) lower in temperature compared with the \(b=0\) case. The change in the sphaleron rate when the external field is increased can be understood to arise from two effects: the Higgs field expectation value decreases, and the sphaleron interacts with the field through its magnetic dipole moment. Both of these effects reduce the sphaleron barrier.

Figure 7: The dimensionless susceptibility for different magnitudes of the magnetic field, with \(V=32^{3}a^{3}\). The lines are from fitting the interpolating function (defined in [9]) to the data.

Figure 8: Pseudocritical temperature against the magnitude of the magnetic field.

Figure 9: Example of diffusion rates of the pure SU(2) CS number, the pure U(1) CS number and their difference (4) for magnetic field magnitude \(b=0.884\). The black dotted vertical line is the pseudocritical temperature.

To isolate the effects arising from the sphaleron dipole moment from the effects of the changing Higgs expectation value, we plot the sphaleron rate against the Higgs expectation value in Fig. 11. It can be observed that at larger magnetic field values the Higgs expectation value can become quite large before the onset of the suppression of the rate. Finally, in Fig. 12 we show how the sphaleron rate depends on the magnetic field at constant Higgs expectation value. This enables the comparison with the semi-analytical results in ref. [42], where the change in the Higgs expectation value was neglected.
The rates at constant Higgs expectation value are obtained by interpolating the data shown in Fig. 11. For small field values the rate behaves quadratically, until around \(b\simeq 0.2\) it reaches a linear regime. The linear regime ends when the field gets strong enough to start restoring the electroweak symmetry, where the rate eventually reaches the \(b=0\) SU(2) symmetric phase value (25), see the left plot in Fig. 12. Qualitatively similar behavior is seen when plotting the sphaleron rate at constant temperature, see Fig. 13. At small magnetic fields the change in \(\ln\Gamma\) is proportional to \(b^{2}\), turning into approximately linear behaviour at intermediate \(b\), until finally reaching the symmetric phase value where the rate flattens to a constant. Comparing Figs. 6 and 11, we can observe that the "restoration" of the rate happens before the Higgs field is fully restored. To compare the simulation results to a semi-analytical estimate, we performed the analysis presented in ref. [42] (where a non-physical Higgs mass was used), but now with Standard Model parameters. Details of the computation can be found in appendix B. From the analytical computation we get the sphaleron energy as a function of the external magnetic field. Assuming that for small fields the change in energy is due to a simple dipole interaction, \(\Delta E=-\vec{\mu}_{\rm sph}\cdot\vec{B}_{c}^{4d}\), and that the change to the rate, \(\Delta\ln\Gamma/T^{4}\equiv\ln\Gamma(b)/T^{4}-\ln\Gamma(b=0)/T^{4}\), is purely due to the change in energy, the change in the rate is approximately \[\Delta\ln\Gamma/T^{4}\sim\ln\left[\frac{\sinh(\Delta E/T)}{\Delta E/T}\right]. \tag{28}\] In ref. [42] it was assumed that the change in \(\ln\Gamma\) is directly proportional to the change of the minimum energy of the sphaleron, i.e. \(\Delta\ln\Gamma/T^{4}\propto\Delta E/T\). Our result in Eq. (28) takes into account the random orientations of the magnetic dipoles at finite \(T\). At small fields Eq. (28) gives \(\Delta\ln\Gamma/T^{4}\sim(\Delta E/T)^{2}/6\), turning into \(\sim\) linear behaviour at larger field values. For small external field values Eq. (28) is close to what we obtain from our simulations; however, the simple dipole approximation quickly becomes invalid, see Fig. 12. Our results above indicate that at small magnetic fields the dominant effect on the sphaleron rate arises from the magnetic dipole moment of the sphaleron, and the change in the Higgs expectation value is subleading. Finally, let us look at the behavior of the sphaleron rate in the symmetric phase. Here the SU(2) rate does not show any systematic dependence on the magnetic field, see the right plot in Fig. 14. Only the U(1) rate is affected by the magnetic field in the symmetric phase.

Figure 10: Sphaleron rate against temperature. Circle data points are from \(V=32^{3}a^{3}\), diamond data points from \(V=16^{3}a^{3}\) and square data points from multicanonical simulations. Grey dotted lines have the same slope as the fit of \(b=0\) (black) and are shifted according to the shift of the pseudocritical temperature seen in Fig. 8. The black horizontal line is the \(b=0\) symmetric rate fit.

Figure 11: Sphaleron rate against the Higgs expectation value. The Higgs expectation value can get quite large before the rate gets suppressed.
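The orientation average behind Eq. (28) is easy to verify numerically: the solid-angle average of \(\exp(-x\cos\theta)\) equals \(\sinh(x)/x\), whose logarithm behaves as \(x^{2}/6\) for small \(x\) and approximately linearly for large \(x\). A short check:

```python
import numpy as np
from scipy.integrate import quad

def delta_ln_rate(x):
    # ln of the solid-angle average of exp(-x cos(theta)), x = Delta E / T
    avg, _ = quad(lambda c: 0.5 * np.exp(-x * c), -1.0, 1.0)
    return np.log(avg)

for x in [0.1, 0.5, 1.0, 3.0, 10.0]:
    exact = np.log(np.sinh(x) / x)
    print(f"x = {x:5.1f}: integral {delta_ln_rate(x):8.4f}, "
          f"sinh form {exact:8.4f}, small-x x^2/6 {x*x/6:8.4f}")
```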
Despite the ambiguities associated with the U(1) field evolution in the symmetric phase, as discussed in section III.1, we investigate the U(1) rate in our simulations, and its dependence on the magnetic field fits very well the expected \(B_{4d}^{2}\) behavior [20], see Fig. 14. As seen in Fig. 9, the U(1) rate \(\Gamma_{Y}/T^{4}\) is approximately constant over temperature, and we obtain the fit \(\Gamma_{Y}/T^{4}=(0.5\pm 0.01)\times 10^{-3}g^{\prime 6}B_{4d}^{2}\) (with \(g^{\prime 2}\simeq 0.12237\)). Compared to the results obtained from classical simulations of U(1)-Higgs theory (scalar QED) performed in [20; 21], our rate is \(\sim 4\) times slower; compared with magnetohydrodynamics, our rate is \(\sim 3\) times faster. Given the ambiguities in the update algorithm, the qualitative agreement between the results is good. ## V Conclusion Using lattice simulations of an effective 3d theory of the Standard Model, we have computed the baryon number violation (sphaleron) rate over the electroweak cross-over, deep into the broken phase, with an external magnetic field. Both the baryon number violation rate and the form of the electroweak cross-over are changed by an external magnetic field. We have argued that the fully dissipative Langevin-type update is accurate to leading logarithmic order in \(g_{W}^{2}\) in the broken phase. For zero external field we computed the rate with and without the U(1) fields included and found no difference between the results. The zero external field results slightly differ from previous results [36]; see (25) and (26) for our results. The difference is most likely due to our using an updated value for the top mass (which affects the mapping between the physical and the effective 3d theory parameters) and the fact that the previous computations did not fully implement the partial \(O(a)\) improvement. Because neither of the computations has been able to obtain a reliable continuum limit, the lack of improvement has an effect on the final results. The baryon number violation rate is affected by an external magnetic field through multiple factors. The sphaleron has a dipole moment, so its energy can be lowered by the field; in addition, with an external field the U(1) also contributes to the baryon violating rate. With an external field the combination of the SU(2) and U(1) CS numbers couples to the baryon violating current, and in the broken phase it is precisely this combination that gets suppressed.

Figure 12: (Left) The rate at constant Higgs expectation value. The vertical dotted line corresponds to the magnetic field value at which the expectation value \(\langle\phi^{\dagger}\phi\rangle/T=0.21\) is obtained at the pseudocritical temperature (i.e. to the right of the line we are getting into the symmetric phase). The horizontal grey line is the symmetric \(b=0\) sphaleron rate (25). (Right) Comparison of the difference of the rate \(\Delta\ln\Gamma/T^{4}\) from simulations (black) to the analytical estimate (orange). We also plot the energy difference of the sphaleron configuration (blue) obtained from the analytical computation, see appendix B.

Figure 13: How the rate changes with the magnitude of the magnetic field at constant temperature, \(T=144\,\mathrm{GeV}\) on the left and \(T=155\,\mathrm{GeV}\) on the right. The vertical dotted line is the value of \(b\) at which the constant temperature of the plot is the pseudocritical temperature. The horizontal grey line is the symmetric \(b=0\) sphaleron rate (25).
To get a picture of how the electroweak transition is affected by an external magnetic field, we computed the Higgs expectation value and its susceptibility. As the magnetic field is increased, the cross-over shifts to lower temperatures and the transition region broadens. This shifts the onset of the suppression of the sphaleron rate to lower temperatures. In the broken phase the rate increases with the external magnetic field. For small fields it increases quadratically before switching over to a linear regime. The linear regime stops once the field becomes strong enough to restore the electroweak symmetry, where the rate reaches the symmetric phase value. For small external fields we performed a semi-analytical computation of the sphaleron energy in an external field (following [42]) and used a simple dipole approximation to estimate the change in the sphaleron rate. For small field values the semi-analytical result and our simulations are in relatively good agreement, see Fig. 12. This shows that for small fields the sphaleron dipole moment has the biggest effect on the rate. However, for larger fields the simple dipole approximation quickly becomes invalid and non-linear effects become important. In the symmetric phase the SU(2) and U(1) Chern-Simons numbers evolve independently. There are ambiguities in how to perform real-time lattice simulations of the U(1) field evolution in the symmetric phase. However, from our simulations we find no significant effect of the magnetic field on the pure SU(2) rate, which behaves as \(\propto T^{4}\) and is compatible with the zero external field value (25). The U(1) part of the rate is found to increase with the magnetic field, with the expected behavior \(\propto B_{4d}^{2}\). Full results of our simulations are available as tables at Zenodo [58]. ###### Acknowledgements. The authors acknowledge the support from the Academy of Finland grants 345070 and 319066. Part of the numerical work has been performed using the resources at the Finnish IT Center for Science, CSC. ## Appendix A Continuum to lattice parameters with improvements We use the partial \(O(a)\) improvements computed in [61; 62] (these are partial, since there is an additive correction to the parameter \(y\) that has not been computed to date).

Figure 14: (Left) Diffusion rate of the U(1) CS number with different external magnetic fields (blue) compared with the results from classical simulations and the expected rate from magnetohydrodynamics [20; 21]. The shown rate is computed at \(T=168\,\)GeV; however, we do not find any systematic temperature dependence within the temperatures simulated. (Right) Pure SU(2) rate (grey) and the full rate (black) in the symmetric phase. The horizontal line is the \(b=0\) SU(2) symmetric rate fit (25) and the blue dashed line is the sum of the latter and the fit from the left plot. The SU(2) rate \(\Gamma_{W}/T^{4}\) stays approximately constant with increased magnetic field.
We choose the desired values of \(x,y,z\) and \(g_{3}^{2}a\) that we want to simulate and then compute the relevant counterterms, given by \[Z_{g}^{-1} =1+\frac{g_{3}^{2}a}{4\pi}\left(\frac{\pi}{3}+6\xi+\frac{\Sigma}{24}\right), \tag{A1}\] \[Z_{b} =1+z\frac{g_{3}^{2}a}{4\pi}\left(\frac{\pi}{3}-\frac{\xi}{12}+\frac{\Sigma}{24}\right), \tag{A2}\] \[Z_{m}^{-1} =1+\frac{g_{3}^{2}a}{4\pi}\left[(9-24x+3z)\frac{\xi}{4}+(3+z)\frac{\Sigma}{24}\right], \tag{A3}\] \[\delta x =\frac{g_{3}^{2}a}{4\pi}\Big{\{}\Big{[}1-6x(3+z)+48x^{2}+\tfrac{1}{2}(1+z)^{2}\Big{]}\frac{\xi}{4}-x(3+z)\frac{\Sigma}{12}\Big{\}}, \tag{A4}\] where \(\Sigma=3.175911...\) and \(\xi=0.152859...\) are constants. We then construct the lattice action using the relations between the continuum parameters and the lattice parameters \(\beta_{G},\beta_{Y},\beta_{H},\beta_{2},\beta_{4}\) [60], \[\beta_{Y} =\frac{\beta_{G}}{\bar{z}},\quad\beta_{H}=\frac{8}{\beta_{G}},\quad\beta_{4}=\frac{\beta_{H}^{2}}{\beta_{G}}\bar{x}, \tag{A5}\] \[\frac{\beta_{2}}{\beta_{H}} =3+\frac{8\bar{y}}{\beta_{G}}-(3+12\bar{x}+\bar{z})\frac{\Sigma}{4\pi\beta_{G}}\] \[-\frac{1}{2\pi^{2}\beta_{G}^{2}}\bigg{[}\left(\frac{51}{16}-\frac{9\bar{z}}{8}-\frac{5\bar{z}^{2}}{16}+9\bar{x}-12\bar{x}^{2}+3\bar{x}\bar{z}\right)\] \[\qquad\times\left(\ln(\tfrac{3}{2}\beta_{G})+0.09\right)\] \[+4.9-0.9\bar{z}+0.01\bar{z}^{2}+5.2\bar{x}+1.7\bar{x}\bar{z}\bigg{]}, \tag{A6}\] using the modified parameters \[\beta_{G} =\frac{4}{g_{3}^{2}a}Z_{g}^{-1}=\frac{4}{g_{3}^{2}a}+0.6674...\, \tag{A7}\] \[\bar{x} =\frac{x+\delta x}{Z_{g}},\quad\bar{y}=y\frac{Z_{m}}{Z_{g}^{2}},\quad\bar{z}=z\frac{1}{Z_{g}Z_{b}}. \tag{A8}\] Then the lattice observables are related to the continuum values with the parameters \(x,y,z\) by a multiplicative correction (and possible renormalization factors). For example, the lattice observable \(\langle\frac{1}{2}{\rm Tr}\,\Phi^{\dagger}\Phi\rangle\) is related to the \(\overline{\rm MS}\) renormalized 3d continuum value \(\langle\phi^{\dagger}\phi\rangle\) by equation (14).
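The relations (A1)-(A8) translate directly into code. The sketch below computes the lattice couplings from \((x,y,z)\) and \(g_{3}^{2}a\); the example input values are only of the order of magnitude suggested by Fig. 1, not our tuned simulation points. Note that the constant \(\xi\) here is the lattice constant of (A1)-(A4), unrelated to the radial coordinate of appendix B.

```python
import math

SIGMA, XI = 3.175911, 0.152859   # lattice constants of (A1)-(A4)

def lattice_params(x, y, z, g2a):
    """Continuum (x, y, z) and g3^2 a -> lattice couplings, per (A1)-(A8)."""
    k = g2a / (4 * math.pi)
    Zg = 1.0 / (1 + k * (math.pi / 3 + 6 * XI + SIGMA / 24))
    Zb = 1 + z * k * (math.pi / 3 - XI / 12 + SIGMA / 24)
    Zm = 1.0 / (1 + k * ((9 - 24 * x + 3 * z) * XI / 4 + (3 + z) * SIGMA / 24))
    dx = k * ((1 - 6 * x * (3 + z) + 48 * x**2 + 0.5 * (1 + z) ** 2) * XI / 4
              - x * (3 + z) * SIGMA / 12)
    bG = 4.0 / (g2a * Zg)                     # = 4/(g3^2 a) + 0.6674...
    xb, yb, zb = (x + dx) / Zg, y * Zm / Zg**2, z / (Zg * Zb)
    bY, bH = bG / zb, 8.0 / bG
    b4 = bH**2 / bG * xb
    b2 = bH * (3 + 8 * yb / bG
               - (3 + 12 * xb + zb) * SIGMA / (4 * math.pi * bG)
               - 1 / (2 * math.pi**2 * bG**2) * (
                   (51/16 - 9*zb/8 - 5*zb**2/16 + 9*xb - 12*xb**2 + 3*xb*zb)
                   * (math.log(1.5 * bG) + 0.09)
                   + 4.9 - 0.9*zb + 0.01*zb**2 + 5.2*xb + 1.7*xb*zb))
    return dict(beta_G=bG, beta_Y=bY, beta_H=bH, beta_2=b2, beta_4=b4)

# illustrative inputs of the right order of magnitude, with g3^2 a = 1/2
print(lattice_params(x=0.3, y=0.0, z=0.3, g2a=0.5))
```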
## Appendix B Small field analytical estimate In this appendix we present details of the analytical computation in a small external field. We follow the analysis performed in [42]. When the U(1) field is included, the sphaleron's spherical symmetry is reduced to an axial symmetry. With the physical value of the weak mixing angle \(\theta_{W}\), the angular dependence of the solution is found to be mild [27] at zero magnetic field. The expansion parameter, with external magnetic field \(B_{c}^{4d}\), is effectively \(\theta_{W}B_{c}^{4d}/gv^{2}\), and the angular dependence becomes relevant for larger magnetic fields. For small fields it suffices to use a simpler ansatz that is spherically symmetric [26]. The ansatz depends on four functions \(f(\xi),f_{0}(\xi),f_{3}(\xi),h(\xi)\) of the dimensionless radial coordinate \(\xi\equiv gvr\), where \(v(T)\) is the temperature-dependent Higgs expectation value. The energy functional of the sphaleron using the ansatz (see [26; 42]) in a constant external hypermagnetic field \(B_{c}^{4d}\) is \(E=E_{0}-E_{\rm dip}\), with \[E_{0} =\frac{4\pi v}{g}\int_{0}^{\infty}\mathrm{d}\xi\Bigg{[}\frac{8}{3}f^{\prime 2}+\frac{4}{3}f_{3}^{\prime\,2}+\frac{1}{2}\xi^{2}h^{\prime 2}+\frac{4g^{2}}{3g^{\prime\,2}}f_{0}^{\prime\,2}\] \[+\frac{8}{3\xi^{2}}\left\{2f_{3}^{2}(1-f)^{2}+[f(2-f)-f_{3}]^{2}+\frac{g^{2}}{g^{\prime\,2}}(1-f_{0})^{2}\right\}\] \[+\frac{h^{2}}{3}\left\{(f_{0}-f_{3})^{2}+2(1-f)^{2}\right\}+\frac{\lambda}{4g^{2}}\xi^{2}(h^{2}-1)^{2}\Bigg{]}\, \tag{B1}\] and \[E_{\rm dip}=\int_{0}^{\infty}\mathrm{d}\xi\frac{2\pi g^{\prime}}{3g^{3}v}(f_{0}-f_{3})h^{2}B_{c}^{4d}. \tag{B2}\] The field equations for the ansatz functions turn out to be \[h^{\prime\prime}+\frac{2}{\xi}h^{\prime}-\frac{2h}{3\xi^{2}}\left[2(1-f)^{2}+(f_{0}-f_{3})^{2}\right]-\frac{\lambda}{g^{2}}(h^{2}-1)h=0,\] \[f^{\prime\prime}+\frac{2(1-f)}{\xi^{2}}\left[f(f-2)+f_{3}+f_{3}^{2}\right]+\frac{1}{4}(1-f)h^{2}=0,\] \[f^{\prime\prime}_{0}+\frac{2(1-f_{0})}{\xi^{2}}-\frac{g^{\prime 2}}{4g^{2}}(f_{0}-f_{3})h^{2}=0,\] \[f^{\prime\prime}_{3}-\frac{2}{\xi^{2}}\left[3f_{3}+f(f-2)(1+2f_{3})\right]-\frac{h^{2}}{4}(f_{3}-f_{0})=0. \tag{B3}\] The ansatz functions are subject to the following boundary conditions: \[f,h\to 1,\ f_{3},f_{0}\to 1-\frac{\sin 2\theta_{W}\,\xi^{2}}{8gv^{2}}B_{c}^{4d},\ \ \text{as}\ \xi\to\infty\,\] \[f,h,f_{3}\to 0,\ f_{0}\to 1,\ \ \text{as}\ \xi\to 0. \tag{B4}\] It is convenient to make a change of variables, \[g_{i}(\xi)=f_{i}(\xi)+\sin 2\theta_{W}\frac{\xi^{2}}{8gv^{2}}B_{c}^{4d}, \tag{B5}\] for \(i=0,3\), so that the boundary conditions for the new functions at infinity are simply \(g_{i}\to 1\) as \(\xi\to\infty\). Furthermore, we use a change of variables \(x\equiv\xi/(3+\xi)\), which maps \(\xi\to\infty\) to \(x\to 1\). Finally, we use the Standard Model values for the parameters \(\lambda,g,g^{\prime}\) at the electroweak scale. With the above we have all the ingredients to compute the change to the sphaleron energy for small fields in the spherical approximation. From the set of coupled differential equations (B3) we solve the functions numerically using a fourth-order collocation method. (A better way would be to treat this as a minimization problem and minimize the energy functional, but the method used suffices for our rough comparison.) The equations (B3) are divergent at the boundaries, and thus we have to solve the system only in the range \([\epsilon,1-\epsilon]\), where \(\epsilon\) is made as small as possible. We were able to solve the system in a typical range of \([0.0006,0.913]\). Even quite large changes (\(\sim 0.01\)) in these values did not significantly change the form of the resulting functions nor the energy computed from them. The solved functions are plotted in Fig. 15 for zero magnetic field and for one example of a non-zero magnetic field. The energy of the sphaleron configuration is obtained by numerically integrating the energy functional while omitting the constant external magnetic field terms, which would make the expression divergent. The energy as a function of the magnetic field is plotted in Fig. 16. We assume that the change in energy, \(\Delta E\equiv E(B=0)-E(B)\), is due to a simple dipole interaction, \(\Delta E=-\vec{\mu}_{\rm sph}\cdot\vec{B}_{c}^{4d}\), and that the change of the rate \(\Gamma\) is only due to this energy difference, \(\Gamma\sim\exp(\Delta E/T)\Gamma_{0}\), where \(\Gamma_{0}\) is the rate without the magnetic field.
Averaging over the space of orientations of the dipole, the change to the rate is then roughly \[\Delta\ln\Gamma/T^{4}\sim\ln\left\{\int\frac{\mathrm{d}\Omega}{4\pi}\exp\left[-\frac{\mu_{\rm sph}B_{c}^{4d}}{T}\cos\theta\right]\right\}\simeq\ln\frac{\sinh(\Delta E/T)}{\Delta E/T}\. \tag{B6}\]
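For completeness, a sketch of how the boundary-value problem (B3) can be set up with an off-the-shelf collocation solver (scipy's solve_bvp) at zero external field, where the boundary values reduce to constants. We use the corrected form of the \(f\) equation given above, with illustrative coupling ratios \(\lambda/g^{2}\approx 0.30\) and \(g^{\prime 2}/g^{2}\approx 0.29\); convergence of such solvers can depend on the mesh and the initial guess, so this is a starting point rather than the production setup (which used a custom fourth-order collocation in the variable \(x=\xi/(3+\xi)\)).

```python
import numpy as np
from scipy.integrate import solve_bvp

LAM, GP2 = 0.30, 0.29      # lambda/g^2 and g'^2/g^2 (illustrative values)

def rhs(xi, y):
    # y = (h, h', f, f', f0, f0', f3, f3'); second derivatives from (B3)
    h, dh, f, df, f0, df0, f3, df3 = y
    xi2 = xi**2
    return np.vstack([
        dh,
        -2/xi*dh + 2*h/(3*xi2)*(2*(1-f)**2 + (f0-f3)**2) + LAM*(h**2-1)*h,
        df,
        -2*(1-f)/xi2*(f*(f-2) + f3 + f3**2) - 0.25*(1-f)*h**2,
        df0,
        -2*(1-f0)/xi2 + 0.25*GP2*(f0-f3)*h**2,
        df3,
        2/xi2*(3*f3 + f*(f-2)*(1+2*f3)) + 0.25*h**2*(f3-f0),
    ])

def bc(ya, yb):
    # xi -> 0: f = h = f3 = 0, f0 = 1;  xi -> infinity: all -> 1 (B = 0)
    return np.array([ya[2], ya[0], ya[6], ya[4] - 1,
                     yb[2] - 1, yb[0] - 1, yb[6] - 1, yb[4] - 1])

xi = np.linspace(1e-3, 30.0, 400)
guess = np.zeros((8, xi.size))
prof = 1 - np.exp(-xi)                  # smooth monotone initial guess
guess[0], guess[2], guess[6] = prof, prof, prof
guess[4] = 1.0
guess[1] = guess[3] = guess[7] = np.exp(-xi)
sol = solve_bvp(rhs, bc, xi, guess, max_nodes=20000)
print("converged:", sol.status == 0)
```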
2306.12801
Violation of Bell inequality by photon scattering on a two-level emitter
Entanglement, the non-local correlations present in multipartite quantum systems, is a curious feature of quantum mechanics and the fuel of quantum technology. It is therefore a major priority to develop energy-conserving and simple methods for generating high-fidelity entangled states. In the case of light, entanglement can be realized by interactions with matter, although the required nonlinear interaction is typically weak, thereby limiting its applicability. Here, we show how a single two-level emitter deterministically coupled to light in a nanophotonic waveguide is used to realize genuine photonic quantum entanglement for excitation at the single photon level. By virtue of the efficient optical coupling, two-photon interactions are strongly mediated by the emitter realizing a giant nonlinearity that leads to entanglement. We experimentally generate and verify energy-time entanglement by violating a Bell inequality (Clauser-Horne-Shimony-Holt Bell parameter of $S=2.67(16)>2$) in an interferometric measurement of the two-photon scattering response. As an attractive feature of this approach, the two-level emitter acts as a passive scatterer initially prepared in the ground state, i.e., no advanced spin control is required. This experiment is a fundamental advancement that may pave a new route for ultra-low energy-consuming synthesis of photonic entangled states for quantum simulators or metrology.
Shikai Liu, Oliver August Dall'Alba Sandberg, Ming Lai Chan, Björn Schrinski, Yiouli Anyfantaki, Rasmus Bruhn Nielsen, Robert Garbecht Larsen, Andrei Skalkin, Ying Wang, Leonardo Midolo, Sven Scholz, Andreas Dirk Wieck, Arne Ludwig, Anders Søndberg Sørensen, Alexey Tiranov, Peter Lodahl
2023-06-22T11:01:24Z
http://arxiv.org/abs/2306.12801v1
# Violation of Bell inequality by photon scattering on a two-level emitter ###### Abstract Entanglement, the non-local correlations present in multipartite quantum systems, is a curious feature of quantum mechanics and the fuel of quantum technology. It is therefore a major priority to develop energy-conserving and simple methods for generating high-fidelity entangled states. In the case of light, entanglement can be realized by interactions with matter, although the required nonlinear interaction is typically weak, thereby limiting its applicability. Here, we show how a single two-level emitter deterministically coupled to light in a nanophotonic waveguide is used to realize genuine photonic quantum entanglement for excitation at the single photon level. By virtue of the efficient optical coupling, two-photon interactions are strongly mediated by the emitter realizing a giant nonlinearity that leads to entanglement. We experimentally generate and verify energy-time entanglement by violating a Bell inequality (Clauser-Horne-Shimony-Holt Bell parameter of \(S=2.67(16)>2\)) in an interferometric measurement of the two-photon scattering response. As an attractive feature of this approach, the two-level emitter acts as a passive scatterer initially prepared in the ground state, i.e., no advanced spin control is required. This experiment is a fundamental advancement that may pave a new route for ultra-low energy-consuming synthesis of photonic entangled states for quantum simulators or metrology. A quantum pulse of light interacting with a two-level emitter, see FIG. 1(a,d), constitutes a new experimental paradigm in quantum optics [1; 2]. Despite its conceptual simplicity, significant quantum complexity can be encoded in the system, since a quantum pulse represents an infinitely large (continuous) Hilbert space. From an experimental point of view, this is an attractive setting, since a two-level emitter can implement a highly nonlinear operation on the incoming pulse without the need for demanding and error-susceptible emitter preparation schemes. To enhance this photon-photon nonlinearity, the main experimental challenge is to promote the radiative coupling of the emitter such that it dominates deteriorating decoherence processes - such advancements have been made in the past decades using semiconductor quantum dots (QDs) in photonic crystal waveguides (PhC WGs) and cavities [3]; see illustration of a PhC WG device in FIG. 1(d). It is an exciting ongoing research endeavor to exploit the properties and applicability of the quantum nonlinear response of a two-level emitter. The essential nonlinear operation has been explored using various quantum emitters such as QDs [4], color centers in diamond [5], atoms [6], and single molecules [7]. Further experimental advancements include quadrature squeezing of light [8; 9], two-photon correlation dynamics and photonic bound states [10; 11]. Theoretical proposals include photon sorters for deterministic Bell state analyzers [12], quantum logic gates [13; 14], and single-photon transistors [15]. Furthermore, many-body waveguide quantum electrodynamics may be pushed to new realms of strongly correlated light and matter [16; 17]. While it was theoretically predicted that the two-level nonlinear response can induce photon-photon correlations [1], whether this nonlinearity can lead to photon entanglement has never been truly verified experimentally.
Here, we demonstrate that a two-level quantum emitter coherently coupled to a PhC WG can induce strong energy-time entanglement between two scattered photons, see FIG. 1(a), sufficient for violating a Bell inequality and thus local realism under the fair-sampling assumption. The experiment couples a continuous wave (incoming light) and a discrete quantum system (emitter), offering a route to non-Gaussian photonic operations - a type of operation vigorously searched for in continuous-variable quantum computing architectures [18]. Previous research on entanglement generation with quantum emitters exploited the strong excitation (Mollow) regime in bulk samples [19] or the QD biexciton radiative cascade [20; 21; 22]. To the best of our knowledge, no such studies have yet answered the above open question and proved experimentally that passive scattering of weak fields from a two-level QD in a PhC WG can induce genuine entanglement. This work opens a conceptually new and advantageous route to energy-time entanglement generation that may be an attractive alternative to four-wave mixing sources [23], since it operates at the ultra-low energy consumption level of single photons and no complex and decoherence-sensitive pumping schemes are required. We consider a single two-level emitter deterministically coupled to a single propagating spatial mode in a PhC WG, cf. FIG. 1(d). A weak coherent input field is launched into the PhC WG and interacts with the emitter with coupling efficiency \(\beta=\frac{\gamma}{\Gamma}\), governed by the ratio between the radiative decay rate into the waveguide mode \(\gamma\) and the QD total decay rate \(\Gamma\) [24]. For \(\beta=1\) with no decoherence processes, the single-photon component is elastically reflected via interaction with the emitter, while the two-photon component can be inelastically scattered into the forward (transmission) mode, leading to energy exchange and photon bunching in time [15, 1], as shown in FIG. 1(a). The latter process is analogous to degenerate four-wave mixing with a Kerr nonlinearity [23]. Experimentally, the two-photon scattering process is studied in a Franson interferometer [25] with time-resolved photon correlation measurements, see FIG. 2. In this way, a Clauser-Horne-Shimony-Holt (CHSH) Bell inequality entanglement criterion can be tested, where a Bell parameter of \(S=2\) constitutes the locality bound [26]. Various experimental imperfections influence \(S\), including the finite photon-emitter coupling efficiency (\(\beta\)-factor), the pure dephasing rate (\(\gamma_{d}\) relative to the emitter linewidth \(\Gamma\)), and the strength of the incoming light (mean photon number within the emitter lifetime, \(n\)). These imperfections result in a single-photon component (elastic scattering) that is not fully reflected, thereby reducing \(S\). We found that \(S\) is sensitive to \(\gamma_{d}\), \(\Gamma\) and \(n\) to first order but is remarkably robust to coupling loss, with a quartic dependence in the limit of \(\beta\to 1\): \[S(\beta)\approx 2\sqrt{2}\left[1-(1-\beta)^{4}\right]. \tag{1}\] The complete theory is presented in Supplementary Note 7, where the experimental requirements for violating the Bell inequality are also benchmarked in detail (Supplementary FIG. S6).
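To make the robustness statement of Eq. (1) concrete, the sketch below evaluates \(S(\beta)\) and the coupling efficiency at which \(S\) drops to the local bound \(S=2\); since Eq. (1) is a \(\beta\to 1\) approximation, the threshold value should be read as indicative only.

```python
import math

def S(beta: float) -> float:
    # CHSH parameter in the beta -> 1 approximation, Eq. (1)
    return 2 * math.sqrt(2) * (1 - (1 - beta) ** 4)

for beta in (0.80, 0.92, 1.00):
    print(f"beta = {beta:.2f}:  S = {S(beta):.3f}")

# beta at which S = 2 within this quartic approximation
beta_min = 1 - (1 - 1 / math.sqrt(2)) ** 0.25
print(f"S > 2 requires beta > {beta_min:.3f} (indicative only)")
```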
Figure 1: (color online) **Two-photon energy-time entanglement induced by coherent interaction of two photons with a quantum dot (QD) integrated into a photonic crystal waveguide (PhC WG)**. (a) Operational principle of the photon scattering and entanglement processes. A single-photon wave packet is predominantly reflected by elastic scattering on a two-level emitter, while the two-photon wave packet can be inelastically scattered in the forward direction, thereby generating energy-time entanglement. The entanglement is probed using two unbalanced Mach–Zehnder interferometers (UMZIs), see FIG. 2. Each UMZI is used to realize time projections onto the superposition state \(\frac{|s\rangle+e^{i\phi}|l\rangle}{\sqrt{2}}\), as illustrated on a Bloch sphere, where \(|s\rangle\) (\(|l\rangle\)) corresponds to a photon taking the short (long) path, and \(\phi\) is the phase setting of the UMZI. \(\phi=0\) (blue vector) and \(\phi=\pi\) (red vector) refer to the two settings for data in (b). (b) Transmission intensity measurements through the PhC WG and one UMZI versus QD detuning. The blue curve indicates the transmission dip by resonant scattering of a weak coherent state \(|\alpha\rangle\). Suppression of the elastically scattered laser photons by a destructive interference phase (\(\phi=\pi\)) reveals the inelastically scattered photons (red). (c) Measured normalized second-order correlation function \(g^{(2)}(\tau)\) of the light transmitted through the PhC WG on resonance with the QD, reaching values above 200, a signature of strong bunching induced by the scattering. (d) Schematic of the QD-embedded PhC WG structure with two mode adaptors, including shallow-etched gratings (SEGs) and nanobeam access waveguides to the photonic crystal section. (e) Calculated normalized joint spectral intensity for a laser linewidth of \(\Gamma_{L}/2\pi=100\) kHz, a Purcell enhanced QD linewidth of \(\Gamma/2\pi=2.3\) GHz, and assuming ideal coupling of the QD to the PhC WG. \(\Delta_{a(b)}\) is the frequency difference between the output photon (\(a\) or \(b\)) and the input laser: \(\Delta_{a(b)}=\omega_{a(b)}-\omega_{p}\). The width of the biphoton spectrum is determined by the laser linewidth \(\Gamma_{L}\), while each photon is broadened by the QD linewidth \(\Gamma\). Upper-right inset: the corresponding spectra of the input coherent state \(|\alpha\rangle\) (blue) and the output biphoton state \(|2_{I}\rangle\) (red), respectively. Bottom-left inset: enlarged joint spectral intensity spanning a range of 1 MHz. (f) The corresponding normalized joint temporal intensity. The biphoton correlation time is determined by the QD lifetime \(\tau_{QD}\), cf. Supplementary Note 5 for characterization.

FIGs. 1(a) and (d) show the conceptual scheme of the experiment. We use a narrow-linewidth continuous-wave laser as a weak coherent input state \(\ket{\alpha}\). The light is coupled in and out of the PhC WG via two mode adaptors, with the sample mounted in a cryostat operating at 4 K. Due to the high coupling efficiency of the photon-emitter interface, the photon state is strongly modified by the nonlinear interaction, which induces energy-time entanglement involving a continuum of optical modes in a multi-dimensional Hilbert space. The photon pairs generated in different temporal modes can subsequently be analyzed in a Franson interferometer [25]. The present experiment is conducted in the regime of weak resonant excitation, i.e., far below the saturation threshold of the emitter. The quantum correlations induced by the nonlinear scattering are illustrated by the two-photon joint spectral and temporal intensity distributions in FIGs. 1(e)-(f).
The input weak coherent state \(\ket{\alpha}\) resembles a Dirac delta function in frequency, while the output entangled photon pair \(\ket{2_{I}}\) is Lorentzian broadened by the QD linewidth \(\Gamma\). It can be expressed as \(\ket{2_{I}}=\frac{1}{2}\int d\Delta\mathcal{T}_{\Delta,-\Delta}\ket{1_{\Delta }}\ket{1_{-\Delta}}\) (Supplementary Note 6), where \(\Delta=\Delta_{a}=-\Delta_{b}\) is the frequency detuning of each outgoing photon relative to the pump frequency, and \(\mathcal{T}_{\Delta,-\Delta}=-4\beta^{2}/[\pi\Gamma(1+4\frac{\Delta^{2}}{ \Gamma^{2}})]\) is the two-photon Lorentzian spectrum [27]. Energy conservation demands \(2\omega_{p}=\omega_{a}+\omega_{b}\), which introduces anti-correlation in the two-photon joint spectral density, cf. FIG. 1(e). The time uncertainty of the generated photon pair is determined by the pump laser coherence time \(\tau_{L}>1\)\(\mu\)s (inversely proportional to the laser linewidth \(\Gamma_{L}/2\pi\approx 100\) kHz), which is much longer than the Purcell-enhanced QD lifetime \(\tau_{QD}\approx 69\) ps (FIG. 1(f)). FIG. 1(b) shows the transmission intensity of scattered photons versus QD detuning, measured using one of the two unbalanced Mach-Zehnder interferometers (UMZIs) at different phases \(\phi\) (see UMZI setup in FIG. 2(b)). This allows separate measurements of either the extinction of elastically scattered photons from the weak coherent state (blue, \(\phi=0\)) or the inelastically scattered photons (red, \(\phi=\pi\)). On resonance, the single-photon component is primarily reflected due to destructive interference in the PhC WG, while the transmitted mode consists of residual coherent photons from the laser and inelastically scattered photons. For \(\phi=0\), elastic scattering dominates as the laser photons traversing the short and long paths constructively interfere at the beamsplitter of one UMZI, thereby revealing a transmission dip (blue data in FIG. 1(b)). Conversely, for \(\phi=\pi\), these laser photons destructively interfere, allowing direct observation of the inelastically scattered photons (red data in FIG. 1(b)). The pronounced extinction of the transmission intensity is indicative of efficient radiative coupling to the PhC WG. Figure 2: (color online) **Experimental setup and characterization of two-photon energy-time entanglement**. (a) Time correlation histograms of coincidence counts for constructive (blue, \(\phi_{a}+\phi_{b}=0\)) and destructive (red, \(\phi_{a}+\phi_{b}=\pi\)) interference between two photons traversing the short (\(s\)) and long (\(l\)) arms of the UMZIs, respectively. (b) Experimental setup including the PhC WG chip (light green area), spectral filter (light orange area), and the Franson interferometer (light blue areas). The two Bloch spheres illustrate the two independently controlled phases \(\phi_{a}\) and \(\phi_{b}\). LP: linear polarizer; HWP: half-wave plate; QWP: quarter-wave plate; SNSPD: superconducting nanowire single-photon detector; FC: fiber collimator; BS: beam splitter; PBS: polarizing beam splitter; PC: polarization controller; FBS: fiber beam splitter; PD: photodiode. In order to control the phase difference between the two interferometer arms, we actively stabilize the UMZIs with a PID module locked by the same laser that excites the QD. (c) 2D correlation histogram of coincidence counts versus phase and time delay. By modelling the experimental data sets, we extract \(\beta=92\%\) and a Purcell enhancement from slow light in 
the PhC WG of \(F_{P}\approx 15.9\), which increases the QD decay rate to \(\Gamma/2\pi=2.3\) GHz (compared to \(\approx 0.14\) GHz for QDs in bulk [28]). In the entanglement characterization discussed below, a narrow-bandwidth notch spectral filter is implemented to suppress residual laser leakage due to experimental imperfections (non-unity \(\beta\)-factor, residual slow spectral diffusion, etc.), see Supplementary Note 1 and FIG. S7 for further details of the filter and its impact on various parameter estimates of the QD. The successful preparation of a two-photon component is quantified by second-order photon correlation measurements. Here we observed a pronounced photon bunching of \(g^{(2)}(0)\approx 210\) after applying the spectral filter, see FIG. 1(c). This explicitly demonstrates that the incoming Poissonian photon distribution is significantly altered by the strong nonlinear interaction with the QD [27; 15]. While our theoretical model does not take the filter into account, it accurately describes the experimental data of FIG. 1(c) once the input model parameters are adjusted, see Supplementary Notes 3 and 7.5 for the details and justification. We modelled both the unfiltered and filtered sets of second-order correlation (\(g^{(2)}\)) data versus the mean number of pump photons \(n\), see Supplementary Note 3. FIG. 2(b) illustrates the experimental setup. The scattered light from the QD PhC WG is spectrally filtered and directed by a nonpolarizing fiber beamsplitter to two identical UMZIs for entanglement analysis. The time difference between the two arms of each UMZI is set to \(\tau_{I}=3.6\) ns, which is shorter than \(\tau_{L}\) and longer than the two-photon correlation time set by \(\tau_{QD}\). We implement two-photon Franson interference measurements by recording time-resolved correlations between photon pairs while controlling the interferometric phases (\(\phi_{a}\) and \(\phi_{b}\)). FIG. 2(a) reveals three distinct correlation peaks, corresponding to the possible path combinations that the two photons can take: long-short \(\ket{l,s}\) (left), short-short \(\ket{s,s}\) or long-long \(\ket{l,l}\) (center), and short-long \(\ket{s,l}\) (right). Since the two paths \(\ket{s,s}\) and \(\ket{l,l}\) cannot be distinguished in time, the central peak corresponds to projection onto the entangled state \(\ket{s,s}+e^{i(\phi_{a}+\phi_{b})}\ket{l,l}\). By tuning the interferometer phases such that \(\phi_{a}+\phi_{b}=0\) (\(\pi\)), we observe constructive (destructive) interference of the central peak (see FIG. 2(a)), stemming from the two-photon energy-time entanglement. By further measuring the 2D histogram shown in FIG. 2(c), almost background-free quantum interference is observed, testifying to the highly efficient spectral selection of the two-photon scattering component, see Supplementary Note 4 for a background-noise comparison between filtered and unfiltered data. In FIG. 3(a), we scan the phase \(\phi_{b}\) for two different phase settings of interferometer \(a\) (\(\phi_{a}=0,\pi/4\)). The Franson interference visibility is defined as \(V=(R_{\text{max}}-R_{\text{min}})/(R_{\text{max}}+R_{\text{min}})\), where \(R_{\text{min}}\) (\(R_{\text{max}}\)) is the coincidence rate of the central peak at the minimum (maximum) of the interference curve. To obtain higher count rates with smaller fluctuations (error bars), the coincidence time window is increased to 0.512 ns compared with its counterpart (0.064 ns) in FIGs. 1-2. 
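Under the idealised sinusoidal Franson model, where the central-peak correlation function takes the form \(E(\phi_{a},\phi_{b})=V\cos(\phi_{a}+\phi_{b})\), the optimal CHSH settings yield \(S=2\sqrt{2}\,V\), so a visibility above \(1/\sqrt{2}\approx 71\%\) already implies a Bell violation. A minimal numerical sketch (the sinusoidal form and the phase settings below are the textbook idealisation, not a statement about the raw data):

```python
import numpy as np

V = 0.95  # Franson interference visibility (measured value, see below)

def E(phi_a, phi_b, V=V):
    """Correlation function under the idealised sinusoidal Franson model."""
    return V * np.cos(phi_a + phi_b)

# Optimal settings for S = |E(a,b) + E(a,b') - E(a',b) + E(a',b')|
a, ap = 0.0, np.pi / 2
b, bp = np.pi / 4, -np.pi / 4

S = abs(E(a, b) + E(a, bp) - E(ap, b) + E(ap, bp))
print(S, 2 * np.sqrt(2) * V)  # both ~2.69 > 2, consistent with the measured 2.67(16)
```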
Fitting the data with a sinusoidal function, we extract an interference visibility of \(V=95(4)\%\), which indicates the presence of entanglement [29]. The energy-time entangled photon pair induced by the nonlinear interaction is thoroughly certified with a CHSH Bell inequality test [26]. Figure 3: (color online) **Two-photon Franson interference measurements and observation of a violation of the CHSH inequality.** (a) Interference curves as a function of \(\phi_{b}\) with \(\phi_{a}\) fixed at 0 (red) and \(\pi/4\) (blue). The data are fitted with a sinusoidal model from which a visibility of 95(4)% is extracted. (b) Measured correlation functions from which \(S=2.67(16)\) is recorded. (c) \(S\) parameter versus \(n\) (bottom x-axis) or the corresponding pump power in the PhC WG (top x-axis). \(n\) and pump powers are calibrated by fitting the full set of transmission intensity data (Supplementary Note 2). The solid curve is the theoretical model (Supplementary Note 7.5) with parameters taken from the filtered power-saturation \(g^{(2)}\) measurements in Supplementary Note 3, i.e., no additional fitting was performed. The black dashed line represents the locality bound. The data in (a) and (b) are recorded for the lowest \(n\) (0.0024) or pump power (7.2 pW) in (c). The error bars show the standard deviations assuming a Poissonian distribution for the photon statistics. The CHSH \(S\) parameter is defined as \(S=|E(\phi_{a},\phi_{b})+E(\phi_{a},\phi_{b^{\prime}})-E(\phi_{a^{\prime}},\phi_{ b})+E(\phi_{a^{\prime}},\phi_{b^{\prime}})|\), where \(E(\phi_{a},\phi_{b})\) denotes the correlation function required for the CHSH inequality, which is a combination of four unnormalized \(g^{(2)}\) functions measured after the UMZIs at different phase settings (Supplementary Note 7.4). FIG. 3(b) shows the strongest correlations measured at the lowest value of \(n\) in FIG. 3(c), which corresponds to a pump power of 7.2 pW, i.e., at the single-photon level. We record a pronounced violation of the CHSH Bell inequality, \(S=2.67(16)>2\), by more than four standard deviations. This validates that non-local quantum correlations can be induced by two-photon inelastic scattering off a deterministically coupled two-level emitter. FIG. 3(c) explores the power dependence of the \(S\) parameter, and the experimental data agree well with the theoretical model detailed in Supplementary Note 7.5. The entanglement quality is primarily limited by photon-distinguishability contributions from pure dephasing, as well as by multi-photon scattering processes at finite \(n\). We have experimentally demonstrated the violation of the CHSH Bell inequality by weak scattering off a single two-level emitter deterministically coupled to light in a PhC WG. Our scheme operates at a picowatt pump power (single-photon level) by exploiting the giant nonlinearity of the emitter, laying the foundation for ultimately energy-efficient and integrated sources of entangled photons. The source is easy to operate experimentally since no elaborate excitation schemes or active spin control are required, which would otherwise constitute a significant overhead for future up-scaling of entanglement generation schemes. Future experiments could exploit the creation of high-dimensional entanglement [30] and the synthesis of photonic quantum states useful for quantum-optics neural networks [31]. Another promising direction is to engineer the inelastic scattering processes by many-body subradiant states using coupled QDs [32, 33]. 
Waveguide-mediated quantum nonlinear interactions will prove essential to applications within photonic quantum computing [34], quantum communication [35] and quantum sensing [36, 37]. **Data availability** The data that support the figures in this manuscript are available from the corresponding authors upon request. **Acknowledgement** We gratefully acknowledge financial support from Danmarks Grundforskningsfond (DNRF 139, Hy-Q Center for Hybrid Quantum Networks) and the Novo Nordisk Foundation (Challenge project "Solid-Q"). Furthermore, this project has received funding from the European Union's Horizon 2020 research and innovation programmes under Grant Agreement No. 824140 (TOCHA, H2020-FETPROACT-01-2018). B.S. acknowledges financial support from Deutsche Forschungsgemeinschaft (DFG, German Research Foundation), grant no. 449674892. O.A.D.S. acknowledges funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement no. 801199. M.L.C. acknowledges funding from the European Union's Horizon 2020 Research and Innovation programme under grant agreement No. 861097 (project name QUDOT-TECH). **Author contributions** S.L. and A.T. performed the experiments based on interferometers built by R.B.N., R.G.L. and Y.A. S.L. and A.T. carried out the measurements and analyzed the data with help from M.L.C. S.L. prepared the figures with input from A.T., A.S.S. and P.L. O.A.D.S. developed the theory with input from B.S. and A.S.S. S.L., A.T., O.A.D.S., and P.L. wrote the manuscript with input from all the authors. S.S., A.D.W., and A.L. prepared the wafer. Y.W. and L.M. fabricated the chip. A.T. and P.L. conceived the idea and supervised the project. **Competing interests** P.L. is the founder of the company Sparrow Quantum, which commercializes single-photon sources. The authors declare no other competing interests.
2303.10610
DiffMIC: Dual-Guidance Diffusion Network for Medical Image Classification
Diffusion Probabilistic Models have recently shown remarkable performance in generative image modeling, attracting significant attention in the computer vision community. However, while a substantial amount of diffusion-based research has focused on generative tasks, few studies have applied diffusion models to general medical image classification. In this paper, we propose the first diffusion-based model (named DiffMIC) to address general medical image classification by eliminating unexpected noise and perturbations in medical images and robustly capturing semantic representation. To achieve this goal, we devise a dual conditional guidance strategy that conditions each diffusion step with multiple granularities to improve step-wise regional attention. Furthermore, we propose learning the mutual information in each granularity by enforcing Maximum-Mean Discrepancy regularization during the diffusion forward process. We evaluate the effectiveness of our DiffMIC on three medical classification tasks with different image modalities, including placental maturity grading on ultrasound images, skin lesion classification using dermatoscopic images, and diabetic retinopathy grading using fundus images. Our experimental results demonstrate that DiffMIC outperforms state-of-the-art methods by a significant margin, indicating the universality and effectiveness of the proposed model. Our code will be publicly available at https://github.com/scott-yjyang/DiffMIC.
Yijun Yang, Huazhu Fu, Angelica I. Aviles-Rivero, Carola-Bibiane Schönlieb, Lei Zhu
2023-03-19T09:15:45Z
http://arxiv.org/abs/2303.10610v3
# DiffMIC: Dual-Guidance Diffusion Network for Medical Image Classification ###### Abstract Diffusion Probabilistic Models have recently shown remarkable performance in generative image modeling, attracting significant attention in the computer vision community. However, while a substantial amount of diffusion-based research has focused on generative tasks, few studies have applied diffusion models to general medical image classification. In this paper, we propose the first diffusion-based model (named DiffMIC) to address general medical image classification by eliminating unexpected noise and perturbations in medical images and robustly capturing semantic representation. To achieve this goal, we devise a dual conditional guidance strategy that conditions each diffusion step with multiple granularities to improve step-wise regional attention. Furthermore, we propose learning the mutual information in each granularity by enforcing Maximum-Mean Discrepancy regularization during the diffusion forward process. We evaluate the effectiveness of our DiffMIC on three medical classification tasks with different image modalities, including placental maturity grading on ultrasound images, skin lesion classification using dermatoscopic images, and diabetic retinopathy grading using fundus images. Our experimental results demonstrate that DiffMIC outperforms state-of-the-art methods by a significant margin, indicating the universality and effectiveness of the proposed model. Our code will be publicly available at [https://github.com/scott-yjyang/DiffMIC](https://github.com/scott-yjyang/DiffMIC). Keywords: diffusion probabilistic model, medical image classification, placental maturity, skin lesion, diabetic retinopathy. ## 1 Introduction The implications of digital medical imaging in modern healthcare have led to the indispensable role of medical image analysis in clinical therapy [5]. Medical image classification, which is a fundamental step in medical image analysis, aims to distinguish medical images according to a certain criterion, such as clinical pathologies or imaging modalities. A reliable medical image classification system can assist doctors in the fast and accurate interpretation of medical images. A large number of solutions for medical image classification have been developed over the past decades in the literature, most of which are based on deep neural networks, ranging from popular convolutional neural networks to vision transformers [8, 9, 22, 23]. These methods have the potential to reduce the time and effort required for manual classification and improve the consistency and accuracy of results. However, medical images with diverse modalities, such as ultrasound (US), dermatoscopic, and fundus images, still challenge existing methods due to the presence of various ambiguous lesions and fine-grained tissues. Moreover, generating medical images under hardware limitations can cause noisy and blurry effects, which can degrade image quality and thus demand more effective feature-representation modeling for robust classification. Recently, Denoising Diffusion Probabilistic Models (DDPM) [14] have achieved excellent results in image generation and synthesis tasks [21, 2, 6, 26] by iteratively improving the quality of a given image. Specifically, DDPM is a generative model based on a Markov chain, which models the data distribution by simulating a diffusion process that evolves the input data towards a target distribution. 
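For reference, the forward half of this Markov chain admits the well-known closed-form sampling formula \(y_{t}=\sqrt{\bar{\alpha}_{t}}\,y_{0}+\sqrt{1-\bar{\alpha}_{t}}\,\epsilon\); a minimal PyTorch sketch with the linear schedule used later in this paper (the tensor shapes are illustrative only):

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)           # linear noise schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)  # \bar{alpha}_t = prod_s (1 - beta_s)

def q_sample(y0, t, noise=None):
    """Closed-form forward sample y_t ~ q(y_t | y_0)."""
    if noise is None:
        noise = torch.randn_like(y0)
    a_bar = alphas_bar[t]
    return a_bar.sqrt() * y0 + (1.0 - a_bar).sqrt() * noise, noise

# Example: noising a one-hot response variable at t = 500
y0 = torch.tensor([0.0, 1.0, 0.0, 0.0])
y_t, eps = q_sample(y0, torch.tensor(500))
```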
Although a few pioneering works have tried to adopt the diffusion model for image segmentation and object detection tasks [1, 29, 4, 12], their potential for high-level vision has yet to be fully explored. Motivated by the success of diffusion models in generative image modeling, we propose a novel denoising diffusion-based model, named DiffMIC, for accurate classification of diverse medical image modalities. To the best of our knowledge, we are the first to propose a diffusion-based model for general medical image classification. Our method can appropriately eliminate undesirable noise in medical images, since the diffusion process is stochastic in nature at each sampling step. In particular, we introduce a Dual-granularity Conditional Guidance (DCG) strategy to guide the denoising procedure, conditioning each step with both global and local priors in the diffusion process. By conducting the diffusion process on smaller patches, our method can distinguish critical tissues with fine-grained capability. Moreover, we introduce Condition-specific Maximum-Mean Discrepancy (MMD) regularization to learn the mutual information in the latent space for each granularity, enabling the network to model a robust feature representation shared by the whole image and patches. We evaluate the effectiveness of DiffMIC on three 2D medical image classification tasks: placental maturity grading, skin lesion classification, and diabetic retinopathy grading. Experimental results show that our diffusion-based classification method consistently and significantly outperforms state-of-the-art methods on all three datasets. Our DiffMIC provides a promising solution for accurate and robust classification of diverse medical image modalities. In summary, the technical contributions of our work are four-fold: * To the best of our knowledge, our work is the first one to develop a DDPM-based model for general medical image classification. * We design a novel guidance strategy to condition each step by dual-granularity priors. * We introduce maximum-mean discrepancy regularization for each conditional prior to learn mutual information in the iterative sampling process. * Experimental results on three medical image classification benchmark datasets have shown that our DiffMIC clearly outperforms state-of-the-art medical image classification methods. ## 2 Method Figure 1 shows the schematic illustration of our network for medical image classification. Given an input medical image \(x\), we pass it to an image encoder to obtain the image feature embedding \(\rho(x)\), and to a dual-granularity conditional guidance (DCG) model to produce the global prior \(\hat{y}_{g}\) and local prior \(\hat{y}_{l}\). At the training stage, we apply the diffusion process on the ground truth \(y_{0}\) and the different priors to generate three noisy variables \(y_{t}^{g}\), \(y_{t}^{l}\), and \(y_{t}\) (the global prior for \(y_{t}^{g}\), the local prior for \(y_{t}^{l}\), and dual priors for \(y_{t}\)). Then, we concatenate each of the three noisy variables \(y_{t}^{g}\), \(y_{t}^{l}\), and \(y_{t}\) with its corresponding priors and project them into a latent space. We further integrate the three projected embeddings with the image feature embedding \(\rho(x)\) in the denoising U-Net and predict the noise distributions sampled for \(y_{t}^{g}\), \(y_{t}^{l}\), and \(y_{t}\). 
We devise a condition-specific maximum-mean discrepancy (MMD) regularization loss on the predicted noise of \(y_{t}^{g}\) and \(y_{t}^{l}\), and employ the noise estimation mean squared error (MSE) loss on the predicted noise of \(y_{t}\) to collaboratively train our DiffMIC network. **Diffusion Model.** Following DDPM [14], our diffusion model also has two stages: a forward diffusion stage (training) and a reverse diffusion stage (inference). Figure 1: **Overview of our DiffMIC framework.** (a) The training stage (forward process) and (b) the inference stage (reverse process). (Darker colors indicate greater noise in the feature embedding.) (c) The DCG model \(\tau_{\mathcal{D}}\) obtains the dual priors from the raw image and ROIs to guide the diffusion process. In the forward process, Gaussian noise is added to the ground-truth response variable \(y_{0}\) through the diffusion process, conditioned on a time step \(t\) sampled from a uniform distribution over \([1,T]\); such noisy variables are denoted as \(\{y_{1},...,y_{t},...,y_{T}\}\). As suggested by the standard implementation of DDPM, we adopt a UNet as the denoising network to parameterize the reverse diffusion process and learn the noise distribution in the forward process. In the reverse diffusion process, the trained UNet \(\epsilon_{\theta}\) generates the final prediction \(\hat{y}_{0}\) by transforming the noisy variable distribution \(p_{\theta}(y_{T})\) to the ground truth distribution \(p_{\theta}(y_{0})\): \[p_{\theta}(y_{0:T-1}|y_{T},\rho(x))=\prod_{t=1}^{T}p_{\theta}(y_{t-1}|y_{t}, \rho(x)),\ \ \text{and}\ \ \ p_{\theta}(y_{T})=\mathcal{N}(\frac{\hat{y}_{g}+\hat{y}_{l}}{2},\mathbb{I}), \tag{1}\] where \(\theta\) denotes the parameters of the denoising UNet, \(\mathcal{N}(\cdot,\cdot)\) denotes the Gaussian distribution, and \(\mathbb{I}\) is the identity matrix. ### Dual-granularity Conditional Guidance (DCG) Strategy **DCG Model.** In most conditional DDPMs, the conditional prior is a single piece of given information. However, medical image classification is particularly challenging due to the ambiguity of objects. It is difficult to differentiate lesions and tissues from the background, especially in low-contrast image modalities such as ultrasound images. Moreover, unexpected noise or blurry effects may exist in regions of interest (ROIs), thereby hindering the understanding of high-level semantics. Taking only a raw image \(x\) as the condition in each diffusion step is therefore insufficient for robustly learning fine-grained information, resulting in degraded classification performance. To alleviate this issue, we design a Dual-granularity Conditional Guidance (DCG) strategy to condition each diffusion step. Specifically, we introduce a DCG model \(\tau_{\mathcal{D}}\) to compute the global and local conditional priors for the diffusion process. Similar to the diagnostic procedure of a radiologist, we can obtain a holistic understanding from the global prior and concentrate on areas corresponding to lesions from the local prior when removing the negative noise effects. As shown in Figure 1 (c), for the global stream, the raw image data \(x\) is fed into the global encoder \(\tau_{g}\) and then a \(1\times 1\) convolutional layer to generate a saliency map of the whole image. The global prior \(\hat{y}_{g}\) is then predicted from the whole saliency map by averaging the responses. For the local stream, we further crop the ROIs whose responses are significant in the saliency map. 
Each ROI is fed into the local encoder \(\tau_{l}\) to obtain a feature vector. We then leverage the gated attention mechanism [15] to fuse all feature vectors from the ROIs into a weighted vector, which is then utilized for computing the local prior \(\hat{y}_{l}\) by one linear layer. **Denoising Model.** The noisy variable \(y_{t}\) is sampled in the diffusion process based on the global and local priors computed by the DCG model following: \[y_{t}=\sqrt{\bar{\alpha}_{t}}y_{0}+\sqrt{1-\bar{\alpha}_{t}}\epsilon+(1-\sqrt {\bar{\alpha}_{t}})(\hat{y}_{g}+\hat{y}_{l}), \tag{2}\] where \(\epsilon\sim\mathcal{N}(0,I)\), \(\bar{\alpha}_{t}=\prod_{t}\alpha_{t},\alpha_{t}=1-\beta_{t}\) with a linear noise schedule \(\{\beta_{t}\}_{t=1:T}\in(0,1)^{T}\). After that, we feed the concatenated vector of the noisy variable \(y_{t}\) and the dual priors into our denoising UNet \(\epsilon_{\theta}\) to estimate the noise distribution, which can be expressed as: \[\epsilon_{\theta}(\rho(x),y_{t},\hat{y}_{g},\hat{y}_{l},t)=D(E(f([y_{t},\hat{y }_{g},\hat{y}_{l}]),\rho(x),t),t), \tag{3}\] where \(f(\cdot)\) denotes the projection layer to the latent space, \([\cdot]\) is the concatenation operation, and \(E(\cdot)\) and \(D(\cdot)\) are the encoder and decoder of the UNet. Note that the image feature embedding \(\rho(x)\) is further integrated with the projected noisy embedding in the UNet to make the model focus on high-level semantics and thus obtain more robust feature representations. In the forward process, we seek to minimize the noise estimation loss \(\mathcal{L}_{\epsilon}\): \[\mathcal{L}_{\epsilon}=||\epsilon-\epsilon_{\theta}(\rho(x),y_{t},\hat{y}_{g},\hat{y}_{l},t)||^{2}. \tag{4}\] Our method improves the vanilla diffusion model by conditioning each step estimation function on priors that combine information derived from the raw image and ROIs. ### Condition-specific MMD Regularization Maximum-Mean Discrepancy (MMD) quantifies the similarity between two distributions by comparing all of their moments [11; 17]. It can be efficiently implemented using a kernel trick. Inspired by InfoVAE [31], we introduce an additional pair of condition-specific MMD regularization losses to learn the mutual information between the sampled noise distribution and the Gaussian distribution. To be specific, we sample the noisy variable \(y_{t}^{g}\) from the diffusion process at time step \(t\) conditioned only by the global prior and then compute an MMD regularization loss as: \[\begin{split}\mathcal{L}_{MMD}^{g}&(n||m)=\mathbb{K }(n,n^{{}^{\prime}})-2\mathbb{K}(m,n)+\mathbb{K}(m,m^{{}^{\prime}}),\\ &\text{with}\ \ n=\epsilon,\ \ m=\epsilon_{\theta}(\rho(x), \sqrt{\bar{\alpha}_{t}}y_{0}+\sqrt{1-\bar{\alpha}_{t}}\epsilon+(1-\sqrt{\bar {\alpha}_{t}})\hat{y}_{g},\hat{y}_{g},t),\end{split} \tag{5}\] where \(\mathbb{K}(\cdot,\cdot)\) is a positive definite kernel that reproduces distributions in the Hilbert space. The condition-specific MMD regularization is also applied to the local prior, as shown in Figure 1 (a). While the general noise estimation loss \(\mathcal{L}_{\epsilon}\) captures the complementary information from both priors, the condition-specific MMD regularization maintains the mutual information between each prior and the target distribution. This also helps the network better model the robust feature representation shared by the dual priors and converge faster and more stably. 
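Putting Eqs. (2)-(5) together, one training step can be sketched in PyTorch as follows. The tiny conditioned network and the RBF kernel below are stand-ins of our own (the paper's denoising UNet fuses \(\rho(x)\) and timestep embeddings via Hadamard products, and only requires \(\mathbb{K}\) to be positive definite), so treat this as an illustrative sketch rather than the authors' implementation:

```python
import torch

def q_sample_prior(y0, prior, a_bar, eps):
    # Eq. (2): y_t = sqrt(a_bar) y0 + sqrt(1 - a_bar) eps + (1 - sqrt(a_bar)) prior
    return a_bar.sqrt() * y0 + (1 - a_bar).sqrt() * eps + (1 - a_bar.sqrt()) * prior

def rbf_mmd(n, m, sigma=1.0):
    # Eq. (5) with a Gaussian (RBF) kernel, one common positive-definite choice
    K = lambda x, y: torch.exp(-torch.cdist(x, y) ** 2 / (2 * sigma**2)).mean()
    return K(n, n) - 2 * K(m, n) + K(m, m)

class TinyEps(torch.nn.Module):
    """Hypothetical stand-in for the conditioned denoising network eps_theta."""
    def __init__(self, n_cls=4, d_img=128):
        super().__init__()
        self.fc = torch.nn.Linear(3 * n_cls + d_img + 1, n_cls)
    def forward(self, rho_x, y_t, priors, t):
        pri = torch.cat(priors, dim=-1)
        if pri.shape[-1] < 2 * y_t.shape[-1]:          # pad single-prior branches
            pri = torch.cat([pri, torch.zeros_like(pri)], dim=-1)
        tt = t.float().expand(y_t.shape[0], 1)
        return self.fc(torch.cat([rho_x, y_t, pri, tt], dim=-1))

def diffmic_loss(eps_theta, rho_x, y0, y_g, y_l, a_bar, t, lam=0.5):
    eps = torch.randn_like(y0)
    y_t   = q_sample_prior(y0, y_g + y_l, a_bar, eps)  # dual priors
    y_t_g = q_sample_prior(y0, y_g, a_bar, eps)        # global prior only
    y_t_l = q_sample_prior(y0, y_l, a_bar, eps)        # local prior only
    l_eps = ((eps - eps_theta(rho_x, y_t, (y_g, y_l), t)) ** 2).mean()  # Eq. (4)
    l_mmd_g = rbf_mmd(eps, eps_theta(rho_x, y_t_g, (y_g,), t))          # Eq. (5)
    l_mmd_l = rbf_mmd(eps, eps_theta(rho_x, y_t_l, (y_l,), t))          # local analogue
    return l_eps + lam * (l_mmd_g + l_mmd_l)           # Eq. (6), lambda = 0.5

# Toy usage with batch size 8 and four classes:
model, B = TinyEps(), 8
loss = diffmic_loss(model, torch.randn(B, 128),
                    torch.nn.functional.one_hot(torch.randint(0, 4, (B,)), 4).float(),
                    torch.softmax(torch.randn(B, 4), -1),
                    torch.softmax(torch.randn(B, 4), -1),
                    a_bar=torch.tensor(0.5), t=torch.tensor(10))
loss.backward()
```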
### Training and Inference Scheme #### 2.3.1 Total loss. By adding the noise estimation loss and the MMD regularization losses, we compute the total loss \(\mathcal{L}_{diff}\) of our denoising network as follows: \[\mathcal{L}_{diff}=\mathcal{L}_{\epsilon}+\lambda(\mathcal{L}_{MMD}^{g}+ \mathcal{L}_{MMD}^{l}), \tag{6}\] where \(\lambda\) is a balancing hyper-parameter, which is empirically set as \(\lambda\)=0.5. Training details. The diffusion model in this study leverages a standard DDPM training process, where the diffusion time step \(t\) is selected from a uniform distribution over \([1,T]\), and the noise is linearly scheduled with \(\beta_{1}=1\times 10^{-4}\) and \(\beta_{T}=0.02\). We adopt ResNet18 as the image encoder \(\rho(\cdot)\). Following [12], we concatenate \(y_{t}\), \(\hat{y}_{g}\), \(\hat{y}_{l}\), and apply a linear layer to obtain the fused vector in the latent space. We perform a Hadamard product between this vector and a timestep embedding to obtain a response embedding conditioned on the timestep. We then perform a Hadamard product between the image feature embedding and the response embedding to integrate these variables, and send the resulting vector through two more fully-connected layers of the same dimension, each of which is first followed by a Hadamard product with a timestep embedding, and lastly through a fully-connected layer whose output dimension equals the number of classes, giving the noise prediction. Note that all fully-connected layers are also followed by a batch normalization layer and a Softplus non-linearity, except the output layer. For the DCG model \(\tau_{D}\), the backbone of its global and local streams is a ResNet. We adopt the standard cross-entropy loss as the objective of the DCG model. We jointly train the denoising diffusion model and the DCG model after pretraining the DCG model for 10 warm-up epochs, thereby resulting in an end-to-end DiffMIC for medical image classification. Inference stage. As displayed in Figure 1 (b), given an input image \(x\), we first feed it into the DCG model to obtain the dual priors \(\hat{y}_{g},\hat{y}_{l}\). Then, following the pipeline of DDPM, the final prediction \(\hat{y}_{0}\) is iteratively denoised from the random prediction \(y_{T}\) using the trained UNet conditioned by the dual priors \(\hat{y}_{g},\hat{y}_{l}\) and the image feature embedding \(\rho(x)\). ## 3 Experiments ### Datasets and Evaluation We evaluate the effectiveness of our network on an in-home dataset and two public datasets, i.e., PMG2000, HAM10000 [27], and APTOS2019 [16]. **(a) PMG2000.** We collect and annotate a benchmark dataset (denoted as PMG2000) for placental maturity grading (PMG) with four categories. PMG2000 is composed of 2,098 ultrasound images, and we randomly split the whole data into a training set and a testing set with a ratio of 8:2. **(b) HAM10000.** HAM10000 [27] is from the Skin Lesion Analysis Toward Melanoma Detection 2018 challenge, and it contains 10,015 skin lesion images with 7 predefined categories. **(c) APTOS2019.** In APTOS2019 [16], there are 3,662 labeled fundus images for grading diabetic retinopathy into five categories. Following the same protocol as in [10], we split HAM10000 and APTOS2019 into a train set and a test set with the same ratio of 7:3. These three datasets cover different medical image modalities: PMG2000 contains gray-scale, class-balanced ultrasound images; HAM10000 contains colorful but class-imbalanced dermatoscopic images; and APTOS2019 is another class-imbalanced dataset with colorful fundus images. 
Moreover, we adopt two widely-used metrics (i.e., Accuracy and F1-score) for quantitatively comparing our network with state-of-the-art methods. ### Implementation Details All the experiments are implemented with PyTorch on one NVIDIA RTX 3090 GPU. We center-crop each image and then resize the spatial resolution of the cropped image to 224\(\times\)224. Random flipping and rotation are applied for data augmentation in the training process. Our network is trained in an end-to-end manner using the Adam optimizer with a batch size of 32. The initial learning rate is set as 1\(\times\)10\({}^{-3}\) for the denoising model U-Net, and 2\(\times\)10\({}^{-4}\) for the DCG model (see Section 2.1) when training the whole network in all experiments. Following [20], the number of training epochs is set as 1,000 for all three datasets. In inference, we empirically set the total diffusion time step \(T\) as 100 for PMG2000, 250 for HAM10000, and 60 for APTOS2019, which is much smaller than in most existing works [14, 12]. The average running time of our DiffMIC is about 0.056 seconds for classifying an image with a spatial resolution of 224\(\times\)224. ### Comparison with State-of-the-art Methods In Table 1(a), we compare our DiffMIC against several state-of-the-art CNN- and transformer-based networks, including ResNet, Vision Transformer (ViT), Swin Transformer (Swin), Pyramid Transformer (PVT), and a medical image classification method (i.e., GMIC) on PMG2000. Among these methods, PVT achieves the largest Accuracy of 0.907 and the largest F1-score of 0.902. More importantly, our method further outperforms PVT, improving the Accuracy from 0.907 to 0.931 and the F1-score from 0.902 to 0.926. Table 1: Quantitative comparison to state-of-the-art methods on three classification datasets. The best results are marked in bold font. Note that both HAM10000 and APTOS2019 have a class imbalance issue. Hence, we compare our DiffMIC against state-of-the-art long-tailed medical image classification methods, and report the comparison results in Table 1(b). For HAM10000, our method produces a promising improvement over the second-best method ProCo of 0.019 and 0.053 in terms of Accuracy and F1-score, respectively. For APTOS2019, our method obtains a considerable improvement over ProCo of 0.021 and 0.042 in Accuracy and F1-score, respectively. ### Ablation Study We conduct ablation studies to evaluate the effectiveness of the major modules of our network. To do so, we build three baseline networks from our method. The first baseline (denoted as "basic") is obtained by removing all diffusion operations and the MMD regularization loss from our network, i.e., "basic" is equivalent to the classical ResNet18. Then, we apply the vanilla diffusion process to "basic" to construct another baseline network (denoted as "C1"), and further add our dual-granularity conditional guidance into the diffusion process to build a baseline network denoted as "C2". Hence, "C2" is equal to our network without the MMD regularization loss. 
Table 2 reports the Accuracy and F1-score results of our method and the three baseline networks on our PMG2000 dataset.

Table 2: Effectiveness of each module in our DiffMIC on the PMG2000 dataset.

| | Diffusion | DCG | MMD-reg | Accuracy | F1-score |
| --- | --- | --- | --- | --- | --- |
| basic | - | - | - | 0.879 | 0.881 |
| C1 | ✓ | - | - | 0.906 | 0.899 |
| C2 | ✓ | ✓ | - | 0.920 | 0.914 |
| **Our method** | ✓ | ✓ | ✓ | **0.931** | **0.926** |

Figure 2: t-SNE visualization of the denoised feature embeddings obtained during the diffusion reverse process at inference on the three datasets. \(t\) is the current diffusion time step. As the reverse process progresses, the noise is gradually removed, yielding a clear distribution of classes; see the last column (please zoom in). Compared to "basic", "C1" has an Accuracy improvement of 0.027 and an F1-score improvement of 0.018, which indicates that the diffusion mechanism can learn more discriminative features for medical image classification, thereby improving the PMG performance. Moreover, the better Accuracy and F1-score results of "C2" over "C1" demonstrate that introducing our dual-granularity conditional guidance into the vanilla diffusion process benefits the PMG performance. Furthermore, our method outperforms "C2" in terms of Accuracy and F1-score, which indicates that exploring the MMD regularization loss in the diffusion process can further help to enhance the PMG results. ### Visualization of our Diffusion Procedure To illustrate the diffusion reverse process guided by our dual-granularity conditional encoding, we used the t-SNE tool to visualize the denoised feature embeddings at consecutive time steps. Figure 2 presents the results of this process on all three datasets. As the reverse process progresses, the denoising diffusion model gradually removes noise from the feature representation, resulting in a clearer distribution of classes emerging from the Gaussian distribution. The total number of time steps required for inference depends on the complexity of the dataset. ## 4 Conclusion This work presents a dual-guidance diffusion network (DiffMIC) for boosting medical image classification. The main idea of our DiffMIC is to introduce dual-granularity conditional guidance over the vanilla DDPM, and to enforce condition-specific MMD regularization to improve classification performance. Experimental results on three medical image classification datasets with different image modalities show the superior performance of our network over state-of-the-art methods. As the first diffusion-based model for general medical image classification, our DiffMIC has the potential to serve as an essential baseline for future research in this area.
2310.10728
Post-merger gravitational-wave signal from neutron-star binaries: a new look at an old problem
The spectral properties of the post-merger gravitational-wave signal from a binary of neutron stars encode a variety of information about the features of the system and of the equation of state describing matter around and above nuclear saturation density. Characterising the properties of such a signal is an ``old'' problem, which first emerged when a number of frequencies were shown to be related to the properties of the binary through ``quasi-universal'' relations. Here we take a new look at this old problem by computing the properties of the signal in terms of the Weyl scalar $\psi_4$. In this way, and using a database of more than 100 simulations, we provide the first evidence for a new instantaneous frequency, $f^{\psi_4}_0$, associated with the instant of quasi time-symmetry in the postmerger dynamics, and which also follows a quasi-universal relation. We also derive a new quasi-universal relation for the merger frequency $f^{h}_{\rm mer}$, which provides a description of the data that is four times more accurate than previous expressions while requiring fewer fitting coefficients. Finally, consistently with the findings of numerous studies before ours, and using an enlarged ensemble of binary systems, we point out that the $\ell=2, m=1$ gravitational-wave mode could become comparable with the traditional $\ell=2, m=2$ mode on sufficiently long timescales, with strain amplitudes in a ratio $|h^{21}|/|h^{22}| \sim 0.1-1$ under generic orientations of the binary, which could be measured by present detectors for signals with large signal-to-noise ratio or by third-generation detectors for generic signals should no collapse occur.
Konrad Topolski, Samuel Tootle, Luciano Rezzolla
2023-10-16T18:00:05Z
http://arxiv.org/abs/2310.10728v1
# Post-merger gravitational-wave signal from neutron-star binaries: a new look at an old problem ###### Abstract The spectral properties of the post-merger gravitational-wave signal from a binary of neutron stars encode a variety of information about the features of the system and of the equation of state describing matter around and above nuclear saturation density. Characterising the properties of such a signal is an "old" problem, which first emerged when a number of frequencies were shown to be related to the properties of the binary through "quasi-universal" relations. Here we take a new look at this old problem by computing the properties of the signal in terms of the Weyl scalar \(\psi_{4}\). In this way, and using a database of more than 100 simulations, we provide the first evidence for a new instantaneous frequency, \(f_{0}^{\psi_{4}}\), associated with the instant of quasi time-symmetry in the postmerger dynamics, and which also follows a quasi-universal relation. We also derive a new quasi-universal relation for the merger frequency \(f_{\rm mer}^{h}\), which provides a description of the data that is four times more accurate than previous expressions while requiring fewer fitting coefficients. Finally, consistently with the findings of numerous studies before ours, and using an enlarged ensemble of binary systems, we point out that the \(\ell=2,m=1\) gravitational-wave mode could become comparable with the traditional \(\ell=2,m=2\) mode on sufficiently long timescales, with strain amplitudes in a ratio \(|h^{21}|/|h^{22}|\sim 0.1-1\) under generic orientations of the binary, which could be measured by present detectors for signals with large signal-to-noise ratio or by third-generation detectors for generic signals should no collapse occur. Neutron stars (1108), Gravitational waves (678), Compact binary stars (283) ## 1 Introduction The observation of a gravitational-wave (GW) signal from the binary neutron-star (BNS) merger event GW170817 (The LIGO Scientific Collaboration & The Virgo Collaboration, 2017), and the detection of an electromagnetic (EM) counterpart, has testified to the enormous potential of GW astronomy. Starting from early works with simplified equations of state (EOSs) (see, e.g., Shibata et al., 2005; Anderson et al., 2008; Liu et al., 2008; Baiotti et al., 2008; Hotokezaka et al., 2011), increasingly more comprehensive simulations of these events, which involve an ever more detailed description of the microphysics (Bauswein et al., 2019; De Pietri et al., 2019; Gieg et al., 2019; Tootle et al., 2022; Most et al., 2022; Camilletti et al., 2022; Ujevic et al., 2023), of the magnetic-field evolution (Rezzolla et al., 2011; Dionysopoulou et al., 2013; Ciolfi et al., 2019; Sun et al., 2022; Zappa et al., 2023) and its amplification (Kiuchi et al., 2015; Palenzuela et al., 2022; Chabanov et al., 2023), and of the transport of neutrinos (Foucart et al., 2022; Zappa et al., 2023), allow one to make predictions from the early inspiral up to the long-term evolution of the postmerger remnant (De Pietri et al., 2020; Kiuchi et al., 2022). During each stage in the evolution of the binary, the features of the GW and EM signals change in a characteristic manner, encoding information on the properties of the constituent neutron stars and of the hypermassive neutron star (HMNS) produced after the merger and, hence, on the governing EOS. 
Characterising the properties of the post-merger GW signal is a rather "old" problem, which first emerged when a number of characteristic frequencies were shown to be related to the properties of the binary through _quasi-universal_ relations, i.e., relations that are almost independent of the specific EOS. These relations have been suggested for the GW frequency at merger \(f_{\rm mer}\) (Read et al., 2013; Bernuzzi et al., 2014; Takami et al., 2015; Rezzolla & Takami, 2016; Most et al., 2019; Bauswein et al., 2019; Weih et al., 2020; Gonzalez et al., 2022), the dominant frequency in the postmerger spectrum \(f_{2}\) (see, e.g., Oechslin & Janka, 2007; Bauswein & Janka, 2012; Read et al., 2013; Rezzolla & Takami, 2016; Gonzalez et al., 2022), and other frequencies identifiable in the transient period right after the merger (Bauswein & Stergioulas, 2015; Takami et al., 2015; Rezzolla & Takami, 2016). Fits to these quasi-universal relations have been employed in a number of studies (see, e.g., (Bauswein et al., 2016; Baiotti and Rezzolla, 2017) for some reviews). These EOS-insensitive relations can help enormously in constraining the EOS of matter at nuclear densities, marking the possible appearance of phase transitions (Most et al., 2019; Weih et al., 2020; Liebling et al., 2021; Prakash et al., 2021; Fujimoto et al., 2023; Tootle et al., 2022; Espino et al., 2023), and informing waveform models (Bose et al., 2018; Breschi et al., 2019); however, see Raithel and Most (2022) for possible violations of these relations. Another relatively "old" problem in the characterisation of the GW signal from BNS mergers is the one about the relative weight of the lower-order multipole \(\ell=2,m=1\). Numerical simulations have highlighted that the HMNS can be subject to a nonaxisymmetric instability that powers the growth of the \(\ell=2,m=1\) mode of the rest-mass density distribution and, hence, of the corresponding GW signal (see, e.g., East et al., 2015; Lehner et al., 2016, 2016; Radice et al., 2016; East et al., 2019; Papenfort et al., 2022). This mode can already be seeded by the initial asymmetry of the system in the unequal-mass case, or develop through the shearing of the contact layers of the binary constituents upon merger. While the \(\ell=2,m=2\) GW mode is the primary contributor to the GW signal, it is damped faster than the other modes, leading to interesting secular behaviours. We here take a new look at both of these old problems by considering the spectral properties of the GW signal when computed in terms of the Weyl scalar \(\psi_{4}\). In this way, we are able to find three novel features that can enrich our understanding of the GW signal from BNS mergers. In particular, we first highlight the presence of a new instantaneous frequency, which we dub \(f_{0}^{\psi_{4}}\), that can be associated with the instant of quasi time-symmetry in the postmerger dynamics. Interestingly, we find that a quasi-universal relation exists for \(f_{0}^{\psi_{4}}\) as a function of the tidal deformability \(\kappa_{2}^{T}\) and of the binary mass ratio \(q\). Second, by employing a large number of BNS simulations, some of which are taken from the CoRe database (Gonzalez et al., 2022), we obtain a new quasi-universal relation for \(f_{\rm mer}\) as a function of \(\kappa_{2}^{T}\) and \(q\) that not only requires a smaller number of coefficients, but also provides a more accurate description of the data. 
Finally, as already suggested in (Papenfort et al., 2022), we provide evidence that the \(\ell=2,m=1\) GW mode could become the most powerful mode on secular timescales after the merger. ## 2 Numerical and Physical Framework Our analysis is based on the GW signal computed via numerical simulations of BNS mergers in full general relativity, performed with the codes described in (Radice et al., 2014, 2014; Most et al., 2019, 2021; Papenfort et al., 2021; Tootle et al., 2021) and using a number of different EOSs (see below). In addition, we employ part of the data contained in the CoRe database (Gonzalez et al., 2022), from which we select only the highest-resolution simulations. The combined data of \(118\) irrotational binaries covers the range \(q:=M_{2}/M_{1}\in[0.485,1]\) in the mass ratio, \(M:=M_{1}+M_{2}\in[2.4,3.33]\)\(M_{\odot}\) in the total ADM mass at infinite separation, and \(\kappa_{2}^{T}\in[33,458]\) in the tidal deformability. The dataset comprises a variety of EOSs, including some with quark matter (Prakash et al., 2021; Logoteta, Domenico et al., 2021; Alford et al., 2005; Demircik et al., 2022; Tootle et al., 2022). A crucial role in our analysis is played by the use of the Weyl scalar \(\psi_{4}\) in place of the standard dimensionless strain polarisations \(h_{+,\times}\). The two quantities are mathematically equivalent and related by two time derivatives (i.e., \(\psi_{4}=\partial_{t}^{2}(h_{+}-ih_{\times})\); see (Bishop and Rezzolla, 2016) for a review). However, while \(\psi_{4}\) is computed directly from the simulations, \(h_{+,\times}\) are obtained after a nontrivial double time integration (the transformation from \(h_{+,\times}\) to \(\psi_{4}\) is trivial as it involves derivatives and not integrals; see (Calderon Bustillo et al., 2022) for a data-analysis framework based on \(\psi_{4}\), which can obviously be employed for all types of compact-object binaries). More importantly, the evolution of the GW frequency from \(\psi_{4}\) is less rapid than from the strain, i.e., \(\partial_{t}\ln f_{\rm GW}^{\psi_{4}}(t)\ll\partial_{t}\ln f_{\rm GW}^{h}(t)\), thus making it easier and more robust to characterise the features of the \(\psi_{4}\) GW signal. In this sense, while \(\psi_{4}\) and \(h_{+,\times}\) are related by simple time derivatives, the analysis carried out with the former does provide additional information, as it allows for the determination of properties that are harder to capture with the latter. ## 3 Old and New Frequencies Figure 1 reports the complete information of the GW signal from a representative binary in our sample. Using a 3D representation, we report on the left the \(\ell=2,m=2\) mode of the GW signal \(\psi_{4}(t)\) (light red) and its amplitude \(|\psi_{4}(t)|\) (dark red), the instantaneous frequency \(f_{\rm GW}^{\psi_{4}}(t)\) (black), and the power spectral density (PSD) \(\sqrt{2}f\,\tilde{\psi}_{4}\) (blue) as a function of the frequency \(f\) (see Rezzolla and Takami, 2016, for details on the definition). Also indicated are the three main frequencies in our analysis: the frequency at merger \(f_{\rm mer}^{\psi_{4}}\), i.e., the GW frequency at the _first_ maximum of \(|\psi_{4}|\), the frequency at quasi time-symmetry \(f_{0}^{\psi_{4}}\), i.e., the GW frequency at the _first_ minimum of \(|\psi_{4}|\), and the dominant frequency of the HMNS emission \(f_{2}^{\psi_{4}}\). 
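Operationally, these quantities can be read off a complex \(\psi_{4}^{22}\) timeseries directly; a minimal sketch (array names are illustrative, and robust peak-finding on real, noisy data would need more care than shown here):

```python
import numpy as np

def instantaneous_frequency(psi4, dt):
    """f_GW(t) from the unwrapped phase of the complex l=2, m=2 mode of psi4."""
    phase = np.unwrap(np.angle(psi4))
    return np.abs(np.gradient(phase, dt)) / (2.0 * np.pi)

def merger_and_f0(psi4, dt):
    """Return (f_mer, f_0): f_GW at the first local maximum of |psi4| (merger)
    and at the first local minimum after it (quasi time-symmetry)."""
    amp = np.abs(psi4)
    f_gw = instantaneous_frequency(psi4, dt)
    d = np.diff(amp)
    i_mer = 1 + np.where((d[:-1] > 0) & (d[1:] <= 0))[0][0]          # first local max
    d2 = np.diff(amp[i_mer:])
    i_0 = i_mer + 1 + np.where((d2[:-1] < 0) & (d2[1:] >= 0))[0][0]  # first local min
    return f_gw[i_mer], f_gw[i_0]
```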
To help the eye, we also mark with lines the corresponding times \(t_{\rm mer}^{\psi_{4}}\) (dashed), \(t_{0}^{\psi_{4}}\) (dotted), and frequencies (dashed, dotted and dot-dashed, respectively). The right panel of Fig. 1 shows the same quantities but when computed from the strain. By comparing the black lines in the left and right panels it is straightforward to realise that the variation of \(f_{\rm GW}^{h}(t)\) is much larger than that of \(f_{\rm GW}^{\psi_{4}}(t)\) over the same interval of \(\sim 1\,\)ms after the merger. It is this very rapid change in \(f_{\rm GW}^{h}(t)\) that makes the identification of \(f_{0}^{h}\) extremely difficult, if not impossible. Note also that while in both representations \(f_{\rm mer}<f_{2}<f_{0}\), the numerical values of the various quantities are similar but not identical. However, \(f_{2}^{\psi_{4}}\simeq f_{2}^{h}\) to very good precision (the largest differences are \(\lesssim 4\%\)) simply because this frequency is relative to a mostly monochromatic GW signal; hence, hereafter we simply assume \(f_{2}^{\psi_{4}}=f_{2}^{h}=:f_{2}\). Finally, because in all representations \(f_{0}\) is the largest frequency measured, even a crude measure of the largest frequency in the signal will serve as a first estimate of \(f_{0}\).1 Footnote 1: From a numerical point of view, we note that the \(f_{0}\) frequency is always below \(\sim 4\,{\rm kHz}\) and is therefore much smaller than the typical sampling frequency of the \(\psi_{4}\) scalar, which is \(\simeq 80-100\,{\rm kHz}\). Besides marking the time of the first amplitude minimum, from a physical point of view \(t_{0}^{\psi_{4}}\) corresponds to the time when the two stellar cores have reached the minimum separation and are about to bounce off each other. At this instant, the corresponding amplitude of \(\psi_{4}\) shows a clear minimum, while the instantaneous GW frequency shows a local maximum [the discussion in the _Supplemental Material_ (SM) illustrates this behaviour very clearly by employing the toy model introduced in (Takami et al., 2015)]. ## 4 Quasi-universal relations We next proceed to the derivation of quasi-universal relations that can be employed to deduce the physical properties of the binary. Following the approach already started in (Takami et al., 2014, 2015; Rezzolla and Takami, 2016), which captures the logarithmic variation of a properly rescaled mass and frequency, we express the relevant frequencies in terms of a power expansion in the mass ratio \(q\), i.e., \(\log_{10}\left[(M/M_{\odot})(f/{\rm Hz})\right]=a_{0}+(b_{0}+b_{1}q+b_{2}q^{2} )\left(\kappa_{2}^{T}\right)^{n}\), where \(f\) is any of the frequencies we consider (i.e., \(f_{\rm mer}^{\psi_{4}},f_{\rm mer}^{h},f_{0}^{\psi_{4}},f_{2}\)) and \(a_{0},b_{0},b_{1},b_{2},n\) are fitting coefficients. Hereafter, we will refer to this generic fitting function as \(\mathcal{F}_{1}\). Figure 2 provides a 3D representation of the measured GW frequencies \(f_{\rm mer}^{\psi_{4}}\) and \(f_{0}^{\psi_{4}}\) as a function of \(\kappa_{2}^{T}\) and \(q\) (see also the SM for fits to \(f_{\rm mer}^{h}\) and \(f_{2}\)). Also reported is the fitting surface described by \(\mathcal{F}_{1}\), with the best-fit parameters listed in Table 2 of the SM for all the frequencies considered. Furthermore, for each frequency we report the relative error of the fit along its two principal directions, \(\kappa_{2}^{T}\) and \(q\). 
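Calibrating \(\mathcal{F}_{1}\) amounts to a standard nonlinear least-squares problem; a sketch using synthetic placeholder data (the coefficient values below are arbitrary stand-ins, not those of Table 2 in the SM):

```python
import numpy as np
from scipy.optimize import curve_fit

def F1(X, a0, b0, b1, b2, n):
    """log10[(M/Msun)(f/Hz)] = a0 + (b0 + b1 q + b2 q^2) (kappa2T)^n."""
    kappa2T, q = X
    return a0 + (b0 + b1 * q + b2 * q**2) * kappa2T**n

# Placeholder data standing in for the 118 binaries:
rng = np.random.default_rng(1)
kappa2T = rng.uniform(33.0, 458.0, 118)   # tidal-deformability range of the set
q = rng.uniform(0.485, 1.0, 118)          # mass-ratio range of the set
y = F1((kappa2T, q), 4.2, -0.1, 0.03, -0.02, 0.25) + rng.normal(0, 0.005, 118)

popt, pcov = curve_fit(F1, (kappa2T, q), y, p0=[4.0, -0.1, 0.0, 0.0, 0.3])
rel_err = np.abs(10 ** F1((kappa2T, q), *popt) - 10 ** y) / 10 ** y
print(popt, rel_err.mean())
```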
Despite their simple form, our fits for \(f_{\rm mer}^{\psi_{4}}\) and \(f_{0}^{\psi_{4}}\) capture the data very well, showing average relative errors that are \(\lesssim 1\%\) and maximal relative errors \(\lesssim 2\%\) for unequal-mass binaries. It is interesting to compare our functional fitting form \(\mathcal{F}_{1}\) for \(f_{\rm mer}^{h}\), which needs only five fitting coefficients, with the one proposed in (Breschi et al., 2022) for irrotational binaries, which we will refer to as \(\mathcal{F}_{2}\) and which requires twice as many coefficients. In order to compare \(\mathcal{F}_{1}\) and \(\mathcal{F}_{2}\), it is first necessary to distinguish the "pipeline", that is, the technical procedure employed to extract the frequencies from the data. We thus indicate with \(\mathcal{P}_{1}\) the pipeline discussed above and with \(\mathcal{P}_{2}\) the one released in (Gonzalez et al., 2022). Naturally, each fitting function can be applied to either pipeline, so that \(\mathcal{F}_{1}(\mathcal{P}_{1})\) indicates the application of our fitting form to data computed with our pipeline. In Fig. 3, we present the relative differences between the measured frequencies for the 118 binaries considered and the corresponding values from the fit, with different rows referring to the four possibilities. Overall, the comparison in Fig. 3 shows that \(\mathcal{F}_{1}\) leads to smaller relative errors, with a maximum residual error of \(\sim 2\%\) and an average residual error that is between two and four times smaller than for \(\mathcal{F}_{2}\). As a cautionary note, we should remark that we have specialised the fitting function \(\mathcal{F}_{2}\), which is more general and can include spinning and eccentric binaries, to the case relevant for this comparison, namely irrotational binaries. Hence, our conclusions apply only to such binaries. Figure 1: 3D representation of the complete information in the GW signal from a representative binary (Tootle et al., 2022). The left panel shows the \(\ell=2,m=2\) mode of the GW signal \(\psi_{4}(t)\) (light red) and its amplitude \(|\psi_{4}(t)|\) (dark red), the instantaneous frequency \(f_{\rm GW}(t)\) (black), and the power spectral density (PSD) \(\sqrt{2}f\,\tilde{\psi}_{4}\) (blue) as a function of the frequency \(f\). Also indicated are the frequency at merger \(f_{\rm mer}^{\psi_{4}}\), the frequency at quasi time-symmetry \(f_{0}^{\psi_{4}}\), the dominant frequency of the HMNS emission \(f_{2}\), and the corresponding times where these frequencies appear. The right panel shows the same quantities but when computed in terms of the GW strain. ## 5 Secular GW emission The last point we cover regards the relative strengths of the \(\ell=2,m=2\) and \(\ell=2,m=1\) GW modes. The importance of the latter was first pointed out in (Paschalidis et al., 2015; Lehner et al., 2016), and the mode is produced by corresponding asymmetries in the rest-mass density. An \(m=1\) deformation is well known to emerge in isolated stars (Chandrasekhar, 1969; Shibata et al., 2000; Baiotti et al., 2007; Franci et al., 2013; Loffler et al., 2015) with a sufficiently large amount of rotational kinetic energy \(T\), namely when the ratio \(T/|W|\), where \(W\) is the gravitational binding energy, exceeds a certain threshold (in a systematic analysis, Baiotti et al., 2007, have shown that this happens for \(T/|W|\gtrsim 0.25\)). 
In such stars, the \(m=1\) mode in the rest-mass density would grow exponentially, reaching equipartition with the \(m=2\) mode, and subsequently represent the largest deformation. A similar phenomenology seems to be present also for the postmerger remnant, as already hinted in (Papenfort et al., 2022), and as shown more clearly by the numerous binaries considered here. Figure 4 reports the evolution of the ratio of the GW amplitudes in the two modes \(|\psi_{4}^{21}|/|\psi_{4}^{22}|\), with the left panel showing a selected set of binaries and with the right panel reporting all binaries (the raw timeseries are smoothed over a window \(\Delta t=0.5\,\mathrm{ms}\)). While only a few binaries in the sample reach \(|\psi_{4}^{21}|/|\psi_{4}^{22}|=1\) within the simulated time, the large majority exhibits a trend that we try to capture by extrapolating linearly in time after averaging the last \(5\,\mathrm{ms}\) of the evolution. Using a colour code to distinguish binaries with different \(q\), it becomes clear that the initial strength of the \(m=1\) mode is inversely proportional to the mass ratio, so that for an extremely asymmetric binary, i.e., \(q\lesssim 0.6\), \(|\psi_{4}^{21}|/|\psi_{4}^{22}|\) can be more than two orders of magnitude larger than for equal-mass binaries. At the same time, the initial mode-amplitude ratio does not depend on the dimensionless spin at merger (Papenfort et al., 2022). Figure 3: Relative differences between the measured \(f_{\mathrm{mer}}^{h}\) frequencies and the corresponding values from the fit. Different rows refer to the four different possibilities of applying the fitting function \(\mathcal{F}_{i}\) to the data pipeline \(\mathcal{P}_{i}\) with \(i=1,2\); arrows indicate relative differences above \(10\%\). Note how the new fitting function and pipeline, i.e., \(\mathcal{F}_{1}(\mathcal{P}_{1})\), provide errors that are about four times smaller. Figure 2: 3D representation of the GW frequencies \(f_{\mathrm{mer}}^{\psi_{4}}\) and \(f_{0}^{\psi_{4}}\) as measured from the data (coloured circles) and presented as a function of \(\kappa_{2}^{T}\) and \(q\). Also reported are the best-fit surfaces, while shown below are the relative errors of the fit in the two principal directions. Stars mark binaries modelled with the V-QCD EOS and thus having a strong first-order phase transition (Demircik et al., 2022; Tootle et al., 2022). Unsurprisingly, a similar behaviour can be observed when computing the signal-to-noise ratio (SNR) in the \(m=2\) and \(m=1\) modes. Specifically, by computing a time-windowed SNR we can quantify the growing contribution of the subdominant mode in a similar fashion to the estimates given by (Lehner et al., 2016) and find that it increases to \(\simeq\mathcal{O}(1)\) if the \(m=1\) mode is not suppressed (see SM for details). If confirmed by systematic, long-term evolutions, this finding would change the standard picture in which the largest signal in the BNS postmerger is to be expected at the \(f_{2}\) frequency. Rather, the trend reported here suggests that the most powerful feature in the PSD for long-lived HMNSs may actually appear at a frequency \(\simeq\frac{1}{2}f_{2}\). Because this falls in a more favourable region of the detectors' sensitivity curves, and assuming a detection angle not favouring either of the modes, the corresponding signal-to-noise ratio will grow proportionally to the ratio between the detector noise at \(\frac{1}{2}f_{2}\) and \(f_{2}\), hence by a factor of \(\sim 2\) for LIGO or Virgo. 
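The smoothing and linear extrapolation used in Fig. 4 can be reproduced along the following lines (a sketch with toy data; the real analysis operates on the simulated \(|\psi_{4}^{21}|/|\psi_{4}^{22}|\) timeseries):

```python
import numpy as np

def ratio_crossing_time(t, ratio, dt_smooth=0.5, t_fit=5.0):
    """Smooth the mode-amplitude ratio over a window dt_smooth (ms), fit the
    last t_fit (ms) linearly, and extrapolate when the ratio reaches unity."""
    dt = t[1] - t[0]
    w = max(1, int(round(dt_smooth / dt)))
    smooth = np.convolve(ratio, np.ones(w) / w, mode="same")
    mask = t >= t[-1] - t_fit
    slope, intercept = np.polyfit(t[mask], smooth[mask], 1)
    return (1.0 - intercept) / slope if slope > 0 else np.inf

# Toy timeseries standing in for a simulated |psi4^21|/|psi4^22| ratio:
t = np.linspace(0.0, 20.0, 4000)                 # ms after merger
ratio = 0.02 * np.exp(t / 8.0) * (1.0 + 0.1 * np.sin(40.0 * t))
print(ratio_crossing_time(t, ratio))             # extrapolated crossing time (ms)
```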
While promising, this prospect should be accompanied by some caveats. First, it is possible that the growth rate may be weaker than the one estimated here. Second, the extrapolation assumes that the HMNS will not collapse to a black hole before reaching \(|\psi_{4}^{21}|/|\psi_{4}^{22}|\sim 1\), and while this is likely for soft EOSs and low-mass binaries, it may not happen if the EOS is stiff and the binary massive. Third, all binaries in our sample have zero deformation. A robust conclusion that can be inferred from the results shown in Fig. 4 is that remnants with a long lifetime, as was likely the case for GW170817 (Rezzolla et al., 2018; Gill et al., 2019; Murguia-Berthier et al., 2021), will reasonably have the \(\ell=2,m=1\) as the least-damped mode. Hence, considerable spectral power should be present at the frequencies \(\frac{1}{2}f_{2}\) and \(f_{2}\), with the main strain amplitudes in a ratio \(|h^{21}|/|h^{22}|\sim 0.1-1\) for generic orientations (e.g., for an inclination of \(2\arctan(1/2)\sim 53^{\circ}\) the two modes have the same spin-weighted spherical-harmonics coefficients).

## 6 Conclusion

Leveraging a rich literature developed over the last ten years on this subject, we have considered again the spectral properties of the signal when computed in terms of the Weyl scalar \(\psi_{4}\) rather than in terms of the GW strain \(h_{+,\times}\). Exploiting the better behaviour of \(\psi_{4}\), we were able to highlight three novel features that can be used to better infer physical information from the detected signal. First, by employing a large number of simulations spanning a considerable set of EOSs and mass ratios, we have shown the existence of a new instantaneous frequency, \(f_{0}^{\psi_{4}}\), that can be associated with the instant of quasi time-symmetry in the postmerger dynamics. This corresponds to when the stellar cores in the merger remnant have reached their minimum separation and are about to bounce off each other. Just like other spectral frequencies of the BNS GW signal, \(f_{0}^{\psi_{4}}\) also follows a quasi-universal behaviour as a function of the tidal deformability \(\kappa_{2}^{T}\) and of the binary mass ratio \(q\), for which we provide a simple and yet accurate analytical expression. Second, we have obtained a new quasi-universal relation for the merger frequency \(f_{\rm mer}^{h}\) as a function of \(\kappa_{2}^{T}\) and \(q\). The new expression not only requires a smaller number of fitting coefficients than alternative expressions in the literature, but it also provides a more accurate description of the data, with a residual error that is four times smaller on average. Finally, we have pointed out the evidence that the \(\ell=2,m=1\) mode could become the most powerful GW mode on sufficiently long timescales, with strain amplitudes for the dominant modes that are in a ratio \(|h^{21}|/|h^{22}|\sim 0.1-1\). Should this mode not be suppressed by the collapse of the HMNS to a black hole or by other dissipative effects such as magnetic fields, considerable spectral power should be present at frequencies \(\frac{1}{2}f_{2}\), where it could be detected in conditions of smaller signal-to-noise ratios or by third-generation detectors.
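As a quick numerical aside, the inclination quoted above for equal mode coefficients can be checked directly from the standard closed-form \(s=-2\) spin-weighted spherical harmonics; the sketch below assumes those textbook expressions and drops the azimuthal phases.

```python
# Check: |Y^{-2}_{21}| = |Y^{-2}_{22}| at inclination iota = 2*arctan(1/2).
import numpy as np

def sY22(iota):  # |s=-2 Y_{22}|, azimuthal phase dropped
    return np.sqrt(5.0 / (64.0 * np.pi)) * (1.0 + np.cos(iota)) ** 2

def sY21(iota):  # |s=-2 Y_{21}|, azimuthal phase dropped
    return np.sqrt(5.0 / (16.0 * np.pi)) * np.sin(iota) * (1.0 + np.cos(iota))

iota = 2.0 * np.arctan(0.5)
print(np.degrees(iota))         # -> 53.13 degrees
print(sY21(iota) / sY22(iota))  # -> 1.0: the two mode coefficients coincide
```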
The results presented here can be improved by enlarging the number of BNS simulations considered, by increasing the variance in the microphysical description (e.g., including simulations with magnetic fields and neutrino transport), by performing additional long-term evolutions, and by extending the fitting approach to binaries with spins and eccentricity. We will explore these extensions in future work.

Figure 4: Evolution of the ratio of the two main GW modes \(|\psi_{4}^{21}|/|\psi_{4}^{22}|\), with the left panel showing a selected set of binaries to highlight the behaviour for different mass ratios (colour code) and the right one reporting all of the binaries. Black dotted horizontal lines are used to mark the position when the two modes have equal amplitudes, while straight coloured lines report a linear extrapolation after averaging the last \(5\,{\rm ms}\) of the evolution; note that the large majority of binaries exhibits a growing trend that is maintained throughout the computed evolution.

_Data policy._ The relevant data that supports the findings of this paper is available from the first author and can be shared upon reasonable request.

We thank K. Chakravarti, K. Takami, C. Ecker, and C. Musolino for useful input and discussions. Support comes from the State of Hesse within the Research Cluster ELEMENTS (Project ID 500/10.006). LR acknowledges funding by the ERC Advanced Grant "JETSET: Launching, propagation and emission of relativistic jets from binary mergers and across mass scales" (Grant No. 884631). The simulations from which parts of the used data are derived were performed on HPE Apollo HAWK at the High Performance Computing Center Stuttgart (HLRS) under the grants BNSMC and BBHDISKS, and on SuperMUC at the Leibniz Supercomputing Centre. Software: Einstein Toolkit (Haas et al., 2020), Carpet (Schnetter et al., 2004), FIL (Most et al., 2019), FUKA (Papenfort et al., 2021), Kadath (Grandclement, 2010), Watpy (The CoRe Collaboration, 2022), Kuibit (Bozzola, 2021), Mathematica (Wolfram Research, 2020).
2310.08167
Multiclass Classification of Policy Documents with Large Language Models
Classifying policy documents into policy issue topics has been a long-time effort in political science and communication disciplines. Efforts to automate text classification processes for social science research purposes have so far achieved remarkable results, but there is still much room for progress. In this work, we test the prediction performance of an alternative strategy, which requires much less human involvement than full manual coding. We use the GPT 3.5 and GPT 4 models of OpenAI, which are pre-trained, instruction-tuned Large Language Models (LLMs), to classify congressional bills and congressional hearings into the Comparative Agendas Project's 21 major policy issue topics. We propose three use-case scenarios and estimate overall accuracies ranging from 58% to 83% depending on the scenario and GPT model employed. The three scenarios aim at minimal, moderate, and major human interference, respectively. Overall, our results point towards the insufficiency of complete reliance on GPT with minimal human intervention, an increasing accuracy along with the human effort exerted, and a surprisingly high accuracy achieved in the most humanly demanding use-case. However, the superior use-case achieved 83% accuracy on the 65% of the data in which the two models agreed, suggesting that a similar approach to ours can be relatively easily implemented and allow for mostly automated coding of a majority of a given dataset. This could free up resources, allowing manual human coding of the remaining 35% of the data to achieve an overall higher level of accuracy while reducing costs significantly.
Erkan Gunes, Christoffer Koch Florczak
2023-10-12T09:41:22Z
http://arxiv.org/abs/2310.08167v1
# Multiclass Classification of Policy Documents with Large Language Models

###### Abstract

Classifying policy documents into policy issue topics has been a long-time effort in political science and communication disciplines. Efforts to automate text classification processes for social science research purposes have so far achieved remarkable results, but there is still much room for progress. In this work, we test the prediction performance of an alternative strategy, which requires much less human involvement than full manual coding. We use the GPT 3.5 and GPT 4 models of OpenAI, which are pre-trained, instruction-tuned Large Language Models (LLMs), to classify congressional bills and congressional hearings into the Comparative Agendas Project's 21 major policy issue topics. We propose three use-case scenarios and estimate overall accuracies ranging from 58% to 83% depending on the scenario and GPT model employed. The three scenarios aim at minimal, moderate, and major human interference, respectively. Overall, our results point towards the insufficiency of complete reliance on GPT with minimal human intervention, an increasing accuracy along with the human effort exerted, and a surprisingly high accuracy achieved in the most humanly demanding use-case. However, the superior use-case achieved 83% accuracy on the 65% of the data in which the two models agreed, suggesting that a similar approach to ours can be relatively easily implemented and allow for mostly automated coding of a majority of a given dataset. This could free up resources, allowing manual human coding of the remaining 35% of the data to achieve an overall higher level of accuracy while reducing costs significantly.

## Introduction

Text is a highly valuable source of information for political scientists, as political actions, debates and outcomes are often the main subject in a wide array of different documents, for instance newspaper articles, parliamentary and bureaucratic records, social media posts, press releases, court documents and political manifestos (Grimmer & Stewart, 2013; Wilkerson & Casas, 2017). Due to the inherently unstructured nature of textual data, it has traditionally been analyzed with primarily qualitative methods, which has limited the amount of data that could be processed. The rise of computational social science has enabled researchers to train algorithms for their particular use cases with off-the-shelf methods of machine learning to encompass a very high volume of textual data, thus allowing a seemingly revolutionary efficiency increase and expanding the possibilities of testing observable implications of theories in many branches of political science (Lazer et al., 2009; Gentzkow, Kelly & Taddy, 2019). The increase in valuable computational tools does not come without costs (Denny and Spirling, 2018). One of the most discussed issues is the error rate of computational language models, which has, perhaps due to this focus, been drastically improved upon through technological advances such as neural networks, transformers, and Large Language Models (LLMs) (Bosley et al., 2023). In this paper, we therefore approach the literature from the vantage point of other, less debated issues that are hindering the wide adoption of computational text techniques and test whether the recent advance of instruction-tuned LLMs, and particularly OpenAI's GPT (OpenAI, 2023) models, provides sufficient answers to these challenges.
The first challenge is that training one's own algorithm requires both theoretical and practical expertise, which puts the entry barrier to adoption relatively high. The costs of adopting the technical aspects of computational text analysis approaches are unlikely to be evenly distributed, as has historically been the case during other major methodological transitions in the social sciences. In an academic reality with increasing demands and time constraints, this could in turn prevent some parts of the scholarly community from adopting these newer tools. The second challenge is that training context-specific algorithms requires substantial text corpora and, in some cases, also manual hand coding of a large amount of text. This can make the use of computational methods unfeasible or impractical in cases where the text amounts needed for a given analysis are more than one could reasonably handle by hand-coding but at the same time insufficient for training an algorithm from scratch. The third challenge is that many conventional computational methods for text classification require multiple steps of text pre-processing, which requires extensive documentation to ensure the reproducibility of the classification output. Additionally, many computational approaches are highly sensitive to text pre-processing choices, and limiting the amount of possible variability in those choices could improve consistency across research projects. At face value, the recent development of instruction-tuned LLMs, such as OpenAI's GPT models (Radford et al., 2019), might offer an answer to those challenges. First, OpenAI provides a simple interface to interact with their models, thereby substantially decreasing the amount of technical expertise needed to use the algorithm. In practice, it reduces technical requirements by moderately shifting the focus from programming to prompt engineering. Furthermore, access to OpenAI's API makes projects highly scalable and might allow an intuitive understanding to be developed using the interface and then translated to the programming environment. Secondly, the pre-trained nature of the GPT models means that use by the individual researcher would be akin to zero-shot learning (Radford et al., 2019), as the model requires no training to be used. Consequently, the quality of the results derived from the model is much less dependent on the amount of data the individual researcher puts into it. However, implementing any language model in an academic context at face value, without tests of its performance, is ill-advised at best. The aim of this paper is therefore to provide initial indications of how some of the most widely used instruction-tuned LLMs might perform on a concrete multiclass classification task with relevance for political science. To provide such a test, we use data on congressional bills and congressional hearings and instruct the GPT models to classify the bills according to the coding scheme from the Comparative Agendas Project. This provides us with the opportunity to evaluate the performance of recent GPT models against the 'gold standard' of human coding. To get a sense of how the improvements in the algorithm between versions affect the model's ability to perform classification tasks, we carry out tests with both GPT 3.5 and GPT 4. Furthermore, this allows us to compare agreement rates between the two models. To situate and illustrate researchers' use of GPT, we develop three scenarios or use-cases and report results for each scenario.
In the first use-case, researchers rely only on GPT for classification, with no human intervention. In the second use-case, humans assist with cleaning and structuring. In the third case, the GPT models are combined to achieve a high accuracy score for a subset of the data, and humans assist with cleaning, structuring and manual coding of the remaining data. This allows us to evaluate how the algorithm performs along varying degrees of human effort exerted. Our results suggest that while the GPT models perform surprisingly well on multiclass classification tasks, the accuracy is still some way from rivaling the performance seen in similar classification tasks from state-of-the-art custom-tailored approaches. Our comparison between the GPT 3.5 and GPT 4 models suggests a significant increase in accuracy between the two versions of the algorithm, signifying that LLMs such as GPT may reach efficiency levels appropriate for use in the social sciences in the future. Comparing the two models, we see a roughly 65% agreement between the classifications. If we isolate only the cases in which the two models agree, the overall accuracy score is above 80%, thereby rivaling many state-of-the-art approaches to classification in political science. Evaluating our use-cases, we conclude that complete reliance on GPT is suboptimal and should be avoided, while significant cost reduction and highly promising accuracy results can be achieved when combining machine and human effort to the highest degree. We conclude the paper by discussing prompt engineering, field specificity, combining models, and the possibility of fine-tuning the GPT models in the future for use in political science.

## Large Language Models and Social Science Research

Large Language Models (LLMs) are distinguished by the amount of data they are trained on, the scale of the neural network underlying them, and the scope of tasks they are aimed to achieve. Computational social science research on text classification has so far mainly progressed with the use and development of custom natural language processing models specializing in specific tasks and trained on data to optimize performance on those tasks. LLMs are foundation models which aim to imitate natural language generation by humans; therefore they have the potential to perform all kinds of specific text analysis and generation tasks that specialized models are trying to implement, though possibly with varying levels of performance across different tasks (Kocon et al., 2023). This flexibility in LLMs' use cases in the context of text annotation is one of the main advantages of LLMs compared to more conventional computational text annotation methods. LLMs can be divided into two types: base LLMs and instruction-tuned LLMs. The former is basically a model that predicts the next token given the provided text. Instruction-tuned LLMs use the next-token prediction mechanism to generate responses to user prompts. They are developed using examples of prompt and response pairs as training data, and can be further refined using reinforcement learning from human feedback. The use of LLMs for various NLP tasks has been surging in the last few years thanks to the introduction of powerful models such as BERT and the Generative Pre-trained Transformer (GPT), and it has started gaining more momentum with the introduction of instruction-tuned LLMs and user interfaces that enable conversational interaction with them, such as ChatGPT.
Text classification using instruction-tuned LLMs differs from conventional unsupervised approaches like topic models and from custom supervised models. While topic models are traditionally fully unsupervised\({}^{1}\) and identify latent topics rather than known classes, instruction-tuned LLMs can be guided towards known target classes through instructions passed as a prompt to the model. Compared to custom supervised models, LLMs require no labeled training data, instead relying on pre-trained knowledge to infer the label of a given text based on the context and instructions provided. This reduces the need for expensive and time-consuming data labeling processes, which are often necessary for traditional supervised learning. Furthermore, instruction-tuned LLMs can quickly adapt to new classification tasks without the need for re-training from scratch. This makes them highly flexible and versatile across a wide range of applications. However, the efficacy of instruction-tuned LLMs can be contingent upon the clarity and specificity of the instruction prompts. In scenarios where the distinction between classes is subtle, a well-crafted prompt becomes crucial. Nonetheless, this approach shows the power of leveraging vast amounts of pre-existing knowledge to perform text classification without extensive fine-tuning or labeled datasets.

Footnote 1: For recent developments in semi-supervised topic modelling in political science, see: Eshima, Imai and Sasaki, 2023.

Research on the efficacy of instruction-tuned LLMs for various social science research tasks has been growing rapidly, especially since the introduction of ChatGPT in late 2022. Some remarkable examples include the use of OpenAI's GPT models for simulating survey responses using sociodemographic information about potential survey respondents (Argyle et al., 2023), measuring the latent ideology of politicians (Wu et al., 2023) and detecting news pieces with misinformation or disinformation (Caramancion, 2023). Recently, we have seen an influx of studies evaluating the capabilities of those models in performing diverse text annotation tasks such as topic detection, stance detection, and frame detection in tweets (Gilardi, Alizadeh and Kubli, 2023), sentiment analysis in tweets (Zhu et al., 2023) and classification of Twitter accounts by political affiliation (Tornberg, 2023). A prevailing conclusion in those studies is that the more difficult the text annotation problem, e.g. a larger number of classes, fuzzy semantic boundaries etc., the lower the performance of GPT models compared to the state-of-the-art model(s) for the specific text annotation task (Kocon et al., 2023). However, in most scenarios, they were able to achieve sufficiently good performance with a zero-shot learning strategy, which requires much less overall effort than human labelling or computational labelling using more conventional methods. In this study, we test the text classification capabilities of the two most advanced instruction-tuned LLMs released by OpenAI, i.e. GPT 3.5 and GPT 4, on a relatively more complex classification task. We use those two models to classify policy documents into policy issue topic categories, which is a multiclass classification task with more than 20 distinct classes. In the next section, we elaborate on the data and the procedures we followed in our tests.

## Data and Research Setup

We use two large datasets from the Comparative Agendas Project's database, which include information about the issue topic category of legislative policy documents and events.
The first dataset contains policy bill titles from the US Congress and issue topic labels assigned by human research assistants. The second one contains congressional hearing descriptions and issue topic labels assigned by human research assistants. Both datasets use the CAP topic scheme, which contains 21 major topics such as macroeconomics, health, agriculture, etc., and numerous subtopics under each major topic. The congressional bills dataset has an additional category, i.e. private bills, which we named 'other' in our tests. The extensive use of these two datasets in previous computational social science research to examine various text classification approaches allows us to benchmark the performance of our method against other strategies. Summary information about those data can be seen in Table 1.

\begin{table}
\begin{tabular}{|l|c|c|l|}
\hline
**Dataset** & **N** & **t** & **Example Text** \\
\hline
Congressional Bills & 468,438 & 1947-2016 & \(\bullet\) A bill to require the Secretary of Homeland Security and the Secretary of State to accept passport cards at air ports of entry and for other purposes. \(\bullet\) To amend the Internal Revenue Code of 1986 to provide a tax credit for the costs of college textbooks. \(\bullet\) A bill to amend the Small Business Act to direct the task force of the Office of Veterans Business Development to provide access to and manage the distribution of overseas excess or surplus property to veteran-owned small businesses. \\
\hline
Congressional Hearings & 102,151 & 1946-2020 & \(\bullet\) Scope of Soviet activity in the US. \(\bullet\) Tax treatment of recycling of solid waste. \(\bullet\) World petroleum outlook for 1982. \\
\hline
\end{tabular}
\end{table}
Table 1: Description of Data

We use two slightly different learning strategies to test the capabilities of two pre-trained LLMs from OpenAI, the GPT 3.5 and GPT 4 algorithms respectively. In our tests with GPT 3.5 we use a fully zero-shot learning strategy, while we use a mostly zero-shot learning strategy in our experiments with the GPT 4 algorithm. In the fully zero-shot learning situation, we do not show any examples of text and label pairs from the dataset for any of the classes. In the mostly zero-shot situation, we show a few text and label pairs for two of the classes. An alternative strategy, which we do not use in this work, would be few-shot learning, in which we would show a few examples for each candidate label in our prompts. Studies indicate that a few-shot learning strategy can significantly improve LLM performance (Brown et al., 2020); however, we limit the use of that strategy to the classes which the GPT 4 algorithm was struggling to get right in the small, non-systematic tests we ran before the large-scale tests whose results we present in the next section. We could not use that strategy for all classes, because providing even a couple of labeled examples for each of the 21 classes would exceed the context length limits, especially with the congressional bill titles, which on average have a longer token length than the hearing descriptions. (Token limits for LLMs have increased since the time of our tests, so few-shot learning strategies have since become more feasible.) Still, zero-shot learning provides a more convenient and economical approach for classifying documents into the CAP major topics, as no hand-labeled examples are required.
We use the prompt structures depicted in Figure 1 in our text classification tests with OpenAI's LLMs. The cost-per-token and context length differences between the two models were the main reasons why we used two different prompt structures. Since our main focus in this work is not comparing the two models, but offering an alternative approach to existing text classification approaches in computational social science, we do not pay special attention to the alignment between the prompts we use with these two different models. However, in spite of the limited context we were able to provide in the GPT 4 prompts and asking the algorithm to classify 100 titles in a single prompt, we observed a considerable improvement when we used the GPT 4 algorithm. The GPT 3.5 model is much cheaper than the GPT 4 model: at the time we conducted these tests, the former model's cost per 1K tokens was $0.002, compared with $0.03 for the latter. In our experiments with GPT 3.5, we included a single bill title or hearing description in each prompt, along with the class labels, their full descriptions from the CAP master codebook and instructions about the classification task. For GPT 4, with higher token limits and a higher cost per token, we included a batch of multiple titles or hearings in each prompt, along with just the candidate class labels and some instructions about the nuances of some issue topics.

Figure 1: Prompt Design

In the GPT 4 prompts, we also included a few example titles from the private bills and government operations categories. Private bills have a very standard structure, where each title begins with the phrase "for the relief of". A researcher working with the congressional bills data will quickly notice that structure. We showed two examples from that category in our prompts. In our small-scale experiments, we also realized the GPT 4 algorithm was struggling with the government operations class and decided to show an example of that class, too. We also included some information to help the GPT 4 algorithm on issues that could be associated with multiple classes, such as abortion and veterans affairs. In the CAP scheme, abortion-related issues are classified under the civil rights topic, but in our small-scale experiments, we realized the GPT 4 algorithm sometimes associated that issue with the health category. The same problem was observed for the veterans affairs issue, where the GPT 4 algorithm sometimes put veterans-affairs-related issues under the social welfare topic, while the CAP scheme puts them under the defense topic. Examples of the full prompts we used with the GPT 3.5 and GPT 4 algorithms can be found in Appendix A. To query the LLMs, we use OpenAI's API, which provides convenient access to models like GPT 3.5 and GPT 4 through API calls in Python. We specifically use the gpt-3.5-turbo-0301 and gpt-4-0314 models. The last four digits in the model names refer to the dates the models were released. OpenAI later released updated versions of those models, which had not been released by the time we did our tests. OpenAI's API allows controlling some model parameters, such as the temperature parameter, which controls the randomness in the model output. That parameter ranges from 0 to 1, with lower values resulting in more conservative, deterministic responses. We set the temperature parameter to 0 for both models to obtain almost deterministic and reproducible results. Although those models are not perfectly deterministic at temperature 0, they produce near-identical results across repeated runs.
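As an illustration of the querying step, a minimal sketch along these lines might look as follows. The prompt wording, the `classify_title` helper and the `CAP_TOPICS` list are ours and only approximate the prompts reproduced in Appendix A; the call uses the legacy `openai` Python interface (v0.x) that was current at the time of the tests.

```python
# Hypothetical sketch of a zero-shot CAP topic classification call.
import openai

openai.api_key = "YOUR_API_KEY"

CAP_TOPICS = [
    "Macroeconomics", "Civil Rights", "Health", "Agriculture", "Labor",
    "Education", "Environment", "Energy", "Immigration", "Transportation",
    "Law and Crime", "Social Welfare", "Housing", "Domestic Commerce",
    "Defense", "Technology", "Foreign Trade", "International Affairs",
    "Government Operations", "Public Lands", "Culture", "Other",
]

def classify_title(title: str, model: str = "gpt-3.5-turbo-0301") -> str:
    prompt = (
        "Classify the following congressional bill title into exactly one "
        "of these policy topics and respond with the topic label only:\n"
        f"{', '.join(CAP_TOPICS)}\n\nTitle: {title}"
    )
    response = openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # near-deterministic output, as in the paper
    )
    return response["choices"][0]["message"]["content"].strip()

print(classify_title("To amend the Internal Revenue Code of 1986 to provide "
                     "a tax credit for the costs of college textbooks."))
```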
We generate responses from the models in a pre-defined format, asking them to output just the predicted class label for each input observation. However, GPT 3.5 was not always successful in generating the response in the desired format, while GPT 4 always gave the response in the desired format.

## Results

We present the results using three different scenarios, which correspond to three different ways researchers may use the LLM-generated classification output. In each of those scenarios, we have increasing levels of human involvement in the text classification process. Human-computer collaboration has been suggested as a strategy to improve text classification productivity and quality (Loftis and Mortensen, 2020), and here we use these scenarios to show how human-computer collaboration can also improve the performance of text classification with LLMs. In the first scenario, we report classification performance metrics where the researchers use the full LLM output without touching the predictions, regardless of whether they belong to the set of candidate labels. In the second scenario, researchers exclude the texts with predicted labels which do not exist within the candidate labels. In the third scenario, researchers use a label prediction only when the two algorithms agree.

### Scenario 1: Using Untouched LLM Predictions

In the first scenario, we present results based on the assumption that researchers will not touch the output from the LLM models. They will utilize the predicted labels as they are, even if they do not exist in the candidate labels set. Our aim in presenting that scenario is to demonstrate the performance achievable with minimal human intervention in the classification process. As such, any classifications with labels that do not appear in the candidate labels set are treated as incorrectly coded in that scenario. Table 2 shows the performance metrics of the different GPT models under that scenario. It specifically highlights three performance indicators for the models: accuracy, F1 score, and weighted F1 score. These metrics are essential as they provide an overall view of the model's ability to correctly predict and balance precision and recall, which respectively refer to how often the model is correct when it predicts a certain class and how well the model identifies all instances of each class.

\begin{table}
\begin{tabular}{|l|l|l|l|l|l|}
\hline
**Model** & **Dataset** & **Sample Size** & **Accuracy** & **F1 Score** & **Weighted F1 Score** \\
\hline
GPT 3.5 Turbo & Bills & 11,300 & 0.63 & 0.55 & 0.63 \\
\hline
GPT 3.5 Turbo & Hearings & 11,011 & 0.59 & 0.56 & 0.61 \\
\hline
GPT 4 & Bills & 11,300 & 0.69 & 0.60 & 0.70 \\
\hline
\end{tabular}
\end{table}
Table 2: Overall accuracy performance of GPT models when all label predictions are included in the evaluation

We tested the GPT 3.5 Turbo model on the Congressional Bills dataset and the Congressional Hearings dataset, while the GPT 4 model was tested only on the Congressional Bills dataset. The sample sizes for these datasets vary, with the Congressional Bills dataset having slightly larger sample sizes in both model instances. Starting with GPT 3.5 Turbo, on the congressional bills dataset the model achieved an accuracy of 0.63, an F1 score of 0.55, and a weighted F1 score of 0.63. When the same model was tested on a random sample from the congressional hearings dataset, there was a moderate dip in accuracy to 0.59, but the model maintained an F1 score of 0.56 and a slightly decreased weighted F1 score of 0.61.
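For reference, the headline metrics reported in Table 2 can be reproduced from a vector of gold labels and predictions in a few lines; the sketch below assumes that the unweighted F1 is a macro average, which the table does not state explicitly.

```python
# Minimal sketch: accuracy, macro F1, and weighted F1 from label vectors.
from sklearn.metrics import accuracy_score, f1_score

y_true = ["Health", "Defense", "Health", "Agriculture"]   # human CAP labels
y_pred = ["Health", "Defense", "Labor", "Agriculture"]    # model predictions

print("accuracy    :", accuracy_score(y_true, y_pred))
print("macro F1    :", f1_score(y_true, y_pred, average="macro"))
print("weighted F1 :", f1_score(y_true, y_pred, average="weighted"))
```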
Conversely, when we evaluated the GPT 4 model using the Congressional Bills dataset, there was a marked improvement in performance. The model reached an accuracy of 0.69, outperforming GPT 3.5 Turbo. Additionally, the F1 score and weighted F1 score for GPT 4 on this dataset were 0.60 and 0.70, respectively. This suggests that while both models show competency in classifying data from the two datasets, the GPT 4 model exhibits superior performance on the Congressional Bills dataset when compared to its predecessor. In Appendix B, Table 5 and Table 6 present the class-specific performance metrics for GPT 4 and GPT 3.5 with the congressional bills dataset. Overall, GPT 4 demonstrates comparable or superior performance in precision, recall, and F1 score across a majority of the classes when juxtaposed against GPT 3.5. Particularly on the health, agriculture and private bills (Other) topics, both models exhibit high accuracy performance, with GPT 3.5 Turbo marginally leading on the health and private bills topics. The immigration topic reveals a notable divergence in performance, where GPT 3.5 significantly outperforms GPT 4, especially in terms of F1 score. Though GPT 4 manifests heightened recall on both the environment and technology topics, this leads to some compromise in precision within the environment class. On the topics of defense and international affairs, GPT 4 outperforms GPT 3.5 by a large margin. In Table 7, we present the GPT 3.5 model's class-specific performance on the congressional hearings dataset. We do not observe any class with an F1 score above 80 percent, while the health, agriculture, and energy topics achieve F1 scores close to that level. Some classes, including labor, education, and transportation, hover around the mid-70s in terms of F1 score, suggesting a relatively consistent performance across various topics in the dataset.

### Scenario 2: Omitting Non-Matching Predicted Labels

In the second scenario, we assume the researchers will filter out the predicted labels which do not match any candidate labels from the topic scheme, as well as responses which do not match the desired response format. For example, in our GPT 3.5 experiment with the congressional bills data, the predicted labels set has some labels, such as 'veterans affairs' and 'tax policy', which are not among the candidate labels we provided in our prompts. In our experiments with the congressional hearings dataset, the GPT 3.5 model sometimes gave responses with explanations of the prediction or responses that have additional text beyond the label prediction. In this second scenario, we assume the researchers will put those kinds of predictions aside and label those texts manually. That requires a bit more human involvement compared to the first scenario, but it results in a considerable classification quality gain with minimal additional human effort. We report the classification performance results for the machine-labelled portion of the data in Table 3 below. The GPT 3.5 model achieves an accuracy of 0.67, an F1 score of 0.59, and a weighted F1 score of 0.66 when we test it with the congressional bill titles. The same model on the congressional hearings dataset achieves an accuracy of 0.64, an F1 score of 0.61 and a weighted F1 score of 0.65.
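The filtering step of this scenario, and the model-agreement step of Scenario 3 described below, amount to a few lines of table manipulation. The following is only an illustrative sketch: the column names, the toy label set and the toy rows are hypothetical.

```python
# Minimal sketch: Scenario 2 drops predictions outside the candidate label
# set; Scenario 3 keeps only rows where the two models agree.
import pandas as pd

CANDIDATE_LABELS = {"Health", "Defense", "Agriculture", "Labor", "Other"}

df = pd.DataFrame({
    "gold":  ["Health", "Defense", "Health", "Agriculture"],
    "gpt35": ["Health", "veterans affairs", "Labor", "Agriculture"],
    "gpt4":  ["Health", "Defense", "Health", "Agriculture"],
})

# Scenario 2: keep only predictions that are valid candidate labels;
# the remaining texts would be routed to manual coding.
valid = df[df["gpt35"].isin(CANDIDATE_LABELS)]
print("scenario 2 accuracy:", (valid["gpt35"] == valid["gold"]).mean())

# Scenario 3: trust only the titles on which the two models agree.
agree = df[df["gpt35"] == df["gpt4"]]
print("agreement rate     :", len(agree) / len(df))
print("scenario 3 accuracy:", (agree["gpt4"] == agree["gold"]).mean())
```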
The GPT 4 model's metrics for the Congressional Bills dataset remained same because \begin{table} \begin{tabular}{|l|l|l|l|l|l|} \hline **Model** & **Dataset** & **Sample** & **Accuracy** & **F1 Score** & **Weighted** \\ & & **Size** & & & **F1 Score** \\ \hline GPT 3.5 Turbo & Bills & 10,662 & 0.67 & 0.59 & 0.66 \\ \hline GPT 3.5 Turbo & Hearings & 10,059 & 0.64 & 0.61 & 0.65 \\ \hline GPT 4 & Bills & 11,300 & 0.69 & 0.60 & 0.70 \\ \hline \end{tabular} \end{table} Table 3: Overall accuracy performance of GPT models when the predicted labels that do not exist in the coding scheme provided in the prompt are excluded it did not suffer from the same problem where GPT 3.5 made up labels that do not exist in the candidate labels set or did not generate the response in the desired format. ### Scenario 3: Trusting Only Mutual Predictions of GPT Models In the third scenario, we assume the researcher will only rely on the predictions where the two models agree. We have test results from both models for the congressional bills data, but not for the congressional hearings data. Thus, we only report results for the congressional bills data in that scenario. Table 4 shows the overall classification performance metrics for the machine labelled portion of the data. The two models agreed on approximately %65 of the congressional bill titles. Within that subset of the random sample, the accuracy is significantly higher than the overall accuracy of both models in the other two scenarios presented earlier. While there is a much larger classification quality gain in that scenario, now the researcher will need to spend more human effort on the approximately %35 of the data where the two models disagreed. In that scenario, both GPT models achieve an accuracy of 0.83 for the congressional bills dataset. This is a significant improvement compared to the first and second scenarios where GPT 4, the better performing model, achieved an accuracy of 0.69. We observe similar improvement in F1 score and weighted F1 score metrics. GPT 4's F1 score in the third scenario went up to 0.73 from the 0.60 score in the firs and second scenarios, and the weighted F1 score went up to 0.83 from 0.70 in the previous two scenarios. This enhancement indicates that when both models agree on predictions, classification quality substantially increases. However, this approach also leaves %35 of the titles on which the models diverged unaddressed. Class-specific performance metrics also suggest a significant improvement over the other two scenarios. Now, there are ten topics with at least %80 F1 score, and all topics except "international affairs" have at F1 scores above %50. The 'private bills" category, which is the \begin{table} \begin{tabular}{l|l|l|l|l|l} **Model** & **Dataset** & **Sample Size** & **Accuracy** & **F1 Score** & **Weighted F1 Score** \\ \hline GPT 3.5 Turbo & Bills & 7,291 & 0.83 & 0.73 & 0.83 \\ \hline GPT 4 & Bills & 7,291 & 0.83 & 0.73 & 0.83 \\ \end{tabular} \end{table} Table 4: Overall accuracy performance of GPT models when the titles on which the two models’ predictions disagree are excluded most common category in the random sample, as well as in the full dataset, has almost perfect accuracy. ## Discussion In this paper we tested how the 3.5 and 4.0 versions of the GPT model performed several multiclass classification tests aiming at identifying a list of prespecified political topics commonly used in the CAP project in inherently political texts. 
We proposed three use-case scenarios and estimated overall accuracies ranging from 58% to 83% depending on the scenario and GPT model employed. The three scenarios aimed at minimal, moderate, and major human interference, respectively. Overall, our results point towards the insufficiency of complete reliance on GPT with minimal human intervention, an increasing accuracy along with the human effort exerted, and a surprisingly high accuracy achieved in the most humanly demanding use-case. However, the superior use-case achieved the 83% accuracy on the 65% of the data in which the two models agreed, suggesting that a similar approach to ours can be relatively easily implemented and allow for mostly automated coding of a majority of a given dataset. This could free up resources, allowing manual human coding of the remaining 35% of the data to achieve an overall higher level of accuracy while reducing costs significantly. Our results are slightly less optimistic than other recent papers examining how GPT models might help political scientists. For instance, Wu and colleagues (2023) show that GPT 3.5 can be used to scale American politicians according to ideological orientation. Furthermore, Tornberg (2023) shows that ChatGPT outperforms both crowd and MTurk coders at classifying whether tweets originate from Democrats or Republicans, suggesting that LLMs might sometimes be superior to frequently used human alternatives. Gilardi and colleagues (2023) similarly show that GPT outperforms coders at classifying tweets into six categories. Comparing our results with those of Tornberg and of Gilardi and colleagues, a likely difference that may contribute to our divergent findings is the complexity of the coding task. While many classification tasks may only require a couple of categories and likely yield results similar to Tornberg's, future users of the GPT framework in political science should be mindful of the likely loss of accuracy associated with complex multiclass classification for GPT. However, as the GPT framework has been consistently updated, and will likely continue to be improved upon, future research should assess upcoming versions of GPT, as its accuracy at any given stage will be an empirical question. While comparing our results to other studies of the GPT framework helps situate our findings in the emerging literature on the use of GPT, it is also necessary to situate our findings relative to other high-profile examples of classification in political science to gauge the accuracy of GPT relative to standards within the field. This may yield insights into valuable use-cases for GPT coding. The case that may be most directly comparable to ours is Loftis & Mortensen (2020), who use the CAP scheme and naive Bayes classification on Danish municipal council meetings and achieve accuracy scores around 67% to 75%. As the accuracy scores of our GPT test with minimum human intervention fall within 67% to 69%, GPT may be a viable alternative to naive Bayes classification without a substantial accuracy loss and without a substantive increase in human labor. One possible caveat of using GPT for classification is apparent when we consider precision results within different categories. While the best performing categories achieve precisions as high as 0.86-0.89, a few others achieved only 0.27 to 0.32 (see Appendix B, Tables 5-10).
What is furthermore of note is that topic accuracy varies between GPT 3.5 and GPT 4, so it is not necessarily the same topics that are accurately categorized, suggesting that further development of the algorithm is needed before it can be reliably used for classification tasks on specific topics. The most problematic case is the immigration topic, which, when coded according to scenario 1, achieves an unsatisfactorily low 0.28 using GPT 4.0 but had shown very high performance using GPT 3.5, resulting in a 0.85 accuracy score. However, an overall comparison of within-topic accuracy scores between the three scenarios suggests a vast improvement when scenario 3 is considered, thus suggesting that this may be the best overall strategy to decrease variation in accuracy scores across topics. Further studies should seek to evaluate within-topic accuracy scores to see if a pattern emerges in which topics are consistently underperforming and may therefore need extra manual quality control on the back end of the process. While our results suggest that the GPT models do not reach sufficient accuracy compared to what is currently achieved in the literature using problem-tailored algorithms, there may still be easily overlooked advantages that in the future could tip the scale towards large language models like GPT. Most of the literature for which comparable accuracy estimates can be obtained presents in-sample test accuracy results, e.g. testing the results of the training set against a subset of the data obtained by splitting the sample into training and test sets. Terechshenko and colleagues (2020) present a study with parameters highly comparable to ours, which tests both in- and out-of-sample prediction of a variety of categorization algorithms. Furthermore, they also rely on congressional bills and the CAP scheme. Comparing our results, they systematically find that in-sample predictions outperform the accuracy achieved by our GPT tests, whether the fitted model is based on linear SVM, logistic regression, random forest, RoBERTa, ULMFiT or XLNet. However, the out-of-sample prediction rates, as tested by using their algorithms trained on CAP hearings on New York Times CAP data, fail to rival the accuracy level of our tests with the GPT models for all of their specifications. Given that coding using GPT could essentially be thought of as out-of-sample prediction from the get-go, this comparison highlights the exact advantage of the GPT framework: it has fairly high accuracy across different contexts. As the ultimate goal of training models in political science is usually out-of-sample prediction, this could highly favor GPT as a coding tool. However, whether our test is truly out-of-sample prediction rests on one crucial assumption that future users of GPT in political science must be consciously aware of. One of the current issues with relying on algorithms pre-trained by external actors is that there is limited transparency about where the sources of training data are collected from. On the one hand, seeing as many data sources in political science are publicly available, they may already have been included in the model's training and may therefore boost the model's ability to classify these particular texts. On the other hand, it might mean that our accuracy in these tests is biased either up or down depending on whether the data has been used in the pre-training of GPT.
Following from this logic, the ability to generalize our results to other use cases, which may themselves be built on data that could possibly have been used in training GPT, may sometimes be limited. However, such generalization must be assessed through a case-by-case comparison of the data future users want to explore using GPT. As a possible baseline, we consider it fairly likely that the publicly available congressional bills and hearings have been included in the training, but there is truly no way of knowing without increased transparency from OpenAI. In line with other studies, our results also suggest that prompt engineering is an important aspect of using GPT to perform classification tasks. Importantly, however, our results also suggest that GPT 4 is much less susceptible to performance enhancement through prompt engineering than GPT 3.5. In turn, this indicates that while future uses of GPT for classification tasks need to consider prompt engineering a mandatory aspect of GPT work, its importance is likely to decrease with future updates. GPT models, except GPT 4, have now been made available for fine-tuning. This means that with relatively little additional effort the algorithm may be developed with further training to overcome possible issues with field specificity. It may be the case that classification of political documents achieves a relatively high baseline of accuracy, given that many political text sources are freely available on the internet and may have been part of the data the GPT model was trained on. This may be less likely for fringe issues that are politically less salient or perhaps country-specific, and accuracy may suffer as a result when using a GPT framework on edge cases, rare political subjects, or data from eras with less publicly available digitized data. Furthermore, while this study's results show the eventual promise of the GPT framework in coding tasks on political texts, future research will have to determine how well this extends beyond the political sphere. Caution is advised in using GPT on other types of texts before performance has been evaluated.

## Conclusion

In this paper we examined the viability of two versions of the GPT model for the coding of policy documents. We proposed three approaches researchers might use with varying degrees of human-computer interaction. Our results suggest that the approach based on the highest degree of human-computer interaction allowed for an accuracy rivaling those seen in many state-of-the-art approaches while substantially cutting down the need for manual coding and therefore the costs associated with classification. Mainly relying on the GPT models without substantial human intervention is currently not viable if sufficient performance is to be achieved, but LLMs may improve sufficiently in the future to make this approach viable. The development of open-source LLM frameworks may help alleviate concerns such as black-box problems relating to data input, costs, and transparency.
2310.07254
Attosecond control of solid-state high harmonic generation using ω-3ω fields
Controlling the electron dynamics in matter by individual oscillations of light fields has recently led to the development of attosecond metrology. One of the phenomena resulting from the nonlinear response of materials in the strong-field interaction regime is the coherent emission of high-energy photons. High harmonic spectra carry the fingerprints of sub-cycle electronic motion and the energy structure of the studied system. Here we show that tailoring the waveform of the driving light by using a coherent combination with its third harmonic frequency allows us to control the electron tunneling time within each half-cycle of the fundamental wave with attosecond precision. We introduce an experimental scheme in which we simultaneously monitor the modulation of the amplitude and emission delays of high harmonic radiation and the excited electron population generated in crystalline silicon as a function of the relative phase between the ω-3ω fields. The results reveal unambiguously the connection between the dynamics of electron tunneling and high harmonic generation processes in solids.
Adam Gindl, Pawan Suthar, František Trojánek, Petr Malý, Thibault J. -Y. Derrien, Martin Kozák
2023-10-11T07:30:23Z
http://arxiv.org/abs/2310.07254v1
# Attosecond control of solid-state high harmonic generation using \(\omega\)-3\(\omega\) fields

###### Abstract

Controlling the electron dynamics in matter by individual oscillations of light fields has recently led to the development of attosecond metrology. One of the phenomena resulting from the nonlinear response of materials in the strong-field interaction regime is the coherent emission of high-energy photons. High harmonic spectra carry the fingerprints of sub-cycle electronic motion and the energy structure of the studied system. Here we show that tailoring the waveform of the driving light by using a coherent combination with its third harmonic frequency allows us to control the electron tunneling time within each half-cycle of the fundamental wave with attosecond precision. We introduce an experimental scheme in which we simultaneously monitor the modulation of the amplitude and emission delays of high harmonic radiation and the excited electron population generated in crystalline silicon as a function of the relative phase between the \(\omega\)-3\(\omega\) fields. The results reveal unambiguously the connection between the dynamics of electron tunneling and high harmonic generation processes in solids.

## Main text

High harmonic spectroscopy has recently become an indispensable tool for the investigation of light-driven coherent electron dynamics in atoms [1, 2, 3] and solid-state systems [4, 5]. In a material illuminated by a nonresonant light wave with a high field amplitude, the regime of electron excitation differs from classical optics, in which the time-dependent potential generated in a material by the applied field can be considered a small perturbation to the system. When the field amplitude of light increases such that the probability of electron tunnelling to the excited state per half period of the driving wave becomes larger than the perturbative transition probability, the light-matter interaction takes place in the nonperturbative strong-field regime [6], which represents the cornerstone of attosecond physics. This regime is characterized by the dimensionless Keldysh parameter \(\gamma_{\rm K}=\omega\sqrt{m^{*}E_{\rm g}}/(eF_{0})<1\) [7], where \(E_{\rm g}\) is the band gap energy, \(m^{*}\) is the reduced effective mass of an electron-hole pair, \(F_{0}\) and \(\omega\) are the electric field amplitude and frequency of the driving wave, and \(e\) is the electron charge. One of the phenomena resulting from the coherent nonlinear electron dynamics in the strong-field regime is high harmonic generation (HHG) [8, 9]. When this process occurs in solids, photons with energies corresponding to higher-order multiples of the incident photon energy are generated as a consequence of coherent nonlinear interband polarization and intraband electron dynamics [10]. The interband HHG can be understood in the framework of a three-step semi-classical model adopted from atomic physics [9]. The electron first tunnels to the conduction band in a narrow time window close to each maximum of the electric field of the driving wave. Subsequently, the coherent electron-hole wavepacket is accelerated by the laser field in the crystal and may eventually recombine while emitting a high-energy photon. The amplitude, phase and polarization of the emitted high harmonic radiation are tightly linked to the electric field waveform of the driving pulse and to the band structure of the material.
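To make the regime boundary concrete, the Keldysh parameter can be evaluated directly. The sketch below is illustrative only: the reduced mass and the use of the indirect band gap of silicon are our assumptions for the sake of the example, not the values used in the analysis of the paper.

```python
# Minimal sketch: evaluating the Keldysh parameter
# gamma_K = omega * sqrt(m_red * E_g) / (e * F0).
import numpy as np
from scipy.constants import c, e, m_e, eV

wavelength = 2000e-9                 # driving wavelength [m]
omega = 2 * np.pi * c / wavelength   # angular frequency [rad/s]
E_g = 1.12 * eV                      # indirect gap of silicon [J] (assumed)
m_red = 0.2 * m_e                    # reduced e-h effective mass (assumed)
F0 = 1.6e9                           # peak field in the material [V/m]

gamma_K = omega * np.sqrt(m_red * E_g) / (e * F0)
print(f"gamma_K = {gamma_K:.2f}")    # < 1 indicates the strong-field regime
```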
It has been shown that HHG can be controlled by changing the polarization of the driving wave with respect to the high-symmetry axes of the crystal [11, 12, 13] or the carrier-envelope phase of ultrashort pulses [14, 15, 16], by a coherent combination of phase-controlled \(\omega\)-2\(\omega\) fields [17, 18, 19, 20, 21, 22, 23, 24], or by sub-cycle control of electron excitation with respect to the THz driving field [25]. Although high harmonic spectroscopy in condensed matter has become a widely used technique, there are only a few experiments clearly separating the individual processes contributing to the emission of the coherent high-energy photons [16, 25]. Until now, it has also not been possible to measure the phase shifts of the emitted harmonic radiation induced by tailoring the driving waveform. The principle of coherent two-color optical control was initially developed to steer electron currents in semiconductors via quantum path interference by the \(\omega\)-2\(\omega\) field superposition [26, 27]. Since then, two-color coherent control has been demonstrated in many physical systems, including the control of the quantum wave function of electrons in atoms [28], electron photoemission from metals [29, 30] and HHG in atoms and solids [17, 18, 19, 20, 21, 22, 23]. In this report we demonstrate a scheme that allows us to control the electron tunneling time within each half-cycle of the driving wave with attosecond precision using a coherent superposition of an ultrashort infrared pulse with its third harmonic frequency. By controlling the mutual phase between the \(\omega\)-3\(\omega\) fields, the driving waveform changes (see Fig. 1a), leading to modulation of the amplitude and attosecond emission delays of high harmonic radiation generated in a silicon crystal. To clearly separate the HHG process from electron tunneling excitation, we simultaneously monitor the excited electron density as a function of the \(\omega\)-3\(\omega\) phase by measuring the transient reflectivity of the sample after the interaction with the \(\omega\)-3\(\omega\) fields. We observe that even a small admixture of light at the third harmonic frequency to the fundamental pulse (a ratio between light intensities of \(I_{3\omega}/I_{\omega}\approx 10^{-4}-10^{-3}\)) is sufficient to reach a high modulation visibility of both the high harmonic generation yield and the excited electron population. In our experiments we use fundamental pulses in the mid-infrared spectral region with a photon energy of 0.62 eV (wavelength of 2000 nm) and a full-width at half maximum duration of 35 fs. The third harmonic pulse (1.86 eV, 667 nm) is generated in a BBO crystal using type I phase-matching. Both pulses propagate collinearly and are overlapped in space and time at the surface of a silicon crystal (see the layout of the experimental setup in Fig. 1b; a detailed description of the setup can be found in Methods and in Extended data figure 1). Both the \(\omega\) and 3\(\omega\) pulses have the same orientation of linear polarization, along the [100] crystallographic direction of silicon. The generated harmonic radiation is collected in the reflection geometry to avoid propagation effects in the sample. Due to the strong absorption of light at photon energies above the direct band gap (3.4 eV), the collected harmonic radiation is generated in a surface layer of the sample with a thickness of only 5-100 nm, depending on the photon energy.
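The quoted phase precision translates into a time precision through the optical period; taking the 3 mrad step as a phase of the 2000 nm fundamental (an assumption about how the precision is referenced), a one-line check recovers the ~3 as figure:

```python
# Quick check: a 3 mrad phase step of the 2000 nm fundamental corresponds
# to a relative time shift of roughly 3 attoseconds.
import numpy as np
from scipy.constants import c

wavelength = 2000e-9
T = wavelength / c                          # optical period, ~6.67 fs
dphi = 3e-3                                 # wedge phase precision [rad]
print(dphi / (2 * np.pi) * T * 1e18, "as")  # -> ~3.2 as
```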
The relative phase difference between the \(\omega\) and 3\(\omega\) fields is controlled by a pair of fused silica wedges with a precision of 3 mrad, corresponding to a time shift between the two waves of 3 as. The long-term timing jitter of the setup is characterized to be 2.8 as (RMS; see Methods and Extended data figure 2 for details). We choose the combination of the fundamental frequency with its third harmonic instead of the more commonly used \(\omega\)-2\(\omega\) combination because it allows us to control the time window of electron tunneling within each half-cycle of the fundamental wave with better precision, thanks to the shorter period of the third harmonic field. Additionally, the \(\omega\)-3\(\omega\) combination does not break the time symmetry of the waveform. As a consequence, only the odd-order harmonic frequencies are observed in the high harmonic spectra generated in centrosymmetric materials. Simultaneously with the modulation of the high harmonic yield, we monitor the excited carrier population which is left in the sample after the interaction with the \(\omega\)-3\(\omega\) pulses using the transient reflectivity of an ultraviolet probe pulse (3.62 eV, blue beam in Fig. 1b; details in Methods and Extended data figure 3) incident on the sample with a delay of 0.5 ps after the \(\omega\)-3\(\omega\) pulse combination. This allows us to clearly separate the modulation of the integrated tunneling rate from the modulation of the high harmonic emission probability. Coherent control of HHG in silicon by the two-color \(\omega\)-3\(\omega\) field is experimentally demonstrated in Fig. 1c, where we plot the measured HHG spectra in the spectral region 2.8-10.5 eV as a function of the relative phase shift \(\varphi\) between the fundamental (peak electric field in silicon 1.61 \(\pm\) 0.1 GV/m) and the third harmonic fields, with a ratio between the peak field amplitudes of \(r=F_{3\omega}/F_{\omega}=(3.9\pm 0.5)\%\). The results are compared to numerical calculations using time-dependent density functional theory (TDDFT; details are described in Methods), shown in Fig. 1d and obtained with a fundamental electric field amplitude of 1.6 GV/m and a field ratio of \(r=4.3\%\). We observe a clear modulation of the spectra both in the experimental and numerical data. The oscillations of individual harmonic orders, obtained by integrating the power emitted in a spectral window of 0.1 eV around each harmonic peak, are shown in Fig. 1e both for the experimental data (solid curves) and the numerical calculations (dashed curves). The data are normalized and vertically translated for clarity. The HHG modulation is compared to the modulation of the transient reflectivity of the sample \(\Delta R/R_{0}\), shown as a solid curve in the uppermost panel of Fig. 1e. Here the dashed curve corresponds to the normalized population of electrons excited to the conduction band resulting from the TDDFT calculations (see Methods for details). These data are used to obtain an absolute calibration of the mutual phase between the \(\omega\)-3\(\omega\) fields, as the highest excited electron population is expected for \(\varphi=0\), for which the maxima of both waveforms overlap (see Fig. 1a). We observe a good quantitative agreement between the experiment and numerical calculations both for the phase shifts of the oscillation maxima of individual harmonic orders, which are marked by squares (experimental data) and stars (theory), and for the depth of modulation of the high harmonic yield.
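One way to extract such oscillation maxima and modulation depths from a measured yield-versus-phase trace is a sinusoidal fit; the single-cosine model below is an illustrative assumption, not the procedure stated in the paper.

```python
# Hypothetical sketch: phase of maximum and visibility of a harmonic
# yield S(phi) obtained from a cosine fit to mock data.
import numpy as np
from scipy.optimize import curve_fit

def model(phi, offset, amp, phi0):
    return offset + amp * np.cos(phi - phi0)

rng = np.random.default_rng(1)
phi = np.linspace(0.0, 2.0 * np.pi, 40)
S = model(phi, 1.0, 0.6, 0.5 * np.pi) + 0.02 * rng.normal(size=phi.size)

(offset, amp, phi0), _ = curve_fit(model, phi, S, p0=[1.0, 0.5, 0.0])
visibility = abs(amp) / offset   # equals (S_max - S_min)/(S_max + S_min)
print(f"phase of maximum: {phi0:.2f} rad, visibility: {visibility:.2f}")
```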
The phase of maximum generation yield differs for each harmonic order as a consequence of the dispersion of the propagating electron and hole in higher energy bands of silicon, which tailors the interference between different quantum paths contributing to the interband emission of high energy photons. Remarkably, the maximum of the harmonic generation yield is reached for the \(\omega\)-3\(\omega\) mutual phase \(\varphi\approx\pi/2\), which is shifted from the phase corresponding to the highest population of excited carriers. This observation suggests that the dominant mechanism of HHG driven by mid-infrared light in silicon is the interband polarization [31], in contrast to the case of HHG in wide band gap materials excited by near-infrared light, where the intraband current was found to be dominating [16]. The phase of the HHG modulation maxima is only weakly dependent on the ratio \(r\) between the electric field amplitudes of the 3\(\omega\) and \(\omega\) pulses (see Extended data figure 4). The modulation visibilities, defined as \((S_{\text{max}}-S_{\text{min}})/(S_{\text{max}}+S_{\text{min}})\), where \(S_{\text{min}}\) and \(S_{\text{max}}\) are the minimum and maximum HHG yields at a particular harmonic frequency, are plotted in Fig. 1f as functions of \(r\). The visibilities reach high values even for a very weak third harmonic field of only about 1% of the fundamental field, which induces almost imperceptible changes of the combined waveform. However, due to the strong nonlinearity of the HHG process, this weak waveform modulation translates to a strong modulation of the HHG signal. To qualitatively understand the observed phenomena we recall the semi-classical model of HHG in atoms, in which we assume instantaneous electron tunneling and classical propagation of a single electron in free space between the tunneling and recombination events. In the adiabatic approximation, the time-dependent potential generated by the oscillating electric field changes slowly enough to allow the quantum system to follow it. The instantaneous tunneling rate of an electron to the conduction band can be approximated using a Zener-like tunneling formula [32]: \[W(t)\propto|F(t)|\exp\left(-\frac{\pi\sqrt{m^{*}E_{\text{g}}^{3}}}{2e\hbar\left|F(t)\right|}\right). \tag{1}\] Here \(F(t)\) is the time-dependent electric field in the material and \(\hbar\) is the reduced Planck constant. In Fig. 2a we plot the time evolution of the coherent superposition of the fundamental and third harmonic fields \(F(t)=F_{\omega}\cos(\omega t)+F_{3\omega}\cos(3\omega t-\varphi)\) with the amplitudes \(F_{\omega}\)=1.6 V/nm and \(F_{3\omega}\)=0.16 V/nm (ratio \(r=F_{3\omega}/F_{\omega}\)=10%) for two values of the relative phase difference \(\varphi=1.24\pi\) (blue curve) and \(\varphi=0.24\pi\) (red curve), corresponding to weak and strong emission of high harmonic radiation, respectively. The instantaneous tunneling rate describing the probability of electron transition to the conduction band per unit time, calculated using equation (1) with parameters \(E_{\text{g}}=\)3.4 eV (the first direct band gap of silicon) and \(m^{*}=\)0.45 \(m_{0}\) (the reduced mass corresponding to the lowest electron and hole bands at the \(\Gamma\) point), is shown as blue and red shaded areas. 
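A minimal numerical sketch of equation (1) evaluated over the two-color waveform (our own illustration; the parameter values are those quoted above, and the overall proportionality constant is left undetermined):

```python
import numpy as np

# Instantaneous Zener-like tunneling rate of eq. (1), up to a constant factor.
e, hbar, m0, c = 1.602e-19, 1.055e-34, 9.109e-31, 2.998e8   # SI units
E_g = 3.4 * e                 # first direct band gap of silicon (J)
m_star = 0.45 * m0            # reduced electron-hole mass (kg)
F_w, r = 1.6e9, 0.10          # field amplitude (V/m) and ratio, as in Fig. 2a
omega = 2 * np.pi * c / 2000e-9

t = np.linspace(-0.5, 0.5, 4001) * 2 * np.pi / omega        # one cycle

def tunneling_rate(t, phi):
    F = F_w * (np.cos(omega * t) + r * np.cos(3 * omega * t - phi))
    F = np.where(np.abs(F) < 1e3, 1e3, F)                   # avoid F = 0
    return np.abs(F) * np.exp(-np.pi * np.sqrt(m_star * E_g ** 3)
                              / (2 * e * hbar * np.abs(F)))

for phi in (1.24 * np.pi, 0.24 * np.pi):                    # cases of Fig. 2a
    W = tunneling_rate(t, phi)
    print(f"phi = {phi / np.pi:.2f} pi : peak rate (arb. units) = {W.max():.3e}")
```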
The classical trajectories of the electrons \(x_{\text{e}}(t)\), calculated by solving the equation of motion \(\ddot{x}_{\rm e}=-eF/m\) with the effective mass of the electron \(m\)=0.2 \(m_{0}\) (effective mass of the light electron band at the \(\Gamma\) point) for emission at different times within one half-cycle of the fundamental wave, are shown as curves in the lower section of Fig. 2a. The color scale corresponds to the total energy of the recombining electron and hole \(E_{\rm tot}=1/2m\dot{x}_{\rm e}^{2}+1/2m\dot{x}_{\rm h}^{2}+E_{\rm g}\) (we assume light holes with \(m\)=0.2 \(m_{0}\) with trajectories \(x_{\rm h}(t)\)), while the line thickness is proportional to the tunneling probability of the electron corresponding to the particular trajectory. There are two important effects which may contribute to the modulation of the HHG yield. The first effect is a large difference between the maxima of the instantaneous tunneling rates, which are related to the modulation of the electric field amplitude of the combined \(\omega\)-3\(\omega\) waveform. The second effect contributing to the HHG modulation is the time shift \(\delta t\) of the maximum of the tunneling window. The time shift has important implications for the probability of the electron-hole recombination. When the HHG is driven by a single-frequency field, only the electrons generated during the second half of each tunneling window can recombine and emit photons, while the ones created in the first half of the tunneling window propagate in the laser field away from the original position. The time shift of the tunneling window to later times with respect to the fundamental wave thus increases the number of recombining electrons contributing to high harmonic emission, while the shift to earlier times leads to the opposite effect. This model suggests that the maximum of the high harmonic yield is reached for a different value of \(\varphi\) than the maximum of the excited carrier population, which agrees with our experimental observations shown in Fig. 1e. While this model captures most of the features observed in the experiments, it cannot describe the complex electron dynamics in silicon involving the intraband currents and interband transitions between multiple bands. To get a deeper insight into the coherent highly nonlinear response of silicon we analyze the nonlinear current obtained from TDDFT simulations. By applying a high-pass Fourier filter with a cut-off frequency of 4\(\omega\) to the calculated total time-dependent current (only 5th and higher harmonics are taken into account) we obtain the instantaneous current oscillating at high harmonic frequencies. The radiation emitted by an accelerating charge is proportional to the time derivative of the current, which gives us the time profile of the radiated instantaneous high harmonic power. In Figs. 2b,c we plot the combined \(\omega\)-3\(\omega\) waveforms with the field ratio of \(r\)=5% for two different values of the phase difference \(\varphi\)=1.5\(\pi\) (red curve in Fig. 2b) and \(\varphi\)=0.5\(\pi\) (red curve in Fig. 2c), along with the calculated instantaneous power of the emitted high harmonic radiation at photon energies above 2.8 eV (blue shaded areas). The field of the fundamental pulse is shown for comparison as dashed curves in Figs. 2b,c. The time shifts of the maxima of the waveforms in these two cases, \(\delta t=-135\) as (see the inset of Fig. 2b) and \(\delta t=135\) as (see the inset of Fig. 
2c), respectively, lead to dramatic changes of the emission probability even with a very weak perturbation of the fundamental waveform by the third harmonic field. We observe that the maxima and minima of the HHG yield correspond to approximately the same values of the excited electron population, emphasizing the role of the time shift of the electron tunneling window, which is thus evidenced to be the dominant mechanism causing the modulation of HHG in silicon. When the time of electron excitation shifts with respect to the fundamental wave, not only the amplitude but also the phase of the generated harmonic radiation is expected to shift. To measure the phase delays of the emitted high harmonic radiation we apply spectral interferometry using signal and reference high harmonic fields [33, 34]. The reference field is generated by an infrared pulse phase-locked to the \(\omega\)-3\(\omega\) combination, which arrives at the sample before the two-color waveform (the layout of the experimental setup is shown in Fig. 3a; the details of the experimental setup are described in Methods and Extended data figure 5). The time separation of the signal and reference fields of about 170 fs leads to spectral interference in the harmonic spectra, an example of which is shown in Fig. 3b for the 5th to 9th harmonic frequencies. By monitoring the shifts of the spectral interference fringes we measure the relative shifts of the emission phase of harmonic radiation as a function of the mutual phase between the \(\omega\)-3\(\omega\) fields. We note that a real carrier population is generated in the sample during the HHG by the reference pulse. While this can induce dephasing of carriers during coherent HHG driven by the \(\omega\)-3\(\omega\) pulses, which would decrease the amplitude of the generated harmonics, it is not expected to influence the phase of the emitted photons. In Fig. 3c we show the measured high harmonic emission delays (field ratio \(r\)=5.5%) compared to the delays obtained from the numerical TDDFT simulations (\(r\)=5%) in the time window around the maxima of the harmonic emission shown in Fig. 3b. We observe that close to the maximum of the emission yield, the emission delay scales approximately linearly with the time delay \(\delta t\) between the two-color fields, with different slopes for individual harmonic frequencies. The emission delays of high harmonic photons relative to the fundamental wave thus depend on the photon energy and cannot be understood by considering the shift of the electron tunneling time alone. Instead, the emission phase is determined by the combination of the tunneling time and the quantum mechanical phase acquired by the electron-hole wavepacket during its coherent dynamics between tunneling and recombination, where multiple bands are involved in the high harmonic generation process [31, 35]. The measured slopes \(b\) of the emission delays of the 5th to 9th harmonics, obtained by fitting the data shown in Fig. 3c by a linear function \(y=a+bx\), are plotted in Fig. 3d as a function of the \(\omega\)-3\(\omega\) field ratio \(r\). The observed smaller delays of higher-order harmonics can be qualitatively understood in the framework of the semi-classical electron dynamics shown in Fig. 2a. The electrons propagating along trajectories corresponding to the highest recombination energy are generated in a short time window within one half-cycle of the fundamental wave. 
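The waveform time shifts quoted above can be reproduced with a few lines of numerics (our own estimate, not the paper's analysis code); for small \(r\) the shift of the field maximum is approximately \(3r\sin\varphi/\omega\), i.e. on the order of 100-150 as for \(r\approx 5\%\):

```python
import numpy as np

# Shift of the maximum of the combined omega-3omega waveform relative to the
# fundamental wave, for the parameters of Figs. 2b,c.
c = 2.998e8
omega = 2 * np.pi * c / 2000e-9          # fundamental angular frequency
r = 0.05                                 # field ratio F_3w/F_w

t = np.linspace(-0.25, 0.25, 200001) * 2 * np.pi / omega

for phi in (1.5 * np.pi, 0.5 * np.pi):
    F = np.cos(omega * t) + r * np.cos(3 * omega * t - phi)
    dt = t[np.argmax(F)]
    print(f"phi = {phi / np.pi:.1f} pi : peak shift = {dt * 1e18:+.0f} as")
# Gives shifts of roughly -150 as and +150 as, consistent in sign and
# magnitude with the ~135 as quoted in the text.
```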
The recombination time of these high energy electrons practically does not shift with the relative phase \(\varphi\). However, the electrons with lower recombination energies are generated in two distinct time windows corresponding to short and long trajectories. Photons within the intermediate energy range are thus emitted at two distinct times within each half-cycle of the \(\omega\) field (dark green curves in Fig. 2a), with different amplitudes and emission phases. The change of the relative phase shift \(\varphi\) between the \(\omega\)-3\(\omega\) fields leads to strong changes of the recombination probability of particular trajectories, which in turn causes the delay to oscillate between the values corresponding to the short and long trajectories. The larger delays observed in the case of the 5th harmonic frequency can also be influenced by the fact that its photon energy is slightly below the direct band gap of undriven silicon. The intraband current may thus play a more important role in this case than for the higher harmonics, where the dominance of the interband generation has been confirmed by recent experiments [31]. The combination of coherent two-color control and a phase-resolved detection scheme with attosecond resolution introduced here brings opportunities for gaining a deeper understanding of HHG in solids. In particular, we study the relation between the tunneling time, which is controlled within each half-cycle of the driving wave by adding a weak perturbation at the third harmonic frequency, and the amplitude and phase of the emitted high-energy photons. A direct comparison between the modulation of high harmonic emission and the modulation of the real carrier population induced by the \(\omega\)-3\(\omega\) fields, demonstrated here, allows us to separate the processes of electron excitation from the nonlinear coherent dynamics leading to HHG. Similar schemes can also be applied in other physical systems, in which the strong-field nonlinear optical interactions induce coherent sub-cycle electron dynamics [36]. Further, the \(\omega\)-3\(\omega\) combination may find applications in atomic HHG for enhancement of the emission yield or for advanced gating techniques combining the coherent control with dynamic phase-matching [37]. ## References * [1] Corkum, P. B. & Krausz, F. Attosecond science. _Nat. Phys._. **3**, 381-387 (2007). * [2] Krausz, F. & Ivanov, M. Attosecond physics. _Rev. Mod. Phys._. **81**, 163-234 (2009). * [3] Levesque, J., Zeidler, D., Marangos, J. P., Corkum, P. B. & Villeneuve, D. M. High Harmonic Generation and the Role of Atomic Orbital Wave Functions. _Phys. Rev. Lett._. **98**, 183903 (2007). * [4] Ghimire, S. et al. Observation of high-order harmonic generation in a bulk crystal. _Nat. Phys._. **7**, 138-141 (2011). * [5] Luu, T. T. et al. Extreme ultraviolet high-harmonic spectroscopy of solids. _Nature_. **521**, 498-502 (2015). * [6] Kruchinin, S. Yu., Krausz, F. & Yakovlev, V. Colloquium: Strong-field phenomena in periodic systems. _Rev. Mod. Phys._. **90**, 021002 (2018). * [7] Keldysh, L. V. Ionization in the Field of a Strong Electromagnetic Wave. _Soviet Physics JETP_. **20**, 1307 (1965). * [8] McPherson, A. et al. Studies of multiphoton production of vacuum-ultraviolet radiation in the rare gases. _J. Opt. Soc. Am. B_. **4**, 595-601 (1987). * [9] Corkum, P. B. Plasma perspective on strong field multiphoton ionization. _Phys. Rev. Lett._. **71**, 1994-1997 (1993). * [10] Ghimire, S. & Reis, D. A. High-harmonic generation from solids. _Nat. 
Phys._. **15**, 10-16 (2019). * [11] Klemke, N. et al. Polarization-state-resolved high-harmonic spectroscopy of solids. _Nat. Commun._. **10**, 1319 (2019). * [12] You, Y. S., Reis, D. A. & Ghimire, S. Anisotropic high-harmonic generation in bulk crystals. _Nat. Phys._. **13**, 345-349 (2017). * [13] Yoshikawa, N., Tamaya, T. & Tanaka, K. High-harmonic generation in graphene enhanced by elliptically polarized light excitation. _Science_. **356**, 736-738 (2017). * [14] You, Y. S. et al. Laser waveform control of extreme ultraviolet high harmonics from solids. _Opt. Lett._. **42**, 1816-1819 (2017). * [15] Schubert, O. et al. Sub-cycle control of terahertz high-harmonic generation by dynamical Bloch oscillations. _Nat. Photonics_. **8**, 119-123 (2014). * [16] Garg, M. et al. Multi-petahertz electronic metrology. _Nature_. **538**, 359-363 (2016). * [17] Watanabe, S., Kondo, K., Nabekawa, Y., Sagisaka, A. & Kobayashi, Y. Two-Color Phase Control in Tunneling Ionization and Harmonic Generation by a Strong Laser Field and Its Third Harmonic. _Phys. Rev. Lett._. **73**, 2692-2695 (1994). * [18] Vampa, G. et al. Linking high harmonics from gases and solids. _Nature_. **522**, 462-464 (2015). * [19] Vampa, G. et al. All-Optical Reconstruction of Crystal Band Structure. _Phys. Rev. Lett._. **115**, 193603 (2015). * [20] Orenstein, G. et al. Shaping electron-hole trajectories for solid-state high harmonic generation control. _Opt. Express_. **27**, 37835-37845 (2019). * [21] Dudovich, N. et al. Measuring and controlling the birth of attosecond XUV pulses. _Nat. Phys._. **2**, 781-786 (2006). * [22] Mitra, S. et al. Suppression of individual peaks in two-colour high harmonic generation. _J. Phys. B-At. Mol. Opt. Phys._. **53**, 134004 (2020). * [23] Uzan, A. J. et al. Attosecond spectral singularities in solid-state high-harmonic generation. _Nat. Photonics_. **14**, 183-187 (2020). * [24] Severt, T., Tros, J., Kolliopoulos, G., Ben-Itzhak, I. & Trallero-Herrero, C. A. Enhancing high-order harmonic generation by controlling the diffusion of the electron wave packet. _Optica_. **8**, 1113-1121 (2021). * [25] Langer, F. et al. Lightwave-driven quasiparticle collisions on a subcycle timescale. _Nature_. **533**, 225-229 (2016). * [26] Hache, A. et al. Observation of Coherently Controlled Photocurrent in Unbiased, Bulk GaAs. _Phys. Rev. Lett._. **78**, 306-309 (1997). * [27] Dupont, E., Corkum, P. B., Liu, H. C., Buchanan, M. & Wasilewski, Z. R. Phase-Controlled Currents in Semiconductors. _Phys. Rev. Lett._. **74**, 3596-3599 (1995). * [28] Weinacht, T., Ahn, J. & Bucksbaum, P. Controlling the shape of a quantum wavefunction. _Nature_. **397**, 233-235 (1999). * [29] Forster, M. et al. Two-Color Coherent Control of Femtosecond Above-Threshold Photoemission from a Tungsten Nanotip. _Phys. Rev. Lett._. **117**, 217601 (2016). * [30] Li, A., Pan, Y., Dienstbier, P. & Hommelhoff, P. Quantum Interference Visibility Spectroscopy in Two-Color Photoemission from Tungsten Needle Tips. _Phys. Rev. Lett._. **126**, 137403 (2021). * [31] Suthar, P., Trojanek, F., Maly, P., Derrien, T. J.-Y. & Kozak, M. Role of Van Hove singularities and effective mass anisotropy in polarization-resolved high harmonic spectroscopy of silicon. _Commun. Phys._. **5**, 288 (2022). * [32] Kane, E. O. Zener tunneling in semiconductors. _J. Phys. Chem. Solids_. **12**, 181-188 (1960). * [33] Lu, J., Cunningham, E. F., You, J. S., Reis, D. A. & Ghimire, S. Interferometry of dipole phase in high harmonics from solids. _Nature Photonics_. 
**13**, 96-100 (2019). * [34] Uchida, K. & Tanaka, K. High harmonic interferometer: For probing sub-laser-cycle electron dynamics in solids. Preprint at [https://arxiv.org/abs/2304.14704](https://arxiv.org/abs/2304.14704) (2023). * [35] Klemke, N. et al. Polarization-state-resolved high-harmonic spectroscopy of solids. _Nat. Commun._. **10**, 1319 (2019). * [36] Dienstbier, P. et al. Tracing attosecond electron emission from a nanometric metal tip. _Nature_. **616**, 702-706 (2023). * [37] Thomann, I. et al. Characterizing isolated attosecond pulses from hollow-core waveguides using multi-cycle driving pulses. _Opt. Express_. **17**, 4611-4633 (2009). ## Funding: Czech Science Foundation (project GA23-06369S), Charles University (UNCE/SCI/010, SVV-2020-260590, PRIMUS/19/SCI/05, GAUK 349921). Funded by the European Union (ERC, eWaveShaper, 101039339). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Council Executive Agency. Neither the European Union nor the granting authority can be held responsible for them. T. J.-Y. D. was supported by the European Regional Development Fund, the state budget of the Czech Republic (project BIATRI: CZ.02.1.01/0.0/0.0/15 003/0000445) and the Ministry of Education, Youth and Sports of the Czech Republic through the e-INFRA CZ (ID:90140). ## Contribution M.K. conceived and supervised the study. A.G., M.K. and P.S. performed the experiments. A.G. and M.K. processed and interpreted the experimental data. T.J.-Y.D. performed the numerical simulations. M.K. wrote the manuscript with input from all co-authors. ## Competing interests The authors declare that they have no competing interests. ## Data availability All data supporting the findings of this study are available from the corresponding authors upon reasonable request. **Fig. 1. Modulation of high harmonic generation yield by coherent superposition of \(\omega\)-3\(\omega\) fields.** (**a**) Time evolution of the electric field of a combined \(\omega\)-3\(\omega\) waveform for different values of the mutual phase difference \(\varphi\) within one period of the fundamental wave. (**b**) Layout of the experimental setup. (**c**) Measured and (**d**) calculated dependence of high harmonic spectra generated in silicon on the mutual phase \(\varphi\) between the \(\omega\)-3\(\omega\) fields. The field ratios are \(r=(3.9\pm 0.5)\%\) in (**c**) and \(r=4.3\%\) in (**d**). (**e**) Experimental (solid curves) and numerically calculated (dashed curves) modulation of individual harmonic orders (panels labelled 5-15) and the transient reflectivity signal of the probe pulse (top panel). The black points label the values of the mutual phase corresponding to the maxima of the measured (squares) and theoretical (stars) modulation for individual harmonics. The HHG data were obtained by integrating a spectral window with a width of 0.1 eV around the maxima of the respective harmonic orders. (**f**) Measured visibility of the modulation of the harmonic generation yield for the 5th-15th harmonics as a function of the ratio of the \(\omega\)-3\(\omega\) field amplitudes \(r\). **Fig. 2. Time-dependence of high harmonic emission driven by \(\omega\)-3\(\omega\) fields.** (**a**) Semi-classical model showing the time evolution of the electric field of the combined \(\omega\)-3\(\omega\) waveform for two values of the mutual phase difference \(\varphi=1.24\pi\) (blue curve) and \(\varphi=0.24\pi\) (red curve). 
The instantaneous electron tunneling rate calculated using equation (1) is shown as blue and red shaded peaks. The classical trajectories of electrons excited at different times are shown in the lower part of the figure, with the color scale corresponding to the energy at the moment of recombination and the line thickness proportional to the tunneling probability associated with the particular trajectory. (**b**) The instantaneous power of high harmonic emission calculated using time-dependent density functional theory (blue shaded area) for the relative phase \(\varphi\)=1.5\(\pi\) between the \(\omega\)-3\(\omega\) fields and for the field ratio of \(r\)=4.3%. The time evolution of the electric field of the combined waveform (red solid curve) is compared with the electric field of the fundamental pulse (black dashed curve). Inset: The peak of the waveform shifts by \(\delta t\)=-135 as. (**c**) The same as in (**b**) with the phase difference of \(\varphi\)=0.5\(\pi\), leading to the positive time shift of the field maximum of \(\delta t\)=135 as. Figure 3: **Attosecond emission delays of high harmonic radiation induced by modulating the relative phase of \(\omega\)-3\(\omega\) fields generated in a silicon crystal.** (**a**) Layout of the experimental setup for spectral interferometry of high harmonic fields. (**b**) Spectral interference of the 5th, 7th and 9th harmonics measured as a function of the relative phase \(\varphi\) between the \(\omega\)-3\(\omega\) pulses. (**c**) Measured (solid curves, \(r\)=5.5%) and calculated (dashed curves, \(r\)=5%) attosecond emission delays obtained from the data shown in (**b**) as a function of the time delay of the 3\(\omega\) field with respect to the \(\omega\) field. The shaded areas show the standard deviation of the delays resulting from the least-squares fits of the data using equation (2). (**d**) Slopes of the measured emission delays of the 5th (black squares), 7th (red circles) and 9th (blue triangles) harmonics as a function of the field ratio \(r\), obtained by fitting the data shown in (**c**) using a linear function in the time interval of 400 as around the modulation maxima. Error bars correspond to the standard deviation of the slope obtained by least-squares fits. ## Methods ### Experimental setup The experiments demonstrating coherent two-color control of high harmonic generation in silicon are performed using pulses generated in a noncollinear optical parametric amplifier with subsequent difference frequency generation (NOPA-DFG; the setup is described in detail in [38]) and pulses at its third harmonic frequency. The NOPA-DFG setup is pumped by a solid-state femtosecond laser system Pharos SP2-6W (Light Conversion) with an ytterbium-doped active medium. The NOPA-DFG output pulses have a central wavelength of 2000 nm and a pulse duration at the sample of 35 fs (FWHM of instantaneous power). The coherent \(\omega\)-3\(\omega\) pulse combination is generated in the setup shown in Extended data figure 1. The beam passes through a half-wave plate to control the direction of linear polarization of the fundamental pulse. Subsequently, the beam is focused using an off-axis parabolic mirror with a focal distance of 15 cm onto a BBO crystal, where the pulses at the third harmonic frequency 3\(\omega\) are generated. 
We use a phase-matching angle of \(\theta\)=27.2\({}^{\circ}\) for direct third harmonic generation (\(oooe\)-type phase-matching), leading to a vertically polarized 3\(\omega\) pulse (polarized along the projection of the optical axis of the BBO crystal onto the plane perpendicular to the light propagation). In the first experiment, the half-wave plate is adjusted such that the fundamental polarization is horizontal. The pulse thus propagates as an ordinary ray and we obtain only one replica of the fundamental pulse after the BBO. When the half-wave plate is rotated to 22.5\({}^{\circ}\), the polarization incident on the BBO crystal has horizontal and vertical components of equal amplitude. As a consequence of the birefringence of the BBO crystal, we obtain two orthogonally polarized pulses, which are delayed due to the different group indexes of the ordinary and the extraordinary rays. After the BBO crystal, the two collinear beams at the fundamental and the third harmonic frequencies are collimated by a second off-axis parabolic mirror with a focal length of 15 cm. The relative phase \(\varphi\) between the \(\omega\) and 3\(\omega\) pulses is controlled using a pair of fused silica wedges with an apex angle of \(\alpha\)=4\({}^{\circ}\). One of the wedges is placed on a translation stage. The phase shift of the third harmonic field with respect to the fundamental pulse resulting from the propagation in the wedges over a distance \(\Delta L\) can be expressed as \(\varphi=3\omega\Delta L\left(n_{3\omega}-n_{\omega}\right)/c\), where \(c\) is the speed of light in vacuum and \(n_{\omega}\)=1.4381 and \(n_{3\omega}\)=1.4561 are the refractive indexes of fused silica at the frequencies \(\omega\) and \(3\omega\), respectively. One period of the high harmonic yield modulation corresponds to a shift of the relative \(\omega\)-3\(\omega\) phase \(\varphi\) by 2\(\pi\), which gives \(\Delta L=\)37 \(\mu\)m. The calculated shift of the wedge needed to change the thickness of the material by \(\Delta L\) is \(\Delta x=\Delta L/\tan(\alpha)=0.59\) mm, which matches well the experimentally determined value of 0.574\(\pm\)0.005 mm. While the refractive indexes of fused silica at the frequencies \(\omega\) and 3\(\omega\) differ strongly, the difference between the group velocities of the two pulses in the wedges is much smaller (the group indexes are \(n_{g,\omega}\)=1.4673 and \(n_{g,3\omega}\)=1.4732), leading to a small relative shift of the pulse envelopes when changing the relative phase by several periods of the \(3\omega\) field. To compensate for the total group delay that the \(\omega\) and 3\(\omega\) pulses accumulate between the first BBO crystal and the sample, we let the beam pass through a second BBO crystal with the optical axis cut at an angle of \(\theta\)=50\({}^{\circ}\). Due to the negative uniaxial birefringence of BBO, the ordinary ray at frequency \(\omega\) propagates more slowly in the crystal than the extraordinary ray at 3\(\omega\). The beam is finally focused on the sample surface by an off-axis (90\({}^{\circ}\)) silver-coated parabolic mirror with a focal length of \(f\)=35 mm. The spot sizes of the two beams on the sample are \(w_{\omega}=15\)\(\mu\)m and \(w_{3\omega}=8\)\(\mu\)m. The dependence of the nonperturbative HHG yield in silicon on the incident light intensity of the fundamental pulses was previously measured to be cubic [31]. 
For a Gaussian profile of the fundamental beam, the harmonics are thus produced from an area with a radius reduced by a factor of \(\sqrt{3}\), \(w_{HHG}=w_{\omega}/\sqrt{3}=8.7\)\(\mu\)m, which is well matched to the size of the third harmonic beam. To set the polarizations of all the pulses to be linear along the same direction, we place a thin polarizer (Thorlabs, LPNIRA) between the final focusing parabolic mirror and the sample. The high harmonic radiation is collimated by a UV fused silica lens with a focal length of 100 mm. The spectra are detected using a grating spectrograph (Andor, Shamrock 163) with a 600 lines/mm grating blazed for 300 nm. The spectrometer is equipped with a cooled CCD camera (Andor iDUS 420). Prior to entering the spectrometer, the light is spectrally dispersed in the vertical direction by a pair of fused silica prisms. This allows filtering out most of the radiation at wavelengths above 500 nm before the high harmonics enter the spectrometer. Because the dispersion is applied in the vertical direction, all the harmonics are transmitted through the vertical entrance slit of the spectrometer. The 11th-15th harmonics have photon energies higher than 6 eV and are strongly absorbed in air. The modulation of these harmonic orders is thus measured in vacuum using a vacuum ultraviolet spectrometer (EasyLight, H+P Spectroscopy) combined with a microchannel plate detector. Due to the inline setup of the nonlinear interferometer, in which both beams propagate through the same beam paths, the drifts of the relative phase difference \(\varphi\) between the \(\omega\) and 3\(\omega\) pulses are negligible, and the timing jitter of the two fields is only a few attoseconds over the measurement time of one hour (see the measured modulation of the 5th harmonic frequency over a period of 60 minutes shown in Extended data figure 2). The carrier population excited in the sample by the coherent superposition of the \(\omega\)-3\(\omega\) fields is monitored by measuring the transient reflectivity of the sample. We use an independent probe pulse with a wavelength of 343 nm and a photon energy of 3.62 eV (the third harmonic of the Pharos laser output at 1030 nm), which is incident on the sample with a controlled time delay with respect to the coherent \(\omega\)-3\(\omega\) pulse combination. The high photon energy of the probe is selected due to its short penetration depth into silicon of only a few nanometers, which prevents accumulation of signal from different depths; such accumulation would be modified by the fact that the phase velocities of the pulses at the \(\omega\) and 3\(\omega\) frequencies in silicon differ strongly. The second reason for selecting a short probe wavelength is the possibility of focusing the probe beam to a smaller spot size than the \(\omega\) and 3\(\omega\) beams, which allows probing a region of the sample with an approximately homogeneous spatial distribution of the excited carriers. In this experiment, the pump \(\omega\)-3\(\omega\) combination propagates through an optical chopper, which modulates the beam. The probe beam at 3.62 eV is reflected from the sample and its power is detected by a silicon photodiode. The electronic signal is measured using a lock-in amplifier (SR830, Stanford Research Systems) at the frequency of the optical chopper. The measured transient reflectivity dynamics is shown in Extended data figure 3 as a function of the time delay between the carrier-exciting \(\omega\)-3\(\omega\) pulses and the ultraviolet probe pulse. 
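The effective source size used at the beginning of this section follows from a simple property of Gaussian beams: an \(n\)th-order yield profile is again a Gaussian, with its radius reduced by \(\sqrt{n}\). A quick check (our own arithmetic):

```python
import numpy as np

# n-th order nonlinear signal from a Gaussian beam: I(rho) ~ exp(-2 rho^2/w^2)
# implies I^n ~ exp(-2 rho^2 / (w/sqrt(n))^2), i.e. radius reduced by sqrt(n).
w_omega = 15.0                     # fundamental spot radius on the sample (um)
n = 3                              # measured cubic HHG yield dependence [31]
print(f"w_HHG = {w_omega / np.sqrt(n):.1f} um")   # ~8.7 um, close to w_3w = 8 um
```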
The excited carrier population is linearly proportional to the transient reflectivity change shortly after the excitation. To observe the oscillations of the carrier population as a function of the relative phase \(\varphi\) between the \(\omega\)-3\(\omega\) fields, we set the time delay of the probe to 0.5 ps. ### Data processing The measured high harmonic spectra are not compensated for the absolute spectral efficiency of the detection setup. The absolute calibration is not required because we are only interested in the relative changes of the HHG yield of individual harmonic frequencies as a function of the experimental parameters (the mutual phase difference between the \(\omega\)-3\(\omega\) fields and the ratio of the field amplitudes \(r\)). The generation yield at each harmonic frequency is obtained by integrating a spectral window with a width of 0.1 eV around the spectral peak of the particular harmonic. The spectral interferometry data shown in Fig. 3b are processed by fitting the peaks of the individual harmonics in each spectrum, corresponding to specific \(\omega\)-3\(\omega\) phase differences \(\varphi\), using the function: \[f(\hbar\omega)=A_{1}\text{exp}\left[-\frac{(\hbar\omega-\hbar\omega_{HHG})^{2}}{2\Delta\omega_{HHG}^{2}}\right]+A_{2}\text{exp}\left[-\frac{(\hbar\omega-\hbar\omega_{HHG})^{2}}{2\Delta\omega_{HHG}^{2}}\right]\text{sin}\left(\omega\tau+\Delta\varphi_{HHG}\right)+A_{3} \tag{2}\] Here \(\omega_{HHG}\) and \(\Delta\omega_{HHG}\) are the central frequency and the spectral width of the particular harmonic peak. The fitting parameters are the delay between the two harmonic pulses \(\tau\), their relative phase shift \(\Delta\varphi_{HHG}\), and the coefficients \(A_{1}\), \(A_{2}\) and \(A_{3}\), which correspond to the amplitude of the slowly varying Gaussian envelope (\(A_{1}\)), the amplitude of the spectral interference (\(A_{2}\)) and the constant background (\(A_{3}\)). The data are fitted in two steps. First, the time delay \(\tau\) is determined from the data with the strongest spectral interference, shown in Extended data figure 5. In the second step, \(\tau\) is kept constant and the relative phase shift of the harmonic fields \(\Delta\varphi_{HHG}\) is obtained for different values of the relative phase between the driving \(\omega\)-3\(\omega\) fields. The relative emission delays shown in Fig. 3c are calculated from the relative phase shifts of the individual harmonic peaks as \(\delta t_{HHG}=\Delta\varphi_{HHG}/\omega_{HHG}\), where \(\omega_{HHG}\) is the central frequency of the particular harmonic. ### Numerical calculations using time dependent density functional theory (TDDFT) #### Ground state preparation The silicon (100) crystal is prepared using a primitive cell made of two silicon atoms, along with non-orthogonal periodic boundary conditions, as implemented in _Octopus_ [39]. The lattice constant is chosen equal to the experimental value (\(a=5.431\) angstroms) [40]. The inter-atomic potential is described using the method of norm-conserving _ab-initio_ pseudo-potentials [41]. Real space is meshed regularly using a step of \(\delta x=0.23\) angstroms, and momentum space is meshed uniformly using a grid of \(k=40\times 40\times 40\), repeated 4 times, centered at the points X and L. The ground state is prepared using the Tran-Blaha functional (TB09) [42]. The convergence of the ground state was verified by comparison with the available literature on more precise methods [43]. 
The resulting direct band gap of silicon at the \(\Gamma\) point is found to be \(E_{\text{g}}\)=3.04 eV. #### Time-dependent simulations and control of convergence The time-evolved Kohn-Sham (KS) states are prepared by solving the KS equation expressed in the velocity gauge: \[\left[\frac{1}{2m_{e}}\left(-i\hbar\mathbf{\nabla}_{\mathbf{r}}+\frac{\left|e\right|}{c}\mathbf{A}\left(t\right)\right)^{2}+\hat{v}_{\text{ion}}\left(\mathbf{r}\right)+\hat{v}_{\text{H}}\left[n\left(\mathbf{r},t\right)\right]\left(\mathbf{r}\right)+\hat{v}_{\text{xc}}\left[n\left(\mathbf{r},t\right)\right]\left(\mathbf{r}\right)\right]\psi_{n,\mathbf{k}}\left(\mathbf{r},t\right)=i\hbar\frac{\partial}{\partial t}\psi_{n,\mathbf{k}}\left(\mathbf{r},t\right). \tag{3}\] Here the vector potential \(\mathbf{A}(t)\) describing the laser pulse is introduced using the dipole approximation in the minimal coupling formulation. In Eq. (3), \(i\) is the imaginary unit, \(\hbar\) is the reduced Planck constant, \(m_{e}\) is the electron rest mass, \(\mathbf{\nabla}_{\mathbf{r}}\) is the real-space gradient operator, \(\left|e\right|\) is the elementary charge of the electron and \(c\) is the speed of light. In the Coulomb gauge, the vector potential \(\mathbf{A}\left(t\right)\) in the dipole approximation is related to the electric field \(\mathbf{F}\left(t\right)\) via the relation \[\mathbf{A}\left(t\right)=-c\int_{-\infty}^{t}\mathbf{F}\left(t^{\prime}\right)\;dt^{\prime}. \tag{4}\] In Eq. (3), \(\hat{v}_{\text{ion}}\left(\mathbf{r}\right)\) represents the potential of the atomic lattice. Note that the non-local term originating from the pseudo-potential is not detailed, for simplicity. \(\hat{v}_{\text{H}}\) denotes the Hartree potential, corrected by the exchange-correlation potential denoted \(\hat{v}_{\text{xc}}\). The incident wavelengths are normalized to the band-gap energy obtained using the TB09 functional (see below). The orientation of the linearly polarized electric field is set along the \(\left(100\right)\) direction of the cubic crystal. From the time-evolved Kohn-Sham orbitals \(\psi_{n,\mathbf{k}}\left(\mathbf{r},t\right)\), the time-dependent electrical current \(\mathbf{J}\left(t\right)\) is computed as indicated in Ref. [44]. The electric field of the laser pulses at frequencies \(\omega\) and \(3\omega\) used in the numerical simulations can be written as: \[\vec{F}(t)=\vec{F}_{\omega}\times\text{sin}\left(\omega t+\varphi_{0,\omega}\right)\times f\left(t\right)+\vec{F}_{3\omega}\times\text{sin}\left(3\omega t+\varphi_{0,3\omega}-\varphi\right)\times f\left(t\right), \tag{5}\] where: \[f\left(t\right)=\text{sin}^{4}\left(\frac{\pi}{2}\frac{t-t_{0}-\tau_{\text{corr}}}{\tau_{\text{corr}}}\right)\times H\left(t-t_{0}+\tau_{\text{corr}}\right)\left[1-H\left(t-t_{0}-\tau_{\text{corr}}\right)\right]. \tag{6}\] Here the correction of the FWHM for the \(\text{sin}^{4}\) envelope can be expressed as: \[\tau_{\text{corr}}=\frac{4\tau_{\text{exp}}}{\pi}\arcsin\left(2^{-1/4}\right). \tag{7}\] \(\tau_{\text{exp}}\)=35 fs is the experimental pulse duration and \(H(x)\) is the Heaviside step function. Note that a fourth-power sine envelope is commonly employed to reduce the spurious oscillations originating from the Fourier transformation upon computation of the high-harmonic spectrum [31, 45]. 
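A short sketch of the driving field defined by equations (5)-(7) (our own transcription; the carrier-envelope offsets \(\varphi_{0,\omega}\) and \(\varphi_{0,3\omega}\) are set to zero here, and the experimental rather than the rescaled wavelength is used):

```python
import numpy as np

# Two-color driving field of eqs. (5)-(7): carrier waves under a sin^4
# envelope whose FWHM matches the experimental 35 fs pulse duration.
c = 2.998e8
omega = 2 * np.pi * c / 2000e-9          # experimental fundamental frequency
tau_exp = 35e-15                         # experimental FWHM duration (s)
tau_corr = 4 * tau_exp / np.pi * np.arcsin(2 ** -0.25)   # eq. (7)
F_w, r, phi, t0 = 1.6e9, 0.043, 0.5 * np.pi, 0.0

def envelope(t):
    """sin^4 envelope of eq. (6), nonzero only for |t - t0| <= tau_corr."""
    u = t - t0
    inside = np.abs(u) <= tau_corr       # the product of Heaviside factors
    return np.where(inside, np.sin(np.pi / 2 * (u - tau_corr) / tau_corr) ** 4, 0.0)

def field(t):
    """Combined field of eq. (5) with the phi_0 terms set to zero."""
    return F_w * envelope(t) * (np.sin(omega * t)
                                + r * np.sin(3 * omega * t - phi))

t = np.linspace(-2 * tau_corr, 2 * tau_corr, 20001)
print(f"tau_corr = {tau_corr * 1e15:.1f} fs, peak field = {field(t).max():.2e} V/m")
```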
### Normalization of pulse frequency to band gap energy For most materials, density functional theory gives systematically lower values of the band gap compared with the experimental values. For the HHG process, an important parameter is the ratio between the driving photon energy and the minimum band gap. As shown in [31], the driving frequency can be normalized to the reduced band gap provided by DFT [43] to fulfill the relation \(\hbar\omega_{\text{TB09}}/E_{\text{g,TB09}}=\hbar\omega_{\text{exp}}/E_{\text{g,exp}}\). In the calculations, the wavelengths \(\lambda_{\omega,\text{TB09}}\)=2237 nm and \(\lambda_{3\omega,\text{TB09}}\)=745.66 nm are therefore used instead of the experimental values 2000 nm and 666.67 nm, respectively. The time axis in the calculation results is rescaled back to obtain harmonic spectra with frequencies corresponding to the experimental data. ### Calculation of harmonic spectra Harmonic spectra are calculated using Larmor's formula and the Fourier transform of the total current obtained from the TDDFT simulations. The Fourier transform is applied to a temporal window that decays as \(\sin^4\) at the end of the pulse to avoid generating spurious oscillations in the spectra, a consequence of the absence of dephasing in the TDDFT calculations (similarly to Ref. [31]). ### Calculation of the excited electron density The number of electrons excited from the valence band to the conduction bands is computed using the method described in Ref. [46]. Note that transient values of the excited electron density are subject to gauge dependence [47]. Therefore, the reported values of the number of excited electrons are captured after the laser pulse, i.e., in the absence of fields, a situation in which the gauge is not problematic [47, 48].
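As a minimal sketch of the spectrum computation described above (ours, with a toy current in place of the TDDFT output): Larmor's formula makes the emitted power spectrum proportional to \(|\omega J(\omega)|^{2}\), and a \(\sin^4\) decay window at the end of the time trace suppresses the ringing that would otherwise appear because the calculated current does not decay by itself.

```python
import numpy as np

# Emission spectrum from a time-dependent current J(t): Larmor's formula gives
# S(omega) ~ |dJ/dt|^2 in frequency space, i.e. S(omega) ~ omega^2 |J(omega)|^2.
def hhg_spectrum(t, J, decay_fraction=0.2):
    n = len(t)
    window = np.ones(n)
    m = int(decay_fraction * n)                  # samples in the decaying tail
    window[-m:] = np.sin(np.pi / 2 * np.arange(m, 0, -1) / m) ** 4
    Jw = np.fft.rfft(J * window)
    w = 2 * np.pi * np.fft.rfftfreq(n, d=t[1] - t[0])
    return w, (w ** 2) * np.abs(Jw) ** 2         # power spectrum (arb. units)

# Toy current carrying the 5th and 7th harmonics of a 2000 nm drive.
c = 2.998e8
w0 = 2 * np.pi * c / 2000e-9
t = np.linspace(0, 100e-15, 2 ** 14)
J = np.sin(5 * w0 * t) + 0.3 * np.sin(7 * w0 * t)
w, S = hhg_spectrum(t, J)
for n_h in (5, 7):
    print(f"H{n_h}: power {S[np.argmin(np.abs(w - n_h * w0))]:.2e}")
```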
2301.11377
The momentum operator on a union of intervals and the Fuglede conjecture
The purpose of the present paper is to place a number of geometric (and hands-on) configurations relating to spectrum and geometry inside a general framework for the {\it Fuglede conjecture}. Note that in its general form, the Fuglede conjecture concerns general Borel sets $\Omega$ in a fixed number of dimensions $d$ such that $\Omega$ has finite positive Lebesgue measure. The conjecture proposes a correspondence between two properties for $\Omega$, one takes the form of spectrum, while the other refers to a translation-tiling property. We focus here on the case of dimension one, and the connections between the Fuglede conjecture and properties of the self-adjoint extensions of the momentum operator $\frac{1}{2\pi i}\frac{d}{dx}$, realized in $L^2$ of a union of intervals.
Dorin Ervin Dutkay, Palle E. T. Jorgensen
2023-01-26T19:44:31Z
http://arxiv.org/abs/2301.11377v2
# The momentum operator on a union of intervals and the Fuglede conjecture ###### Abstract. The purpose of the present paper is to place a number of geometric (and hands-on) configurations relating to spectrum and geometry inside a general framework for the _Fuglede conjecture_. Note that in its general form, the Fuglede conjecture concerns general Borel sets \(\Omega\) in a fixed number of dimensions \(d\) such that \(\Omega\) has finite positive Lebesgue measure. The conjecture proposes a correspondence between two properties for \(\Omega\), one takes the form of spectrum, while the other refers to a translation-tiling property. We focus here on the case of dimension one, and the connections between the Fuglede conjecture and properties of the self-adjoint extensions of the momentum operator \(\frac{1}{2\pi i}\frac{d}{dx}\), realized in \(L^{2}\) of a union of intervals. Key words and phrases:momentum operator, self-adjoint extension, Fourier bases, Fuglede conjecture 2010 Mathematics Subject Classification: 47E05,42A16 \({}^{*}\)Corresponding author ###### Contents * 1 Introduction * 2 Notations and preliminaries * 3 Symmetric and self-adjoint extensions * 4 Spectral decomposition * 5 The unitary group * 6 Spectral sets ## 1. Introduction For bounded open domains \(\Omega\) in \(\mathbb{R}^{d}\), the Fuglede problem deals with two properties that \(\Omega\) may or may not have: one (called spectral) is relative to the Hilbert space \(L^{2}(\Omega)\), namely the question of whether \(L^{2}(\Omega)\) has an orthogonal \(d\)-variable Fourier basis; the other is geometric (tiling), namely whether \(\Omega\) tiles \(\mathbb{R}^{d}\) by some set of translation vectors. The original problem asked whether the two properties are equivalent. In this paper we show how tools from operator theory (especially choices of spectral representations for unbounded operators) serve to link the two sides of the problem, spectrum vs geometry. Stated this way, the Fuglede conjecture is now known to be false; more precisely, the two properties are not equivalent in dimension \(3\) and higher [14, 15, 16, 17]. Nonetheless, the Fuglede problem remains open even for \(d=1\). Parallel to this, we note that there are many closely related new research directions, including analysis on fractals, which deal with various notions of interplay between spectral theoretic properties on the one hand, and geometry on the other, e.g., direct problems and inverse problems. We further stress that the original formulation was stated in terms of properties for the set of \(d\) partial derivative operators for the coordinate directions in \(\Omega\), specifically the possible extensions of partial derivative operators in the form of commuting generators for unitary one-parameter groups acting in \(L^{2}(\Omega)\). Such extensions are known to necessarily be local translation generators. Moreover, following quantum theory, such generators may be viewed as momentum operators, a viewpoint motivated by the canonical duality from quantum mechanics for momentum and position observables. This formulation in turn makes a direct connection to scattering theoretic properties, again related to the Hilbert space \(L^{2}(\Omega)\). In this form, the problem is of interest even for \(d=1\), and hence for the case when \(\Omega\) is a union of intervals. 
Continuing earlier work (e.g., [1]), we aim here at presenting new results for the \(d=1\) Fuglede problem, making the presentation as self-contained as possible for readers who might not be experts in this field. Starting with its classical roots, the Fuglede problem addresses two related properties for bounded domains \(\Omega\) in \(\mathbb{R}^{d}\). More precisely, Fuglede's question asks for a specific linking between multivariable spectra on one side and the geometry of \(\Omega\) on the other (spectral vs tiling). But by now, the Fuglede problem/conjecture has become distinctly interdisciplinary. It has come to encompass a diverse variety of neighboring fields of mathematics, each of which in turn lies at the crossroads of at least the following six separate disciplines: (i) harmonic analysis, (ii) spectral and scattering theory for operators in Hilbert space, (iii) metric/convex geometry, (iv) fractals, (v) operator algebras, and (vi) representation theory. For the benefit of readers, we include citations to the following list of papers, each dealing with one or the other of the above six areas: [14, 15, 16, 17, 18]. The paper is organized as follows: in Section 2, we introduce some definitions related to unbounded symmetric operators and their extensions and the associated one-parameter unitary group, and we recall Fuglede's result and conjecture, which serve as the main motivation for our paper. In the subsequent sections we focus on the case when the set \(\Omega\) is a finite union of intervals in dimension \(d=1\). In Section 3, we study the symmetric and the self-adjoint extensions \(A\) of the momentum operator \(\mathsf{D}=\frac{1}{2\pi i}\frac{d}{dx}\) on the space \(C^{\infty}_{c}(\Omega)\) of infinitely differentiable functions with compact support in \(\Omega\). In Section 4, we describe the spectral decomposition of such self-adjoint extensions. Section 5 is devoted to the one-parameter unitary group \(U(t)=\exp(2\pi itA)\), \(t\in\mathbb{R}\), which acts as translations inside the intervals of \(\Omega\) and has a different behavior at the end-points. In Section 6, we make the connections between the existence of orthogonal Fourier bases on \(\Omega\) and the properties of the self-adjoint extensions \(A\) or of the unitary group \(U(t)\). ## 2. Notations and preliminaries When \(d=1\), with a choice of an open set \(\Omega\), the corresponding connected components will be intervals, and so \(\Omega\) takes the form of a finite union of intervals as follows. **Definition 2.1**.: Let \[\Omega=\bigcup_{i=1}^{n}(\alpha_{i},\beta_{i}),\text{ where }-\infty<\alpha_{1}<\beta_{1}<\alpha_{2}<\beta_{2}<\cdots<\alpha_{n}<\beta_{n}<\infty.\] So \[\Omega=\bigcup_{i=1}^{n}J_{i},\text{ where }J_{i}=(\alpha_{i},\beta_{i})\text{ for all }i\in\{1,\ldots,n\}.\] On \(\Omega\) we consider the Lebesgue measure \(dx\). We denote by \(\partial\Omega\) the boundary of \(\Omega\), \[\partial\Omega=\{\alpha_{i},\beta_{i}:i\in\{1,\ldots,n\}\}.\] For a function \(f\) on \(\partial\Omega\) we use the notation \[\int_{\partial\Omega}f=\sum_{i=1}^{n}(f(\beta_{i})-f(\alpha_{i})).\] Consider the subspace of infinitely differentiable compactly supported functions on \(\Omega\), \(C_{c}^{\infty}(\Omega)\). 
We define the _differential/momentum operator_ \(\mathsf{D}\) on \(C_{c}^{\infty}(\Omega)\): \[\mathsf{D}f=\frac{1}{2\pi i}f^{\prime},\quad(f\in C_{c}^{\infty}(\Omega)).\] Define also the subspaces \[\mathscr{D}_{0}(\Omega)=\{f:\Omega\to\mathbb{C}:f\text{ is absolutely continuous on each }J_{i},\\ f(\alpha_{i}+)=f(\beta_{i}-)=0\text{ for all }i\text{ and }f^{\prime}\in L^{2}(\Omega)\}\,, \tag{2.1}\] \[\mathscr{D}_{\max}=\left\{f:\Omega\to\mathbb{C}:f\text{ is absolutely continuous on each interval }J_{i}\text{ and }f^{\prime}\in L^{2}(\Omega)\right\}. \tag{2.2}\] **Remark 2.2**.: If a function \(f\) is absolutely continuous on each interval \(J_{i}\) and \(f^{\prime}\in L^{2}(\Omega)\), then the values of the function \(f\) at the endpoints \(\alpha_{i}\) and \(\beta_{i}\) are well defined. Indeed, fix \(i\in\{1,\ldots,n\}\) and a point \(x_{0}\in(\alpha_{i},\beta_{i})\). Then, since \(f\) is absolutely continuous with \(f^{\prime}\in L^{2}(\Omega)\subset L^{1}(\Omega)\), one has \[f(x)=f(x_{0})+\int_{x_{0}}^{x}f^{\prime}(t)\,dt,\text{ for all }x\in(\alpha_{i},\beta_{i}),\] and therefore \[f(\alpha_{i}+)=f(x_{0})-\int_{\alpha_{i}}^{x_{0}}f^{\prime}(t)\,dt\text{ and }f(\beta_{i}-)=f(x_{0})+\int_{x_{0}}^{\beta_{i}}f^{\prime}(t)\,dt.\] This means that we can define \(f(\alpha_{i}):=f(\alpha_{i}+)\), and similarly \(f(\beta_{i})\), by continuity. For \(f\in\mathscr{D}_{\max}\), we write \(f(\vec{\alpha})=(f(\alpha_{1}),\ldots,f(\alpha_{n}))\), and similarly for \(f(\vec{\beta})\). We look for closed symmetric and for self-adjoint extensions of the operator \(\mathsf{D}\) on \(C_{c}^{\infty}(\Omega)\). **Definition 2.3**.: We recall some notions about unbounded linear operators, see for example [1, Chapter X]. Let \(H\) be a Hilbert space. * We denote by \(\mathscr{B}(H)\) the set of bounded linear operators on \(H\). * We denote the domain of an unbounded operator \(T\) by \(\mathscr{D}(T)\). * An operator \(T_{2}\) is an _extension_ of \(T_{1}\) if \(\mathscr{D}(T_{2})\) contains \(\mathscr{D}(T_{1})\) and \(T_{2}f=T_{1}f\), for all \(f\in\mathscr{D}(T_{1})\). We write \(T_{1}\subseteq T_{2}\). * An operator \(T\) is _closed_ if its graph is closed, i.e., if \(\{f_{n}\}\) in \(\mathscr{D}(T)\) converges to \(f\) and \(\{Tf_{n}\}\) converges to \(g\), then \(f\in\mathscr{D}(T)\) and \(Tf=g\). * For a densely defined unbounded operator \(T:H_{1}\to H_{2}\), the _adjoint_ operator \(T^{*}:H_{2}\to H_{1}\) is defined on the set of vectors \(g\in H_{2}\) with the property that the linear functional \(\mathscr{D}(T)\ni f\mapsto\langle Tf\,,\,g\rangle\) is bounded. In this case, by Riesz's lemma, there exists a unique element \(T^{*}g\in H_{1}\) such that \(\langle Tf\,,\,g\rangle=\langle f\,,\,T^{*}g\rangle\) for all \(f\in\mathscr{D}(T)\). * A densely defined operator is called _symmetric_ if \[\langle Tf\,,\,g\rangle=\langle f\,,\,Tg\rangle\,,\text{ for all }f,g\in\mathscr{D}(T).\] * An operator is called _self-adjoint_ if \(\mathscr{D}(T^{*})=\mathscr{D}(T)\) and \(T^{*}f=Tf\) for all \(f\in\mathscr{D}(T)\). * An operator \(N\) is called _normal_ if it is closed, densely defined and \(N^{*}N=NN^{*}\). **Definition 2.4**.: Recall that a (possibly unbounded) linear operator \(T:\mathcal{H}\to\mathcal{K}\) is called _boundedly invertible_ if there is a bounded operator \(S:\mathcal{K}\to\mathcal{H}\) such that \(TS=I\) and \(ST\subseteq I\). 
The _resolvent set_ \(\rho(T)\) for the operator \(T\) is defined by \[\rho(T)=\{\lambda\in\mathbb{C}:\lambda I-T\text{ is boundedly invertible}\}.\] The _spectrum_ of \(T\) is defined as \(\sigma(T)=\mathbb{C}\setminus\rho(T)\). **Definition 2.5**.: If \(X\) is a set, \(\mathcal{B}\) is a \(\sigma\)-algebra of subsets of \(X\), and \(H\) is a Hilbert space, a _spectral measure/spectral resolution_ for \((X,\mathcal{B},H)\) is a function \(E:\mathcal{B}\to\mathscr{B}(H)\) such that: * for each \(\Delta\) in \(\mathcal{B}\), \(E(\Delta)\) is a projection; * \(E(\emptyset)=0\) and \(E(X)=I_{H}\); * \(E(\Delta_{1}\cap\Delta_{2})=E(\Delta_{1})E(\Delta_{2})\) for \(\Delta_{1}\) and \(\Delta_{2}\) in \(\mathcal{B}\); * if \(\{\Delta_{n}\}_{n=1}^{\infty}\) are pairwise disjoint sets from \(\mathcal{B}\), then \[E\left(\bigcup_{n=1}^{\infty}\Delta_{n}\right)=\sum_{n=1}^{\infty}E(\Delta_{n}).\] **Theorem 2.6**.: **Spectral theorem for unbounded self-adjoint operators.** _If \(A\) is a self-adjoint operator on \(H\), then there exists a unique spectral measure \(E\) defined on the Borel subsets of \(\mathbb{R}\) such that_ * \(A=\int x\,dE(x)\)_;_ * \(E(\Delta)=0\) _if_ \(\Delta\cap\sigma(A)=\emptyset\)_;_ * _if_ \(U\) _is an open subset of_ \(\mathbb{R}\) _and_ \(U\cap\sigma(A)\neq\emptyset\)_, then_ \(E(U)\neq 0\)_;_ * _if_ \(B\in\mathscr{B}(H)\) _such that_ \(BA\subseteq AB\)_, then_ \(B(\int\phi\,dE)\subseteq(\int\phi\,dE)B\) _for every Borel function_ \(\phi\) _on_ \(\mathbb{R}\)_._ **Remark 2.7**.: Note that, starting with a fixed self-adjoint operator \(A\) in a Hilbert space \(H\), the corresponding projection valued measure \(E\) induces a spectral representation for \(A\) via a system of scalar measures indexed by the vectors in \(H\). These are defined for \(h\in H\) by \[E_{h,h}(\Delta)=\langle E(\Delta)h\,,\,h\rangle\,,\text{ for all Borel sets }\Delta. \tag{2.3}\] Let \(A\) be a self-adjoint operator in a Hilbert space \(H\) with spectral resolution \(E\); see Definition 2.5. Then the following three conclusions follow immediately: 1. If \(\varphi:\mathbb{R}\to\mathbb{C}\) is measurable, then the operator (2.4) \[\varphi(A):=\int_{\mathbb{R}}\varphi(x)\,dE(x)\] is well defined in \(H\) and normal. 2. The dense domain of \(\varphi(A)\), denoted \(\mathscr{D}(\varphi(A))\), consists of all vectors \(h\in H\) such that (2.5) \[\int_{\mathbb{R}}|\varphi(x)|^{2}\,dE_{h,h}<\infty.\] 3. For all \(h\in\mathscr{D}(\varphi(A))\), (2.6) \[\|\varphi(A)h\|^{2}=\int_{\mathbb{R}}|\varphi(x)|^{2}\,dE_{h,h}.\] **Definition 2.8**.: A _strongly continuous one parameter unitary group_ is a function \(U:\mathbb{R}\to\mathscr{B}(H)\) such that for all \(s\) and \(t\) in \(\mathbb{R}\): 1. \(U(t)\) is a unitary operator; 2. \(U(s+t)=U(s)U(t)\); 3. if \(h\in H\) and \(t_{0}\in\mathbb{R}\), then \(U(t)h\to U(t_{0})h\) as \(t\to t_{0}\). **Theorem 2.9**.: _Let \(A\) be a self-adjoint operator on \(H\) and let \(E\) on \((X,\mathcal{B},H)\) be its spectral measure. Define_ \[U(t)=\exp(2\pi itA)=\int e^{2\pi itx}\,dE(x),\quad(t\in\mathbb{R}).\] _Then_ 1. \((U(t))_{t\in\mathbb{R}}\) _is a strongly continuous one parameter group;_ 2. _if_ \(h\in\mathscr{D}(A)\)_, then_ \[\lim_{t\to 0}\frac{1}{t}(U(t)h-h)=2\pi iAh;\] 3. _if_ \(h\in H\) _and_ \(\lim_{t\to 0}\frac{1}{t}(U(t)h-h)\) _exists, then_ \(h\in\mathscr{D}(A)\)_. 
Consequently,_ \(\mathscr{D}(A)\) _is invariant under_ \(U(t)\)_._ **Theorem 2.10**.: **Stone's Theorem.** _If \(U\) is a strongly continuous one parameter unitary group, then there exists a self-adjoint operator \(A\) such that \(U(t)=\exp(2\pi itA)\), \(t\in\mathbb{R}\), and conversely. The self-adjoint operator \(A\) is called the infinitesimal generator for \(U\)._ We will also need the following well-known lemma about multiplication operators. **Lemma 2.11**.: _Let \((X,\mu)\) be a \(\sigma\)-finite measure space and let \(\phi:X\to\mathbb{C}\) be a measurable function. Let \(\mathscr{D}=\{f\in L^{2}(\mu):\phi f\in L^{2}(\mu)\}\) and define \(Af=\phi f\) for all \(f\in\mathscr{D}\). Then \(A\) is a closed operator, \(\mathscr{D}(A^{*})=\mathscr{D}\), and \(A^{*}f=\overline{\phi}f\) for \(f\in\mathscr{D}\). In particular, if \(\phi\) is real-valued, then \(A\) is self-adjoint._ Proof.: First, the domain \(\mathscr{D}\) is dense, because one can consider functions of the form \[\chi_{\{x\in M:-n\leq\operatorname{Re}(\phi(x))\leq n,-m\leq\operatorname{Im}(\phi(x))\leq m\}},\quad m,n\in\mathbb{N},M\subseteq X\text{ measurable},\] which are in the domain \(\mathscr{D}\) and span a dense subspace of \(L^{2}(\mu)\). Thus the operator \(A\) is densely defined. Consider now \(g\in\mathscr{D}(A^{*})\). Then the map \(\mathscr{D}\ni f\mapsto\langle Af\,,\,g\rangle=:\varphi_{g}(f)\) is a bounded linear functional, i.e., \[\left|\int\phi f\overline{g}\,d\mu\right|\leq C\|f\|,\quad(f\in\mathscr{D}).\] This implies that \(\varphi_{g}\) can be extended continuously to the whole space \(L^{2}(\mu)\) and therefore there exists an element \(A^{*}g\in L^{2}(\mu)\) such that \(\varphi_{g}(f)=\langle f\,,\,A^{*}g\rangle\) for all \(f\in\mathscr{D}\). Then \[\int f\overline{\overline{\phi}g}\,d\mu=\int f\,\overline{A^{*}g}\,d\mu,\quad(f\in\mathscr{D}).\] But this means that \(A^{*}g=\overline{\phi}g\) a.e. Conversely, if \(g\in\mathscr{D}\), we want to see that \(g\in\mathscr{D}(A^{*})\). But, if \(\phi g\in L^{2}(\mu)\), then \(\overline{\phi}g\in L^{2}(\mu)\), so \(g\in\mathscr{D}(A^{*})\). **Corollary 2.12**.: _Let \(\Lambda\) be a nonempty subset of \(\mathbb{R}\). The multiplication operator_ \[M_{I}(a_{\lambda})_{\lambda\in\Lambda}=(\lambda a_{\lambda})_{\lambda\in\Lambda},\quad\mathscr{D}(M_{I})=\left\{(a_{\lambda})_{\lambda\in\Lambda}\in l^{2}(\Lambda):\sum_{\lambda\in\Lambda}|\lambda|^{2}|a_{\lambda}|^{2}<\infty\right\},\] _is self-adjoint._ Proof.: Using Lemma 2.11 for the space \(\Lambda\) with the discrete measure and \(\phi(\lambda)=\lambda\in\mathbb{R}\), we get that the operator \(M_{I}\) is self-adjoint. Next, we recall some of the main ideas in Fuglede's paper [10]. **Definition 2.13**.: For an open set \(\Omega\) in \(\mathbb{R}^{d}\), consider the partial differential operators \(\frac{1}{2\pi i}\frac{\partial}{\partial x_{j}}\), \(j=1,\ldots,d\), defined on the space \(C_{c}^{\infty}(\Omega)\) of infinitely differentiable functions with compact support contained in \(\Omega\). These operators are symmetric (by integration by parts). * We say that \(\Omega\) has the _extension property_ if there are commuting self-adjoint extension operators \(H_{j}\), i.e., \[\frac{1}{2\pi i}\frac{\partial}{\partial x_{j}}\subseteq H_{j},\quad j=1,\ldots,d.\] * _Commutativity_ for the extension operators \(H_{j}\) is in the strong sense of spectral resolutions. More precisely, all projections associated to the spectral resolutions of the operators \(H_{j}\) must commute. 
* A set \(\Omega\) of finite Lebesgue measure is called _spectral_ if there exists a set of frequencies \(\Lambda\subset\mathbb{R}^{d}\), such that the family of exponential functions \(\{e_{\lambda}(x)=e^{2\pi i\lambda\cdot x}:\lambda\in\Lambda\}\) forms an orthogonal basis for \(L^{2}(\Omega)\). The set \(\Lambda\) is called a _spectrum_ for the set \(\Omega\). * A set \(\Omega\) of finite Lebesgue measure is said to _tile \(\mathbb{R}^{d}\) by translations_ if there exists a set of vectors \(\Gamma\subset\mathbb{R}^{d}\) such that the translates \(\{\Omega+\gamma:\gamma\in\Gamma\}\) cover \(\mathbb{R}^{d}\) up to measure zero, and if the intersections \((\Omega+\gamma)\cap(\Omega+\gamma^{\prime})\) have measure zero for \(\gamma\neq\gamma^{\prime}\) in \(\Gamma\). **Remark 2.14**.: In general, when \(\Omega\) is given, the individual symmetric operators will have self-adjoint extensions, but the added condition that there is a choice of \(d\)_mutually commuting_ self-adjoint extensions is a strong restriction. For example, if \(d=2\), and if \(\Omega\) is a triangle or a disk, then there will not be commuting self-adjoint extensions (see [10]). This point is clarified in the next theorem: **Theorem 2.15**.: _[10, 11, 12, 13, 14] Let \(\Omega\subset\mathbb{R}^{d}\) be open and connected, with finite and positive Lebesgue measure. Then \(\Omega\) has the extension property if and only if it is a spectral set. Moreover, with \(\Omega\) given, there is a one-to-one correspondence between the two sets of subsets:_ \[\{\Lambda\subset\mathbb{R}^{d}:\Lambda\text{ is a spectrum for }\Omega\} \tag{2.7}\] _and_ \[\left\{\Lambda\subset\mathbb{R}^{d}:\Lambda\text{ is the joint spectrum of some commutative family}\right.\] \[\left.\left(H_{1},\ldots,H_{d}\right)\text{ of self-adjoint extensions}\right\}. \tag{2.8}\] _This correspondence is determined as follows:_ * _If the extensions_ \((H_{1},\ldots,H_{d})\) _are given, then_ \(\lambda\in\Lambda\) _if and only if_ (2.9) \[e_{\lambda}\in\bigcap_{j}\mathscr{D}(H_{j}).\] * _If, conversely,_ \(\Lambda\) _is a spectrum for_ \(\Omega\) _at the outset, then the ansatz (_2.9_) and_ (2.10) \[H_{j}e_{\lambda}=\lambda_{j}e_{\lambda},\quad\lambda\in\Lambda\] _determine uniquely a set of commuting extensions._ _If \(\Omega\) is only assumed open, then the spectral-set property implies the extension property, but not conversely._ **Conjecture 2.16**.: **The Fuglede Conjecture.** [12] _A set \(\Omega\) of finite Lebesgue measure is spectral if and only if it tiles \(\mathbb{R}^{d}\) by translations._ **Remark 2.17**.: We note the following conclusions from Theorem 2.15: The link between the theorem and Conjecture 2.16 is as follows: when commuting self-adjoint extensions exist, then automatically the joint spectrum is purely discrete, and the corresponding eigenspaces will be one-dimensional. And they are necessarily orthogonal in \(L^{2}(\Omega)\) and spanned by Fourier frequencies. The link to geometry is on account of the fact that, when commuting self-adjoint extensions exist for a given \(\Omega\), then they generate a unitary representation \(U\) of \(\mathbb{R}^{d}\), with \(U\) acting on \(L^{2}(\Omega)\). But locally (i.e., in the interior of \(\Omega\)), \(U\) will then act by translations. ## 3. Symmetric and self-adjoint extensions In this section we investigate the symmetric and the self-adjoint extensions of the momentum operator \(\mathsf{D}\) on \(C_{c}^{\infty}(\Omega)\), where \(\Omega\) is a union of intervals as in Definition 2.1.
**Theorem 3.1**.: _The operator \(\mathsf{D}\) is symmetric. The adjoint \(\mathsf{D}^{*}\) has domain \(\mathscr{D}_{\max}\) as in (2.2) and_ \[\mathsf{D}^{*}f=\frac{1}{2\pi i}f^{\prime},\text{ for }f\in\mathscr{D}(\mathsf{D}^{*})=\mathscr{D}_{\max}. \tag{3.1}\] _The operator \(\mathsf{D}\) on \(C_{c}^{\infty}(\Omega)\) has a closed extension to \(\mathscr{D}_{0}(\Omega)\) (see (2.1)), and, for \(f\) in \(\mathscr{D}_{0}(\Omega)\), we also have \(\mathsf{D}f=\frac{1}{2\pi i}f^{\prime}\). The adjoint of the operator \(\mathsf{D}|_{\mathscr{D}_{0}(\Omega)}\) is the same as the one described above. The adjoint of \(\mathsf{D}^{*}\) is_ \[\left(\mathsf{D}|_{C_{c}^{\infty}(\Omega)}\right)^{**}=\mathsf{D}|_{\mathscr{D}_{0}(\Omega)}=\mathsf{D}^{*}|_{\mathscr{D}_{0}(\Omega)}. \tag{3.2}\] Proof.: Let \(g\in\mathscr{D}(\mathsf{D}^{*})\). By definition, this means that, for all \(f\in C_{c}^{\infty}(\Omega)\), \(\langle\mathsf{D}f\,,\,g\rangle=\langle f\,,\,\mathsf{D}^{*}g\rangle\), which means that \[\frac{1}{2\pi i}\int_{\Omega}f^{\prime}(x)\overline{g}(x)\,dx=\int_{\Omega}f(x)\overline{\mathsf{D}^{*}g}(x)\,dx.\] Define the function \(\varphi(x):=\int_{\alpha_{i}}^{x}\mathsf{D}^{*}g(t)\,dt\), for all \(x\in J_{i}\), \(i\in\{1,\ldots,n\}\). Then \(\varphi\) is absolutely continuous and \(\varphi^{\prime}(x)=\mathsf{D}^{*}g(x)\) for almost every \(x\in\Omega\). Then, using integration by parts, and the fact that \(f|_{\partial\Omega}=0\), we have: \[\frac{1}{2\pi i}\int_{\Omega}f^{\prime}(x)\overline{g}(x)\,dx=\int_{\Omega}f(x)\overline{\varphi^{\prime}(x)}\,dx=\int_{\partial\Omega}f\overline{\varphi}-\int_{\Omega}f^{\prime}(x)\overline{\varphi(x)}\,dx=-\int_{\Omega}f^{\prime}(x)\overline{\varphi(x)}\,dx.\] Then \[\int_{\Omega}f^{\prime}(x)\overline{\left(\frac{1}{2\pi i}g(x)-\varphi(x)\right)}\,dx=0,\text{ for all }f\in C_{c}^{\infty}(\Omega).\] This means that the function \(\varphi-\frac{1}{2\pi i}g\) is orthogonal to the range of the operator \(\mathsf{D}\). Next, we compute the orthogonal complement of the range of the operator \(\mathsf{D}\). Note that, if \(f\in C_{c}^{\infty}(\Omega)\), then \(f|_{\partial\Omega}=0\), and \(f(x)=\int_{\alpha_{i}}^{x}f^{\prime}(t)\,dt\) for all \(x\in J_{i}\), so \(\int_{\alpha_{i}}^{\beta_{i}}f^{\prime}(t)\,dt=f(\beta_{i})=0\). This implies that, for every function \(h\in L^{2}(\Omega)\) which is constant on each interval \(J_{i}\), we have \(\int_{\Omega}\mathsf{D}f(x)\overline{h}(x)\,dx=0\), so \(h\) is orthogonal to the range of \(\mathsf{D}\). Conversely, let \(h\in L^{2}(\Omega)\) be orthogonal to the range of \(\mathsf{D}\). Fix \(i\in\{1,\ldots,n\}\). We will show that \(h\) has to be constant on \(J_{i}\). First, we show that the range of \(\mathsf{D}\) contains the functions \(f\in C_{c}^{\infty}(J_{i})\) with \(\int_{\alpha_{i}}^{\beta_{i}}f(t)\,dt=0\). Let \(f\) be such a function and let \(\psi(x)=2\pi i\int_{\alpha_{i}}^{x}f(t)\,dt\), for \(x\in J_{i}\), and \(\psi(x)=0\) otherwise. Then \(\psi\in C_{c}^{\infty}(\Omega)\) and \(\mathsf{D}\psi=f\), thus \(f\) is in the range of \(\mathsf{D}\). Take now a function \(f\in L^{2}(\Omega)\), which is zero outside \(J_{i}\) and with \(\int_{\alpha_{i}}^{\beta_{i}}f(t)\,dt=0\). One can approximate \(f\) in \(L^{2}\) by a sequence of functions \(f_{n}\) in \(C_{c}^{\infty}(J_{i})\) with \(\int_{\alpha_{i}}^{\beta_{i}}f_{n}(t)\,dt=0\), and therefore the functions \(f_{n}\) are in the range of the operator \(\mathsf{D}\). It follows that \(h\) is orthogonal to the functions \(f_{n}\), hence to \(f\).
Now take a function \(\tilde{f}\in L^{2}(\Omega)\) which is zero outside \(J_{i}\). Define \(f(x)=\tilde{f}(x)-\frac{1}{\beta_{i}-\alpha_{i}}\int_{\alpha_{i}}^{\beta_{i}}\tilde{f}(t)\,dt\), for \(x\in J_{i}\), and \(f(x)=0\) outside \(J_{i}\). Then \(\int_{\alpha_{i}}^{\beta_{i}}f(t)\,dt=0\). Then \(\int_{\Omega}h(x)\overline{f}(x)\,dx=0\), which means that \[\int_{\Omega}\left(h(x)-\frac{1}{\beta_{i}-\alpha_{i}}\int_{\alpha_{i}}^{\beta_{i}}h(t)\,dt\right)\overline{\tilde{f}(x)}\,dx=\int_{\alpha_{i}}^{\beta_{i}}h(x)\overline{\tilde{f}(x)}\,dx-\frac{1}{\beta_{i}-\alpha_{i}}\int_{\alpha_{i}}^{\beta_{i}}h(x)\,dx\cdot\int_{\alpha_{i}}^{\beta_{i}}\overline{\tilde{f}(x)}\,dx\] \[=\int_{\Omega}h(x)\cdot\overline{\left(\tilde{f}(x)-\frac{1}{\beta_{i}-\alpha_{i}}\int_{\alpha_{i}}^{\beta_{i}}\tilde{f}(t)\,dt\right)}\,dx=\int_{\Omega}h(x)\overline{f(x)}\,dx=0.\] Since \(\tilde{f}\) is arbitrary, it follows that the function \(h(x)-\frac{1}{\beta_{i}-\alpha_{i}}\int_{\alpha_{i}}^{\beta_{i}}h(t)\,dt\) is zero a.e. on each interval \(J_{i}\), which means that \(h\) is constant a.e. on each interval \(J_{i}\). Returning to the computation of the domain of \(\mathsf{D}^{*}\), we obtain that \(\varphi-\frac{1}{2\pi i}g\) is constant on each interval \(J_{i}\). But then, since \(\mathsf{D}^{*}g\) is in \(L^{2}(\Omega)\subset L^{1}(\Omega)\), it follows that \(\varphi(x)=\int_{\alpha_{i}}^{x}\mathsf{D}^{*}g(t)\,dt\) is absolutely continuous on each interval \(J_{i}\), so \(g\) is as well, and \(\frac{1}{2\pi i}g^{\prime}=\varphi^{\prime}=\mathsf{D}^{*}g\) a.e. on \(\Omega\). Conversely, if \(g\) is absolutely continuous on each interval \(J_{i}\), and \(g^{\prime}\in L^{2}(\Omega)\), then, using integration by parts as above, we have \(\left\langle\mathsf{D}f\,,\,g\right\rangle=\left\langle f\,,\,\frac{1}{2\pi i}g^{\prime}\right\rangle\), and therefore \(g\in\mathscr{D}(\mathsf{D}^{*})\) and \(\mathsf{D}^{*}g=\frac{1}{2\pi i}g^{\prime}\). Next, we prove that the operator \(\mathsf{D}\) on \(\mathscr{D}_{0}(\Omega)\) is closed. Take a sequence \(\{f_{n}\}\) in \(\mathscr{D}_{0}(\Omega)\) which converges in \(L^{2}(\Omega)\) to some function \(f\), and such that the sequence \(\{\mathsf{D}f_{n}\}\) converges in \(L^{2}(\Omega)\) to some function, which we write as \(\frac{1}{2\pi i}g\). We want to prove that \(f\) is in \(\mathscr{D}_{0}(\Omega)\) and \(f^{\prime}=g\). Define \(\varphi(x)=\int_{\alpha_{i}}^{x}g(t)\,dt=\left\langle g\,,\,\chi_{(\alpha_{i},x)}\right\rangle\), for all \(x\in J_{i}\). Then, for \(x\in J_{i}\), \(\varphi(x)\) is the limit of \(\left\langle f_{n}^{\prime}\,,\,\chi_{(\alpha_{i},x)}\right\rangle=\int_{\alpha_{i}}^{x}f_{n}^{\prime}(t)\,dt=f_{n}(x)-f_{n}(\alpha_{i})=f_{n}(x)\). Since \(\{f_{n}\}\) converges to \(f\) in \(L^{2}(\Omega)\), we obtain that \(f=\varphi\) a.e. This implies that \(f\) is absolutely continuous, and \(f^{\prime}=\varphi^{\prime}=g\) a.e. Since \(\varphi(\alpha_{i})=0\), it follows that \(f(\alpha_{i})=0\). Also, \[f(\beta_{i})=\varphi(\beta_{i})=\int_{\alpha_{i}}^{\beta_{i}}f^{\prime}(t)\,dt=\lim_{n}\int_{\alpha_{i}}^{\beta_{i}}f^{\prime}_{n}(t)\,dt=\lim_{n}(f_{n}(\beta_{i})-f_{n}(\alpha_{i}))=0.\] To prove that the adjoint of \(\mathsf{D}|_{\mathscr{D}_{0}(\Omega)}\) is as before, the same arguments can be used. Since \(A=\mathsf{D}|_{\mathscr{D}_{0}(\Omega)}\) is closed, \(A^{**}=A\), see [12, Corollary 1.8, page 305].
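For a concrete illustration of the gap between \(\mathscr{D}_{0}(\Omega)\) and \(\mathscr{D}_{\max}\) (we record it here only for orientation; it is not used below), take \(f=\chi_{\Omega}\), the function constant equal to \(1\) on \(\Omega\). Then \(f\) is absolutely continuous on each interval \(J_{i}\), with \(f^{\prime}=0\in L^{2}(\Omega)\), so \(f\in\mathscr{D}_{\max}\) and \(\mathsf{D}^{*}f=0\); on the other hand, \(f(\alpha_{i})=f(\beta_{i})=1\neq 0\), so \(f\notin\mathscr{D}_{0}(\Omega)\). Thus \(\mathsf{D}^{*}\) is a proper extension of \(\mathsf{D}|_{\mathscr{D}_{0}(\Omega)}\), and the latter operator is not self-adjoint; the self-adjoint extensions sit strictly between the two, and they are classified next.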
**Theorem 3.2**.: _If \(T\) is a closed symmetric extension of \(\mathsf{D}|_{C^{\infty}_{c}(\Omega)}\), then there exists a partial isometry \(B\) between subspaces \(B_{l}\) and \(B_{r}\) of \(\mathbb{C}^{n}\) such that_ \[\mathscr{D}(T)=\left\{f\in\mathscr{D}_{\max}:f(\vec{\alpha})\in B_{l},Bf(\vec{\alpha})=f(\vec{\beta})\right\},\] _and \(Tf=\mathsf{D}f\) for \(f\in\mathscr{D}(T)\). Conversely, if the unbounded operator \(T\) is defined as such, then it is a closed symmetric extension of \(\mathsf{D}|_{C^{\infty}_{c}(\Omega)}\)._ _The adjoint \(T^{*}\) has domain_ \[\mathscr{D}(T^{*})=\left\{f\in\mathscr{D}_{\max}:(B^{*}f(\vec{\beta})-f(\vec{\alpha}))\perp B_{l}\right\},\] _and \(T^{*}f=\mathsf{D}f\), for \(f\in\mathscr{D}(T^{*})\)._ _The operator \(T\) is a self-adjoint extension if and only if \(B\) is unitary, i.e., \(B_{l}=B_{r}=\mathbb{C}^{n}\)._ Proof.: Let \(T\) be a closed symmetric extension of \(\mathsf{D}\). Then, for all \(f\in C^{\infty}_{c}(\Omega)\) and \(g\in\mathscr{D}(T)\), we have \[\left\langle f\,,\,Tg\right\rangle=\left\langle Tf\,,\,g\right\rangle=\left\langle\mathsf{D}f\,,\,g\right\rangle.\] By Theorem 3.1, this implies that \(g\in\mathscr{D}_{\max}\) and \(Tg=\mathsf{D}^{*}g=\mathsf{D}g\). Thus \(\mathscr{D}(T)\subseteq\mathscr{D}_{\max}\) and \(Tg=\mathsf{D}g\), for all \(g\in\mathscr{D}(T)\); in other words, \(\mathsf{D}\) on \(\mathscr{D}_{\max}\) is an extension of \(T\). Using integration by parts we have, for all \(f,g\in\mathscr{D}_{\max}\): \[\left\langle f\,,\,\mathsf{D}g\right\rangle=\int_{\Omega}f\overline{\frac{1}{2\pi i}g^{\prime}}=-\frac{1}{2\pi i}\int_{\Omega}f\overline{g^{\prime}}=-\frac{1}{2\pi i}\left(-\int_{\Omega}f^{\prime}\overline{g}+\sum_{i=1}^{n}\left(f(\beta_{i})\overline{g}(\beta_{i})-f(\alpha_{i})\overline{g}(\alpha_{i})\right)\right)\] (see the notation in Remark 2.2). Thus, we have \[\left\langle f\,,\,\mathsf{D}g\right\rangle=\left\langle\mathsf{D}f\,,\,g\right\rangle-\frac{1}{2\pi i}\left(\left\langle f(\vec{\beta})\,,\,g(\vec{\beta})\right\rangle-\left\langle f(\vec{\alpha})\,,\,g(\vec{\alpha})\right\rangle\right),\text{ for all }f,g\in\mathscr{D}_{\max}. \tag{3.3}\] Now, if \(T\) is a closed symmetric extension of \(\mathsf{D}\) on \(C^{\infty}_{c}(\Omega)\), then, for \(f,g\in\mathscr{D}(T)\), we have \(f,g\in\mathscr{D}_{\max}\) and \[0=\left\langle f\,,\,Tg\right\rangle-\left\langle Tf\,,\,g\right\rangle=\left\langle f\,,\,\mathsf{D}g\right\rangle-\left\langle\mathsf{D}f\,,\,g\right\rangle=-\frac{1}{2\pi i}\left(\left\langle f(\vec{\beta})\,,\,g(\vec{\beta})\right\rangle-\left\langle f(\vec{\alpha})\,,\,g(\vec{\alpha})\right\rangle\right),\] therefore \[\left\langle f(\vec{\alpha})\,,\,g(\vec{\alpha})\right\rangle=\left\langle f(\vec{\beta})\,,\,g(\vec{\beta})\right\rangle,\text{ for all }f,g\in\mathscr{D}(T). \tag{3.4}\] Taking \(f=g\) in (3.4), we get that \(\|f(\vec{\alpha})\|^{2}=\|f(\vec{\beta})\|^{2}\) for all \(f\in\mathscr{D}(T)\); in particular, if \(f(\vec{\alpha})=0\), then \(f(\vec{\beta})=0\). This implies that, if \(f_{1},f_{2}\in\mathscr{D}(T)\) and \(f_{1}(\vec{\alpha})=f_{2}(\vec{\alpha})\), then \((f_{1}-f_{2})(\vec{\alpha})=0\), so \((f_{1}-f_{2})(\vec{\beta})=0\), and \(f_{1}(\vec{\beta})=f_{2}(\vec{\beta})\).
This means that there exists a well-defined function \(B\), \[B(f(\vec{\alpha}))=f(\vec{\beta}),\text{ for all }f\in\mathscr{D}(T),\quad B:B_{l}\to B_{r},\] where \[B_{l}:=\{f(\vec{\alpha}):f\in\mathscr{D}(T)\},\quad B_{r}:=\{f(\vec{\beta}):f\in\mathscr{D}(T)\}.\] In addition, \(B\) is a linear isometry between the subspaces \(B_{l}\) and \(B_{r}\) of \(\mathbb{C}^{n}\), and, by definition, \(f(\vec{\beta})=Bf(\vec{\alpha})\), for \(f\in\mathscr{D}(T)\). We can define \(B\) to be zero on the orthogonal complement of \(B_{l}\). Because of this, we obtain that \(\mathscr{D}(T)\) is contained in \[\mathscr{D}_{B}:=\left\{f\in\mathscr{D}_{\max}:f(\vec{\alpha})\in B_{l},f(\vec{\beta})\in B_{r},Bf(\vec{\alpha})=f(\vec{\beta})\right\}.\] We prove that the reverse inclusion also holds. Let \(f\in\mathscr{D}_{B}\). Then \(f(\vec{\alpha})\in B_{l}\), so there exists \(f_{0}\in\mathscr{D}(T)\) such that \(f_{0}(\vec{\alpha})=f(\vec{\alpha})\). Then also \(f_{0}(\vec{\beta})=Bf_{0}(\vec{\alpha})=Bf(\vec{\alpha})=f(\vec{\beta})\). Hence \((f-f_{0})(\vec{\alpha})=(f-f_{0})(\vec{\beta})=0\), and so \(f-f_{0}\in\mathscr{D}_{0}(\Omega)\subseteq\mathscr{D}(T)\). Then \(f=(f-f_{0})+f_{0}\in\mathscr{D}(T)\). Assume now, conversely, that we are given a partial isometry \(B\) from \(B_{l}\) to \(B_{r}\), and we prove that \(\mathsf{D}\) on \(\mathscr{D}_{B}\) is symmetric and closed. For symmetry, we use (3.3); for \(f,g\in\mathscr{D}_{B}\), we have \[\left\langle f\,,\,\mathsf{D}g\right\rangle=\left\langle\mathsf{D}f\,,\,g\right\rangle-\frac{1}{2\pi i}\left(\left\langle f(\vec{\beta})\,,\,g(\vec{\beta})\right\rangle-\left\langle f(\vec{\alpha})\,,\,g(\vec{\alpha})\right\rangle\right)\] \[=\left\langle\mathsf{D}f\,,\,g\right\rangle-\frac{1}{2\pi i}\left(\left\langle Bf(\vec{\alpha})\,,\,Bg(\vec{\alpha})\right\rangle-\left\langle f(\vec{\alpha})\,,\,g(\vec{\alpha})\right\rangle\right)=\left\langle\mathsf{D}f\,,\,g\right\rangle.\] To see that the operator is closed, take \(\{f_{n}\}\) in \(\mathscr{D}_{B}\) convergent to \(f\) in \(L^{2}(\Omega)\), and \(\{f^{\prime}_{n}\}\) convergent to \(g\) in \(L^{2}(\Omega)\). For \(i\in\{1,\ldots,n\}\) and \(x\in J_{i}\), we have \[f_{n}(x)=f_{n}(\alpha_{i})+\int_{\alpha_{i}}^{x}f^{\prime}_{n}(t)\,dt.\] On the left hand side, \(\{f_{n}\}\) converges to \(f\) in \(L^{2}(\Omega)\); on the right hand side, \(\int_{\alpha_{i}}^{x}f^{\prime}_{n}(t)\,dt\) converges to \(\int_{\alpha_{i}}^{x}g(t)\,dt\) for all \(x\in J_{i}\). Therefore we obtain that \(\{f_{n}(\alpha_{i})\}\) converges to \(c_{i}=f(x)-\int_{\alpha_{i}}^{x}g(t)\,dt\), for a.e. \(x\). But then \(f(x)=c_{i}+\int_{\alpha_{i}}^{x}g(t)\,dt\) for a.e. \(x\), and therefore \(f^{\prime}=g\) a.e. Moreover, the boundary values converge, \(f_{n}(\vec{\alpha})\to f(\vec{\alpha})\) and \(f_{n}(\vec{\beta})\to f(\vec{\beta})\), so \(f(\vec{\alpha})\in B_{l}\) and \(Bf(\vec{\alpha})=f(\vec{\beta})\), i.e., \(f\in\mathscr{D}_{B}\), and the operator is closed. Consider now the extension \(T=\mathsf{D}\) on \(\mathscr{D}_{B}\). We will compute its adjoint. For \(g\in\mathscr{D}(T^{*})\), we have that \(\mathscr{D}_{B}\ni f\mapsto\left\langle\mathsf{D}f\,,\,g\right\rangle\) is bounded. In particular, it is bounded on \(\mathscr{D}_{0}(\Omega)\), so \(g\in\mathscr{D}_{\max}\), \(T^{*}g=\mathsf{D}g\), and \(\left\langle\mathsf{D}f\,,\,g\right\rangle=\left\langle f\,,\,T^{*}g\right\rangle=\left\langle f\,,\,\mathsf{D}g\right\rangle\).
Then, with integration by parts (3.3), we have that \[\left\langle f(\vec{\alpha})\,,\,B^{*}g(\vec{\beta})\right\rangle=\left\langle Bf(\vec{\alpha})\,,\,g(\vec{\beta})\right\rangle=\left\langle f(\vec{\beta})\,,\,g(\vec{\beta})\right\rangle=\left\langle f(\vec{\alpha})\,,\,g(\vec{\alpha})\right\rangle,\text{ for all }f\in\mathscr{D}_{B}.\] This implies that \(\left(B^{*}g(\vec{\beta})-g(\vec{\alpha})\right)\) is orthogonal to the subspace \(B_{l}\). Conversely, if \(g\in\mathscr{D}_{\max}\) and \(\left(B^{*}g(\vec{\beta})-g(\vec{\alpha})\right)\perp B_{l}\), then from the previous computation, and using integration by parts (3.3), we get that, for \(f\in\mathscr{D}_{B}\), \[|\left\langle\mathsf{D}f\,,\,g\right\rangle|=|\left\langle f\,,\,\mathsf{D}g\right\rangle|\leq\|f\|\|\mathsf{D}g\|,\] and so the linear map \(\mathscr{D}_{B}\ni f\mapsto\left\langle\mathsf{D}f\,,\,g\right\rangle\) is bounded and \(g\in\mathscr{D}(T^{*})\). Now, let us consider the case when the extension \(T\) is self-adjoint. In this case \(\mathscr{D}(T^{*})\) is contained in \(\mathscr{D}(T)\), and therefore, if \(g\in\mathscr{D}_{\max}\) with \(B^{*}g(\vec{\beta})-g(\vec{\alpha})\) orthogonal to \(B_{l}\), then we have that \(Bg(\vec{\alpha})=g(\vec{\beta})\). We will prove that \(B_{l}\) must be the entire space \(\mathbb{C}^{n}\). Suppose there exists a non-zero vector \(v\) orthogonal to \(B_{l}\). Then, since \(B\) is a partial isometry, there exists a non-zero vector \(w\) orthogonal to \(B_{r}\). Let \(g\in\mathscr{D}_{\max}\) be such that \(g(\vec{\alpha})=0\) and \(g(\vec{\beta})=w\) (for example, make \(g\) piecewise linear on the intervals \(J_{i}\)). Then \(B^{*}g(\vec{\beta})-g(\vec{\alpha})=B^{*}w-0=0\perp B_{l}\) (note that \(B^{*}w=0\), since \(w\perp B_{r}\)). Thus \(g\in\mathscr{D}(T^{*})=\mathscr{D}(T)\), and therefore \(Bg(\vec{\alpha})=g(\vec{\beta})\), which implies that \(B(0)=w\), a contradiction. Thus, when the extension is self-adjoint, we get that \(B_{r}=\mathbb{C}^{n}\) and \(B_{l}=\mathbb{C}^{n}\), and \(B\) is unitary. For the converse, if \(B\) is unitary and \(B_{l}=\mathbb{C}^{n}\), \(B_{r}=\mathbb{C}^{n}\), then \(g\in\mathscr{D}(T^{*})\) if and only if \(B^{*}g(\vec{\beta})-g(\vec{\alpha})\) is orthogonal to \(B_{l}\), so it must be zero, i.e., \(B^{*}g(\vec{\beta})=g(\vec{\alpha})\), which is equivalent to \(Bg(\vec{\alpha})=g(\vec{\beta})\). This means that \(\mathscr{D}(T^{*})=\mathscr{D}(T)=\mathscr{D}_{B}\). **Definition 3.3**.: For a self-adjoint extension \(A\) of the operator \(\mathsf{D}|_{C^{\infty}_{c}(\Omega)}\) as in Theorem 3.2, we call the unitary matrix \(B\) the _boundary matrix associated to \(A\)_. ## 4. Spectral decomposition Having a self-adjoint extension \(A\) of the momentum operator \(\mathsf{D}\), we can use the Spectral Theorem 2.6 to obtain a spectral resolution of the self-adjoint operator \(A\). We present in this section an explicit description of this spectral resolution. **Definition 4.1**.: For \(\vec{z}=(z_{1},z_{2},\ldots,z_{n})\in\mathbb{C}^{n}\) denote by \(E(\vec{z})\) the \(n\times n\) diagonal matrix with entries \((e^{2\pi iz_{1}},e^{2\pi iz_{2}},\ldots,e^{2\pi iz_{n}})\). **Theorem 4.2**.: _Let \(A\) be a self-adjoint extension of the operator \(\mathsf{D}|_{C^{\infty}_{c}(\Omega)}\) and let \(B\) be its unitary boundary matrix._
_Let \(P\) be the spectral measure for the operator \(A\), so_ \[A=\int_{\mathbb{R}}t\,dP(t).\] _Then the spectral measure is atomic, supported on the spectrum_ \[\sigma(A)=\left\{\lambda\in\mathbb{C}:\det(I-E(\lambda\vec{\beta})^{-1}BE(\lambda\vec{\alpha}))=0\right\}\subseteq\mathbb{R},\] _which is a discrete unbounded set. For \(\lambda\in\sigma(A)\), the eigenspace \(P(\{\lambda\})L^{2}(\Omega)\) has dimension at most \(n\), and it consists of functions of the form_ \[f(x)=e^{2\pi i\lambda x}\sum_{i=1}^{n}c_{i}\chi_{J_{i}}(x),\text{ where }c=(c_{i})_{i=1}^{n}\in\mathbb{C}^{n}\text{, and }BE(\lambda\vec{\alpha})c=E(\lambda\vec{\beta})c.\] Proof.: We begin with a proposition. **Proposition 4.3**.: _Let \(A\) be a self-adjoint extension as in Theorem 4.2. Let \(\lambda\in\mathbb{C}\). The following statements are equivalent:_ (i) \(\lambda\) _is in the resolvent set of_ \(A\)_._ (ii) _The operator_ \(A-\lambda I\) _is onto._ (iii) _The matrix_ \(E(\lambda\vec{\beta})^{-1}BE(\lambda\vec{\alpha})-I\) _is onto._ (iv) _The operator_ \(A-\lambda I\) _is one-to-one._ (v) _The matrix_ \(E(\lambda\vec{\beta})^{-1}BE(\lambda\vec{\alpha})-I\) _is one-to-one._ Proof.: We prove that (ii) and (iii) are equivalent. The operator \(A-\lambda I\) is onto means that, for every \(g\in L^{2}(\Omega)\), there exists \(f\in\mathscr{D}_{B}=\mathscr{D}(A)\) such that \[\frac{1}{2\pi i}f^{\prime}-\lambda f=g.\] We solve this first order linear differential equation on each interval \(J_{i}\) of \(\Omega\). We have \(f^{\prime}-2\pi i\lambda f=2\pi ig\). Multiplying by the integrating factor \(e^{-2\pi i\lambda t}\), we get \[\left(e^{-2\pi i\lambda t}f(t)\right)^{\prime}=2\pi ig(t)e^{-2\pi i\lambda t}.\] Integrating, we get the general solution \[f(x)=e^{2\pi i\lambda x}\left(2\pi i\int_{\alpha_{i}}^{x}g(t)e^{-2\pi i\lambda t}\,dt+c_{i}\right),\] for some constant \(c_{i}\), for all \(x\in J_{i}\), and all \(i\in\{1,\ldots,n\}\). Since \(g\) is in \(L^{2}(\Omega)\), we see that \(f\) is absolutely continuous and \(f^{\prime}=2\pi i(\lambda f+g)\) is in \(L^{2}(\Omega)\), which means that \(f\) is in the domain \(\mathscr{D}_{\max}\), and the only thing that we have to ensure is that \(Bf(\vec{\alpha})=f(\vec{\beta})\), by picking the right constants \((c_{i})\). Let us see what the condition \(Bf(\vec{\alpha})=f(\vec{\beta})\) means. We have \[f(\alpha_{i})=c_{i}e^{2\pi i\lambda\alpha_{i}},\quad f(\beta_{i})=e^{2\pi i\lambda\beta_{i}}(A_{i}+c_{i}),\] where \(A_{i}=2\pi i\int_{\alpha_{i}}^{\beta_{i}}g(t)e^{-2\pi i\lambda t}\,dt\). Let \(\vec{A}=(A_{i})_{i=1}^{n}\), \(\vec{c}=(c_{i})_{i=1}^{n}\). Note that, by varying \(g\), any vector in \(\mathbb{C}^{n}\) can be obtained as \(\vec{A}\). Then, the condition \(f(\vec{\beta})=Bf(\vec{\alpha})\) is equivalent to \[E(\lambda\vec{\beta})(\vec{A}+\vec{c})=BE(\lambda\vec{\alpha})\vec{c},\] or, equivalently, \[\vec{A}=(E(\lambda\vec{\beta})^{-1}BE(\lambda\vec{\alpha})-I)\vec{c}.\] This shows that the operator \(A-\lambda I\) is onto if and only if the matrix \(E(\lambda\vec{\beta})^{-1}BE(\lambda\vec{\alpha})-I\) is onto. Next, we prove that (iv) and (v) are equivalent. The operator \(A-\lambda I\) fails to be one-to-one precisely when there exists a non-zero \(f\in\mathscr{D}_{B}\) with \(f(\vec{\beta})=Bf(\vec{\alpha})\), such that \(\frac{1}{2\pi i}f^{\prime}=\lambda f\). Solving this differential equation on each interval \(J_{i}\), we obtain that \(f(x)=c_{i}e^{2\pi i\lambda x}\), for \(x\in J_{i}\), for some constant \(c_{i}\).
Then, the relation \(f(\vec{\beta})=Bf(\vec{\alpha})\) implies that \((c_{i}e^{2\pi i\lambda\beta_{i}})_{i=1}^{n}=B(c_{i}e^{2\pi i\lambda\alpha_{i}})_{i=1}^{n}\); with \(\vec{c}:=(c_{i})_{i=1}^{n}\), this can be rewritten as \((E(\lambda\vec{\beta})^{-1}BE(\lambda\vec{\alpha})-I)\vec{c}=0\). Thus the operator \(A-\lambda I\) is not one-to-one if and only if the matrix \(E(\lambda\vec{\beta})^{-1}BE(\lambda\vec{\alpha})-I\) is not one-to-one. Finally, the statements (iii) and (v) are equivalent because they refer to a square matrix in a finite dimensional space. Now, since \(A\) is self-adjoint, so also closed, \(\lambda\in\rho(A)\) if and only if \(A-\lambda I\) is both one-to-one and onto; but these two properties are equivalent, so (i) is equivalent to all the other statements. Returning to the proof of Theorem 4.2, we see that \(\lambda\) is in the spectrum of \(A\) if and only if \(\det(I-E(\lambda\vec{\beta})^{-1}BE(\lambda\vec{\alpha}))=0\). This is an entire function of \(\lambda\), and it is not identically zero, because every non-real \(\lambda\) lies in the resolvent set of the self-adjoint operator \(A\); therefore its zero set is discrete and at most countable. From the proof of Proposition 4.3, we see that the eigenspace \(P(\{\lambda\})L^{2}(\Omega)\) is as in the statement of the theorem, and hence has dimension at most \(n\). Since, by the Spectral Theorem, the orthogonal sum of the eigenspaces spans the entire Hilbert space \(L^{2}(\Omega)\), it follows that \(\sigma(A)\) cannot be finite, and since it is discrete, it has to be unbounded. **Theorem 4.4**.: _Let \(A\) be as in Theorem 4.2 and let \(\{\lambda_{n}:n\in\mathbb{Z}\}\) be a list of the eigenvalues of \(A\) repeated according to multiplicity. Let \(\{\epsilon_{n}:n\in\mathbb{Z}\}\) be an orthonormal basis of eigenvectors for \(A\), \(A\epsilon_{n}=\lambda_{n}\epsilon_{n}\), for all \(n\in\mathbb{Z}\). Then_ \[\mathscr{D}(A)=\left\{f=\sum_{n\in\mathbb{Z}}f_{n}\epsilon_{n}\in L^{2}(\Omega):\sum_{n\in\mathbb{Z}}|\lambda_{n}|^{2}|f_{n}|^{2}<\infty\right\}. \tag{4.1}\] Proof.: Let \(g=\sum_{n\in\mathbb{Z}}a_{n}\epsilon_{n}\in L^{2}(\Omega)\) with \(\sum_{n\in\mathbb{Z}}|\lambda_{n}|^{2}|a_{n}|^{2}<\infty\). Then, for \(f\in\mathscr{D}(A)\), we have \[\left\langle\mathsf{D}f\,,\,g\right\rangle=\sum_{n}\overline{a}_{n}\left\langle\mathsf{D}f\,,\,\epsilon_{n}\right\rangle=\sum_{n}\overline{a}_{n}\left\langle f\,,\,\mathsf{D}^{*}\epsilon_{n}\right\rangle=\sum_{n}\overline{a}_{n}\lambda_{n}\left\langle f\,,\,\epsilon_{n}\right\rangle=\left\langle f\,,\,\sum_{n}\lambda_{n}a_{n}\epsilon_{n}\right\rangle.\] This implies that \(g\in\mathscr{D}(A^{*})=\mathscr{D}(A)\) and \(Ag=A^{*}g=\sum_{n}\lambda_{n}a_{n}\epsilon_{n}\). Thus \(\mathscr{D}_{I}:=\{\sum_{n}a_{n}\epsilon_{n}:\sum_{n}|\lambda_{n}|^{2}|a_{n}|^{2}<\infty\}\) is contained in \(\mathscr{D}(A)\). But, by Corollary 2.12, the diagonal operator \(M_{I}(\sum_{n}a_{n}\epsilon_{n})=\sum_{n}\lambda_{n}a_{n}\epsilon_{n}\) defined on \(\mathscr{D}_{I}\) is self-adjoint. Since self-adjoint operators are maximally symmetric, it follows that \(\mathscr{D}_{I}=\mathscr{D}(A)\). ## 5. The unitary group If we have a self-adjoint extension \(A\) of the momentum operator \(\mathsf{D}\), we can associate to it a one-parameter unitary group \(U(t)=\exp(2\pi itA)\), \(t\in\mathbb{R}\), as in Theorem 2.9. In this section we present some basic properties of this unitary group and show that it acts as translations inside the intervals and that it splits points at the endpoints, with probabilities given by the boundary matrix \(B\).
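Before stating the result on the unitary group, it may help to record the simplest case \(n=1\) (a standard computation, included here only as an illustration). Take \(\Omega=J_{1}=(0,1)\), so that the boundary matrix is a \(1\times 1\) unitary, \(B=e^{2\pi i\theta}\) for some \(\theta\in[0,1)\). The determinant condition of Theorem 4.2 becomes \[\det\left(I-E(\lambda\vec{\beta})^{-1}BE(\lambda\vec{\alpha})\right)=1-e^{2\pi i(\theta-\lambda)}=0,\] so \(\sigma(A)=\theta+\mathbb{Z}\), with one-dimensional eigenspaces spanned by \(e_{\lambda}(x)=e^{2\pi i\lambda x}\), \(\lambda\in\theta+\mathbb{Z}\). For \(0<t<1\), the associated unitary group acts by \[(U(t)f)(x)=\begin{cases}f(x+t),&x+t\in(0,1),\\ e^{2\pi i\theta}f(x+t-1),&x+t\in(1,2),\end{cases}\] for a.e. \(x\in(0,1)\): translation inside the interval, with the phase \(e^{2\pi i\theta}\) picked up each time the motion wraps around the endpoint. This is the one-interval instance of the behavior described in Theorem 5.1 below.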
**Theorem 5.1**.: _Let \(A=\mathsf{D}\) on \(\mathscr{D}_{B}\) be a self-adjoint extension with boundary matrix \(B\). Let_ \[U(t)=\exp{(2\pi itA)},\quad(t\in\mathbb{R}),\] _be the associated one-parameter unitary group._ (i) _The domain_ \(\mathscr{D}_{B}\) _is invariant for_ \(U(t)\) _for all_ \(t\in\mathbb{R}\)_, i.e., if_ \(f\in\mathscr{D}_{\max}\) _with_ \(Bf(\vec{\alpha})=f(\vec{\beta})\)_, then_ \(U(t)f\in\mathscr{D}_{\max}\) _with_ \(B(U(t)f)(\vec{\alpha})=(U(t)f)(\vec{\beta})\)_._ (ii) _Fix_ \(i\in\{1,\ldots,n\}\) _and let_ \(t\in\mathbb{R}\) _be such that_ \(J_{i}\cap(J_{i}-t)\neq\emptyset\)_. Then, for_ \(f\in L^{2}(\Omega)\)_,_ (5.1) \[(U(t)f)(x)=f(x+t),\text{ for a.e. }x\in J_{i}\cap(J_{i}-t).\] _In particular,_ (5.2) \[(U(\beta_{i}-x)f)(x)=f(\beta_{i}),\,(U(\alpha_{i}-x)f)(x)=f(\alpha_{i}),\text{ for }f\in\mathscr{D}_{\max},x\in J_{i}.\] (iii) _For_ \(f\in\mathscr{D}_{\max}\)_, if_ \(x\in J_{i}\) _and_ \(t>\beta_{i}-x\)_, then_ (5.3) \[\left[U(t)f\right](x)=\pi_{i}\left(B\left[U(t-(\beta_{i}-x))f\right](\vec{\alpha})\right).\] _Here_ \(\pi_{i}:\mathbb{C}^{n}\to\mathbb{C}\) _denotes the projection onto the_ \(i\)_-th component,_ \(\pi_{i}(x_{1},x_{2},\ldots,x_{n})=x_{i}\)_._ Proof.: (i) This follows from more general rules, see Theorem 2.9(c), but we include a more direct proof. With the notation as in Theorem 4.4, \(f=\sum_{n}f_{n}\epsilon_{n}\) is in the domain of \(A\) if and only if \(\sum_{n}|\lambda_{n}|^{2}|f_{n}|^{2}<\infty\). Then, for \(t\in\mathbb{R}\), \(U(t)f=\sum_{n}f_{n}e^{2\pi i\lambda_{n}t}\epsilon_{n}\), and \(\sum_{n}|\lambda_{n}|^{2}|f_{n}e^{2\pi i\lambda_{n}t}|^{2}=\sum_{n}|\lambda_{n}|^{2}|f_{n}|^{2}<\infty\), which means that \(U(t)f\) is also in the domain of \(A\). (ii) Let \(v_{\lambda}\) be an eigenvector for \(A\) with eigenvalue \(\lambda\). Then, by Theorem 4.2, we have \[v_{\lambda}(x)=\sum_{k=1}^{n}c_{k}\chi_{J_{k}}(x)e^{2\pi i\lambda x},\quad(x\in\Omega),\] for some constants \(c_{k}\in\mathbb{C}\). Then, for \(x\in J_{i}\cap(J_{i}-t)\), we have \(x,x+t\in J_{i}\), and \[(U(t)v_{\lambda})(x)=e^{2\pi i\lambda t}v_{\lambda}(x)=\sum_{k=1}^{n}c_{k}\chi_{J_{k}}(x)e^{2\pi i\lambda(x+t)}=v_{\lambda}(x+t). \tag{5.4}\] Now let \(f\in L^{2}(\Omega)\). One can find a sequence \(\{f_{n}\}\) of finite linear combinations of eigenvectors such that \(\lim f_{n}=f\) in \(L^{2}(\Omega)\). Then \(\lim U(t)f_{n}=U(t)f\); passing to subsequences, we can assume in addition that \(\{f_{n}\}\) converges to \(f\) pointwise a.e. in \(\Omega\), and \(U(t)f_{n}\) converges to \(U(t)f\) pointwise a.e. in \(\Omega\). Then, for a.e. \(x\in J_{i}\cap(J_{i}-t)\), we have \[(U(t)f)(x)=\lim(U(t)f_{n})(x)=\lim f_{n}(x+t)=f(x+t).\] (Note that we used also the fact that translation by \(t\) preserves measure zero sets.) The first relation in (5.2) follows from (5.4), by taking \(t=\beta_{i}-x-\epsilon\) and letting \(\epsilon\to 0\). Similarly for the second relation. (iii) Indeed, we have, with (i) and (ii), \[\left[U(t)f\right](x)=\left[U(\beta_{i}-x)U(t-(\beta_{i}-x))f\right](x)=\left[U(t-(\beta_{i}-x))f\right](\beta_{i})\] \[\qquad=\pi_{i}\left(\left[U(t-(\beta_{i}-x))f\right](\vec{\beta})\right)=\pi_{i}\left(B\left[U(t-(\beta_{i}-x))f\right](\vec{\alpha})\right).\] ## 6. Spectral sets In this section we consider the case when \(\Omega\) is a spectral set. We present various characterizations of this property in terms of the self-adjoint extensions of the momentum operator \(\mathsf{D}\) and in terms of the associated unitary groups.
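The determinant condition of Theorem 4.2 is directly computable, which makes it easy to experiment numerically with the objects of this section. The following short script is only an illustrative sketch and is not part of the paper's argument: the interval endpoints and the boundary matrix \(B\) chosen in it are arbitrary, and it simply scans a window of the real line for approximate zeros of \(\lambda\mapsto\det(I-E(\lambda\vec{\beta})^{-1}BE(\lambda\vec{\alpha}))\), i.e., for eigenvalues of the corresponding self-adjoint extension.

```python
import numpy as np

# An arbitrary two-interval set Omega = (0, 1/2) U (3/4, 1); any finite
# union of bounded intervals can be treated the same way.
alpha = np.array([0.0, 0.75])   # left endpoints  alpha_i
beta  = np.array([0.5, 1.0])    # right endpoints beta_i

def E(z):
    # Diagonal matrix E(z) with entries exp(2*pi*i*z_k), as in Definition 4.1.
    return np.diag(np.exp(2j * np.pi * z))

# An arbitrary unitary boundary matrix B (a real rotation mixing the two
# intervals); by Theorem 3.2 it determines a self-adjoint extension of D.
s = 0.3
B = np.array([[np.cos(s), -np.sin(s)],
              [np.sin(s),  np.cos(s)]])

def char_det(lam):
    # det(I - E(lam*beta)^{-1} B E(lam*alpha)); its real zeros form sigma(A).
    # For real lam, the inverse of E(lam*beta) is E(-lam*beta).
    M = E(-lam * beta) @ B @ E(lam * alpha)
    return np.linalg.det(np.eye(len(alpha)) - M)

# Scan a window of the real line and report grid points where |det| has a
# numerically vanishing local minimum: approximate eigenvalues of A.
lams = np.linspace(-8.0, 8.0, 160001)
vals = np.abs(np.array([char_det(l) for l in lams]))
eigs = [lams[i] for i in range(1, len(lams) - 1)
        if vals[i] < 1e-3 and vals[i] <= vals[i - 1] and vals[i] <= vals[i + 1]]
print(np.round(eigs, 3))
```

By Theorem 4.2, the printed values approximate a discrete unbounded subset of \(\mathbb{R}\); whether the corresponding eigenfunctions are genuine exponentials \(e_{\lambda}\), rather than piecewise exponentials, is exactly the question of whether \(B\) is a spectral boundary matrix, in the sense of Definition 6.4 and Theorem 6.7 below.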
**Definition 6.1**.: Assume that \(\Omega\) is a spectral set with spectrum \(\Lambda\). Recall that \(e_{\lambda}\) denotes the exponential function \(e_{\lambda}(x)=e^{2\pi i\lambda x}\). In order to make the vectors \(e_{\lambda}\) in \(L^{2}(\Omega)\) of norm one, we renormalize the Lebesgue measure on \(\Omega\) by \(\frac{1}{|\Omega|}\,dx\) (or we can simply assume that \(\Omega\) has measure \(1\)). The _Fourier transform (associated to the spectrum \(\Lambda\))_ is the unitary operator \[\mathcal{F}_{\Lambda}:L^{2}(\Omega)\to l^{2}(\Lambda),\quad\mathcal{F}_{\Lambda}f=\left(\left\langle f\,,\,e_{\lambda}\right\rangle\right)_{\lambda\in\Lambda}. \tag{6.1}\] Define also the unbounded operator of _multiplication by the identity function_ on \(l^{2}(\Lambda)\): \[M_{I}(a_{\lambda})_{\lambda\in\Lambda}=(\lambda a_{\lambda})_{\lambda\in\Lambda},\quad\mathscr{D}(M_{I})=\left\{(a_{\lambda})_{\lambda\in\Lambda}\in l^{2}(\Lambda):\sum_{\lambda\in\Lambda}|\lambda|^{2}|a_{\lambda}|^{2}<\infty\right\}. \tag{6.2}\] Define the _unitary group associated to \(\Lambda\)_ on \(L^{2}(\Omega)\), by \[U_{\Lambda}(t)\left(\sum_{\lambda\in\Lambda}a_{\lambda}e_{\lambda}\right)=\sum_{\lambda\in\Lambda}e^{2\pi i\lambda t}a_{\lambda}e_{\lambda},\text{ for }\sum_{\lambda\in\Lambda}a_{\lambda}e_{\lambda}\in L^{2}(\Omega),t\in\mathbb{R}. \tag{6.3}\] **Definition 6.2**.: A _unitary group of local translations_ on \(\Omega\) is a strongly continuous one parameter unitary group \(U(t)\) on \(L^{2}(\Omega)\) with the property that, for any \(f\in L^{2}(\Omega)\) and any \(t\in\mathbb{R}\), \[(U(t)f)(x)=f(x+t)\text{ for a.e. }x\in\Omega\cap(\Omega-t). \tag{6.4}\] **Remark 6.3**.: Note the difference between Definition 6.2 and property (ii) in Theorem 5.1. Definition 6.2 is a stronger condition, because it requires \((U(t)f)(x)=f(x+t)\) also when \(x\) and \(x+t\) lie in different intervals of \(\Omega\), i.e., it allows jumps between the intervals \(J_{i}\). **Definition 6.4**.: A unitary boundary matrix \(B\) is called _spectral_ if, for every \(\lambda\in\mathbb{R}\), the equation \(BE(\lambda\vec{\alpha})c=E(\lambda\vec{\beta})c\), \(c\in\mathbb{C}^{n}\), has either only the trivial solution \(c=0\), or only constant solutions of the form \(c=\alpha(1,1,\dots,1)\), \(\alpha\in\mathbb{C}\). **Theorem 6.5**.: _Assume that \(\Omega\) is a spectral set with spectrum \(\Lambda\). Define the unbounded operator \(A\) on \(L^{2}(\Omega)\) by_ \[A\left(\sum_{\lambda\in\Lambda}f_{\lambda}e_{\lambda}\right)=\sum_{\lambda\in\Lambda}\lambda f_{\lambda}e_{\lambda},\quad\mathscr{D}(A)=\left\{f=\sum_{\lambda\in\Lambda}f_{\lambda}e_{\lambda}:\sum_{\lambda\in\Lambda}|\lambda|^{2}|f_{\lambda}|^{2}<\infty\right\}.\] _Then, the operator \(A\) is conjugate to the multiplication operator \(M_{I}\), by the Fourier transform, i.e.,_ \[A=\mathcal{F}_{\Lambda}^{-1}M_{I}\mathcal{F}_{\Lambda}.
\tag{6.5}\] _The domain \(\mathscr{D}(A)\) contains \(\mathscr{D}_{0}(\Omega)\) and all functions \(e_{\lambda}\), \(\lambda\in\Lambda\), and \(A\) is a self-adjoint extension of \(\mathsf{D}|_{C^{\infty}_{c}(\Omega)}\) with the property that all eigenvectors are constant multiples of exponential functions \(ce_{\lambda}\), \(c\in\mathbb{C}\), \(\lambda\in\Lambda\)._ _Conversely, if there exists a self-adjoint extension \(A\) of \(\mathsf{D}|_{C^{\infty}_{c}(\Omega)}\) with the property that all eigenvectors are constant multiples of exponential functions \(ce_{\lambda}\), \(c\in\mathbb{C}\), \(\lambda\in\mathbb{R}\), then \(\Omega\) is spectral, with spectrum_ \[\Lambda:=\{\lambda\in\mathbb{R}:e_{\lambda}\in\mathscr{D}(A)\}=\sigma(A). \tag{6.6}\] Proof.: Equation (6.5) follows from a direct computation. To see that \(A\) is self-adjoint, it is enough to check that \(M_{I}\) is self-adjoint, and this is a consequence of Corollary 2.12. Clearly, the exponential functions \(e_{\lambda}\) are in the domain \(\mathscr{D}(A)\). Let us check that \(\mathscr{D}_{0}(\Omega)\) is also contained in \(\mathscr{D}(A)\), and that \(Af=\mathsf{D}f\) for \(f\in\mathscr{D}_{0}(\Omega)\). Let \(f=\sum_{\lambda}f_{\lambda}e_{\lambda}\in\mathscr{D}_{0}(\Omega)\) and \(\lambda\in\Lambda(\subseteq\mathbb{R})\). Then, \[c_{\lambda}:=\left\langle\mathsf{D}f\,,\,e_{\lambda}\right\rangle=\left\langle f\,,\,\mathsf{D}^{*}e_{\lambda}\right\rangle=\left\langle f\,,\,\frac{1}{2\pi i}e_{\lambda}^{\prime}\right\rangle=\left\langle f\,,\,\lambda e_{\lambda}\right\rangle=\lambda f_{\lambda}.\] Therefore, \(\sum_{\lambda}|\lambda|^{2}|f_{\lambda}|^{2}=\sum_{\lambda}|c_{\lambda}|^{2}=\|\mathsf{D}f\|^{2}<\infty\), and \[\mathsf{D}f=\sum_{\lambda}\lambda f_{\lambda}e_{\lambda}=Af.\] This shows that \(A\) is indeed a self-adjoint extension of \(\mathsf{D}|_{C^{\infty}_{c}(\Omega)}\). Now we check that all eigenvectors of \(A\) are constant multiples of \(e_{\lambda}\). This follows from the next Lemma, and the fact that self-adjoint operators are closed. **Lemma 6.6**.: _Let \(A\) be a closed unbounded operator on a Hilbert space \(H\). Assume that there exists an orthonormal basis of \(H\), \(\{e_{\lambda}\}_{\lambda\in\Lambda}\), where \(\Lambda\subset\mathbb{C}\) and \(Ae_{\lambda}=\lambda e_{\lambda}\), for all \(\lambda\in\Lambda\). Then every eigenvector \(f=\sum_{\lambda}f_{\lambda}e_{\lambda}\) with the property that \(\sum_{\lambda}|\lambda|^{2}|f_{\lambda}|^{2}<\infty\) is of the form \(ce_{\lambda}\), for some \(c\in\mathbb{C}\), \(\lambda\in\Lambda\)._ Proof.: Suppose \(f=\sum_{\lambda}f_{\lambda}e_{\lambda}\) is an eigenvector for \(A\) with eigenvalue \(\lambda_{0}\). Then \[Af=\lambda_{0}f=\sum_{\lambda}\lambda_{0}f_{\lambda}e_{\lambda}.\] On the other hand, the finite partial sums \(\sum_{\text{finite}}f_{\lambda}e_{\lambda}\) converge in \(H\) to \(\sum_{\lambda}f_{\lambda}e_{\lambda}\); also \[A\left(\sum_{\text{finite}}f_{\lambda}e_{\lambda}\right)=\sum_{\text{finite}}\lambda f_{\lambda}e_{\lambda}\to\sum_{\lambda}\lambda f_{\lambda}e_{\lambda},\] because \(\sum_{\lambda}|\lambda|^{2}|f_{\lambda}|^{2}<\infty\). Since \(A\) is closed, we get \[Af=A\left(\sum_{\lambda}f_{\lambda}e_{\lambda}\right)=\sum_{\lambda}\lambda f_{\lambda}e_{\lambda}.\] This means that \(\lambda_{0}f_{\lambda}=\lambda f_{\lambda}\) for all \(\lambda\in\Lambda\), which implies that either \(\lambda_{0}=\lambda\) or \(f_{\lambda}=0\).
Thus all the coefficients \(f_{\lambda}\) are zero except for \(f_{\lambda_{0}}\), so \(f=f_{\lambda_{0}}e_{\lambda_{0}}\). For the converse in Theorem 6.5, assume \(A\) is a self-adjoint extension of \(\mathsf{D}|_{C^{\infty}_{c}(\Omega)}\) with the property that all eigenvectors are constant multiples of exponential functions. Then, with Theorem 4.2, for \(\lambda\) in the spectrum \(\sigma(A)=:\Lambda\), the subspace \(P(\{\lambda\})L^{2}(\Omega)\) is one-dimensional, spanned by \(e_{\lambda}\). Then the set of all eigenvectors for \(A\), \(\{e_{\lambda}:\lambda\in\Lambda\}\), is an orthonormal basis for \(L^{2}(\Omega)\). Moreover, if \(e_{\lambda}\in\mathscr{D}(A)\) for some \(\lambda\in\mathbb{R}\), then \(Ae_{\lambda}=\frac{1}{2\pi i}e_{\lambda}^{\prime}=\lambda e_{\lambda}\), so (6.6) holds. **Theorem 6.7**.: _Assume that \(\Omega\) is spectral with spectrum \(\Lambda\). Let \(A=A_{\Lambda}\) be the self-adjoint extension of \(\mathsf{D}|_{C^{\infty}_{c}(\Omega)}\) defined in Theorem 6.5, and let \(B=B_{\Lambda}\) be the unitary boundary matrix associated to this extension as in Theorem 3.2. Then \(B\) is a spectral boundary matrix and it is well defined and uniquely determined by the conditions_ \[Be_{\lambda}(\vec{\alpha})=e_{\lambda}(\vec{\beta}),\ \text{for all}\ \lambda\in\Lambda. \tag{6.7}\] _Moreover_ \[\operatorname{span}\{e_{\lambda}(\vec{\alpha}):\lambda\in\Lambda\}=\operatorname{span}\{e_{\lambda}(\vec{\beta}):\lambda\in\Lambda\}=\mathbb{C}^{n}. \tag{6.8}\] _Conversely, if there exists a spectral boundary matrix \(B\), then \(\Omega\) is spectral with spectrum_ \[\Lambda=\{\lambda\in\mathbb{R}:Be_{\lambda}(\vec{\alpha})=e_{\lambda}(\vec{\beta})\}. \tag{6.9}\] Proof.: By Theorem 6.5, the only eigenvectors of the operator \(A\) are multiples of \(e_{\lambda}\), \(\lambda\in\Lambda\). On the other hand, by Theorem 4.2, the eigenvectors are functions of the form \[f=\sum_{i=1}^{n}\left(c_{i}\chi_{J_{i}}\right)e_{\lambda},\] where \(c\in\mathbb{C}^{n}\), with \(BE(\lambda\vec{\alpha})c=E(\lambda\vec{\beta})c\). Thus, if \(BE(\lambda\vec{\alpha})c=E(\lambda\vec{\beta})c\) for some non-zero \(c\in\mathbb{C}^{n}\), then \(f\) is an eigenvector, but then it must be a constant multiple of \(e_{\lambda}\), so all the components of \(c\) are the same. This means that \(B\) is a spectral boundary matrix. We check now that the relation (6.7) completely determines \(B\) as a well-defined unitary matrix. Since \(\Lambda\) is a spectrum, for \(\lambda\neq\lambda^{\prime}\) in \(\Lambda\) we have \[0=\left\langle e_{\lambda}\,,\,e_{\lambda^{\prime}}\right\rangle=\sum_{k=1}^{n}\int_{\alpha_{k}}^{\beta_{k}}e^{2\pi i(\lambda-\lambda^{\prime})x}\,dx=\sum_{k=1}^{n}\frac{1}{2\pi i(\lambda-\lambda^{\prime})}\left(e^{2\pi i(\lambda-\lambda^{\prime})\beta_{k}}-e^{2\pi i(\lambda-\lambda^{\prime})\alpha_{k}}\right),\] which means that \[\left\langle e_{\lambda}(\vec{\alpha})\,,\,e_{\lambda^{\prime}}(\vec{\alpha})\right\rangle=\left\langle e_{\lambda}(\vec{\beta})\,,\,e_{\lambda^{\prime}}(\vec{\beta})\right\rangle,\text{ for all }\lambda,\lambda^{\prime}\in\Lambda. \tag{6.10}\] Define the linear operator \(B\) from \(\text{span}\{e_{\lambda}(\vec{\alpha}):\lambda\in\Lambda\}\) to \(\text{span}\{e_{\lambda}(\vec{\beta}):\lambda\in\Lambda\}\), by \[B\left(\sum_{\lambda\in\Lambda}a_{\lambda}e_{\lambda}(\vec{\alpha})\right)=\sum_{\lambda\in\Lambda}a_{\lambda}e_{\lambda}(\vec{\beta}),\] where only finitely many coefficients are non-zero.
The operator is well-defined, because, if \(\sum_{\lambda}a_{\lambda}e_{\lambda}(\vec{\alpha})=0\), then \[0=\left\langle\sum_{\lambda}a_{\lambda}e_{\lambda}(\vec{\alpha})\,,\,\sum_{\lambda}a_{\lambda}e_{\lambda}(\vec{\alpha})\right\rangle=\sum_{\lambda,\lambda^{\prime}}a_{\lambda}\overline{a}_{\lambda^{\prime}}\left\langle e_{\lambda}(\vec{\alpha})\,,\,e_{\lambda^{\prime}}(\vec{\alpha})\right\rangle\] \[=\sum_{\lambda,\lambda^{\prime}}a_{\lambda}\overline{a}_{\lambda^{\prime}}\left\langle e_{\lambda}(\vec{\beta})\,,\,e_{\lambda^{\prime}}(\vec{\beta})\right\rangle=\left\langle\sum_{\lambda}a_{\lambda}e_{\lambda}(\vec{\beta})\,,\,\sum_{\lambda}a_{\lambda}e_{\lambda}(\vec{\beta})\right\rangle,\] and therefore \(\sum_{\lambda}a_{\lambda}e_{\lambda}(\vec{\beta})=0\) as well. A similar argument shows that \(B\) is unitary. Next we prove that \(B_{l}:=\text{span}\{e_{\lambda}(\vec{\alpha}):\lambda\in\Lambda\}=\mathbb{C}^{n}\). Since \(B\) is unitary, the same will then be true for the span of the vectors \(e_{\lambda}(\vec{\beta})\). We proceed by contradiction: if \(B_{l}\) is not the entire space \(\mathbb{C}^{n}\), then it has a non-trivial orthogonal complement, and therefore we can extend the partial isometry \(B\) in two different ways to unitaries \(\tilde{B}\) and \(\tilde{B}^{\prime}\). Both of them give rise, by Theorem 3.2, to self-adjoint extensions of \(\mathsf{D}|_{C_{c}^{\infty}(\Omega)}\), and, since \(Be_{\lambda}(\vec{\alpha})=e_{\lambda}(\vec{\beta})\), we have \(e_{\lambda}\in\mathscr{D}_{\tilde{B}}\cap\mathscr{D}_{\tilde{B}^{\prime}}\). Then, with Lemma 6.6, all eigenvectors are multiples of \(e_{\lambda}\), \(\lambda\in\Lambda\), and with Theorem 4.4, we get that \[\mathscr{D}_{\tilde{B}}=\left\{\sum_{\lambda}f_{\lambda}e_{\lambda}:\sum_{\lambda}|\lambda|^{2}|f_{\lambda}|^{2}<\infty\right\}=\mathscr{D}_{\tilde{B}^{\prime}}.\] But this means that \(\tilde{B}=\tilde{B}^{\prime}\), a contradiction, and (6.8) follows. For the converse, if a spectral boundary matrix is given, then the self-adjoint extension associated to it has all eigenvectors of the form \(ce_{\lambda}\), with \(c\in\mathbb{C}\) and \(\lambda\in\Lambda\) as in (6.9), and they form an orthonormal basis. Thus \(\Omega\) is spectral. **Theorem 6.8**.: _Assume that \(\Omega\) is spectral with spectrum \(\Lambda\). Then the unitary group \(U=U_{\Lambda}\) associated to \(\Lambda\) is a unitary group of local translations. In addition, if \(A\) is the self-adjoint extension associated to \(\Lambda\) as in Theorem 6.5, then_ \[U(t)=\exp(2\pi itA),\quad(t\in\mathbb{R}). \tag{6.11}\] _Conversely, if there exists a unitary group of local translations \((U(t))_{t\in\mathbb{R}}\) for \(\Omega\), then \(\Omega\) is spectral._ Proof.: Note first that \(U(t)e_{\lambda}=e^{2\pi i\lambda t}e_{\lambda}\), for all \(\lambda\in\Lambda\) and \(t\in\mathbb{R}\). Then, for \(t\in\mathbb{R}\) and \(x\in\Omega\cap(\Omega-t)\), we have \[(U(t)e_{\lambda})(x)=e^{2\pi i\lambda t}e^{2\pi i\lambda x}=e^{2\pi i\lambda(x+t)}=e_{\lambda}(x+t).\] Now fix \(t\in\mathbb{R}\) and let \(f\in L^{2}(\Omega)\). We want to check (6.4). Since \(\{e_{\lambda}:\lambda\in\Lambda\}\) form an orthonormal basis for \(L^{2}(\Omega)\), we can find a sequence of functions \(\{f_{n}\}\) which are finite linear combinations of the functions \(e_{\lambda}\), such that \(\{f_{n}\}\) converges to \(f\) in \(L^{2}(\Omega)\). Passing to a subsequence, we can assume that \(\{f_{n}\}\) converges to \(f\) almost everywhere.
Since \(U(t)\) is unitary, \(\{U(t)f_{n}\}\) converges to \(U(t)f\) in \(L^{2}(\Omega)\), and again passing to a subsequence we can assume in addition that \(\{U(t)f_{n}\}\) converges to \(U(t)f\) almost everywhere. We have \((U(t)f_{n})(x)=f_{n}(x+t)\) for a.e. \(x\in\Omega\cap(\Omega-t)\), for all \(n\). Taking the limit, we obtain \((U(t)f)(x)=f(x+t)\) for a.e. \(x\in\Omega\cap(\Omega-t)\). This proves (6.4), so \(U\) is a unitary group of local translations. The relation (6.11) follows immediately, because we have the operators \(A\) and \(U(t)\) in diagonal form. Assume now that \(U(t)\) is a unitary group of local translations. By Stone's Theorem 2.10, there exists a self-adjoint operator \(A\) such that \(U(t)=\exp(2\pi itA)\). We claim that \(A\) is a self-adjoint extension of \(\mathsf{D}|_{C^{\infty}_{c}(\Omega)}\). We will prove that, for \(f\in C^{\infty}_{c}(\Omega)\), \[\frac{1}{2\pi it}(U(t)f-f)\text{ converges in }L^{2}(\Omega)\text{ to }\mathsf{D}f\text{ as }t\to 0, \tag{6.12}\] which, by Theorem 2.9(b) and (c), implies that \(f\) is in the domain of \(A\) and \(Af=\mathsf{D}f\). This is to be expected, since, especially for small values of \(t\), the operator \(U(t)\) acts as a translation. We need a Lemma. **Lemma 6.9**.: _Assume that the function \(f\in L^{2}(\Omega)\) is supported on the set \(\Omega_{\epsilon}=\cup_{i=1}^{n}[\alpha_{i}+\epsilon,\beta_{i}-\epsilon]\) for some \(\epsilon>0\), and that \(|t|<\epsilon\). Then_ \[(U(t)f)(x)=f(x+t)\text{ for a.e. }x\in\Omega, \tag{6.13}\] _where \(f(x):=0\) for \(x\) not in \(\Omega\)._ Proof.: If \(|t|<\epsilon\), then \(\Omega_{\epsilon}-t\subset(\Omega\cap(\Omega-t))\), and therefore \((U(t)f)(x)=f(x+t)\), for a.e. \(x\in\Omega_{\epsilon}-t\). We prove that \(g(x):=(U(t)f)(x)=0\) for a.e. \(x\in\Omega\setminus(\Omega_{\epsilon}-t)\). We have \[\|f\|_{L^{2}(\Omega)}^{2}=\|U(t)f\|_{L^{2}(\Omega)}^{2}=\int_{\Omega_{\epsilon}-t}|f(x+t)|^{2}\,dx+\int_{\Omega\setminus(\Omega_{\epsilon}-t)}|g(x)|^{2}\,dx\] \[=\int_{\Omega_{\epsilon}}|f(x)|^{2}\,dx+\int_{\Omega\setminus(\Omega_{\epsilon}-t)}|g(x)|^{2}\,dx=\|f\|_{L^{2}(\Omega)}^{2}+\|g\|_{L^{2}(\Omega\setminus(\Omega_{\epsilon}-t))}^{2}.\] This implies that \(g=0\) a.e. on \(\Omega\setminus(\Omega_{\epsilon}-t)\), and we obtain the Lemma. Take now \(f\in C^{\infty}_{c}(\Omega)\). This means that \(f\) is supported on a set \(\Omega_{\epsilon}\) for some \(\epsilon>0\). Using Lemma 6.9, for \(|t|<\epsilon\) and for a.e. \(x\in\Omega\), we have \[\frac{1}{2\pi it}((U(t)f)(x)-f(x))=\frac{1}{2\pi it}(f(x+t)-f(x)),\] which converges uniformly, hence in \(L^{2}(\Omega)\) since \(\Omega\) has finite measure, to \(\frac{1}{2\pi i}f^{\prime}(x)\) (since \(f\in C^{\infty}_{c}(\Omega)\)). Then (6.12) follows and therefore \(A\) is a self-adjoint extension of \(\mathsf{D}|_{C^{\infty}_{c}(\Omega)}\). With Theorem 4.2, the eigenvectors for \(A\) are of the form \(f=(\sum_{i=1}^{n}c_{i}\chi_{J_{i}})\,e_{\lambda}\), \(Af=\lambda f\). Then \(U(t)f=e^{2\pi it\lambda}f\). Since \(U(t)\) is a unitary group of local translations, for a.e. \(x\in\Omega\cap(\Omega-t)\), we have \[e^{2\pi it\lambda}\left(\sum_{i=1}^{n}c_{i}\chi_{J_{i}}(x)\right)e_{\lambda}(x)=(U(t)f)(x)=f(x+t)=\left(\sum_{i=1}^{n}c_{i}\chi_{J_{i}}(x+t)\right)e_{\lambda}(x+t).\] Fix \(i\neq k\) in \(\{1,\ldots,n\}\) and choose \(t\) such that \(J_{i}\cap(J_{k}-t)\neq\emptyset\), and then pick \(x\in J_{i}\cap(J_{k}-t)\) such that the previous relation holds. Then, we get \[e^{2\pi it\lambda}c_{i}e^{2\pi i\lambda x}=c_{k}e^{2\pi i\lambda(x+t)}.\] This means that \(c_{i}=c_{k}\), and thus \(f=c_{1}e_{\lambda}\).
Therefore, all the eigenvectors of \(A\) are multiples of \(e_{\lambda}\), and, by Theorem 6.5, it follows that \(\Omega\) is spectral. ### Concluding remarks As noted in the body of our paper, our present focus for the Fuglede conjecture is based on our particular choices of notions from geometry, harmonic analysis, and spectral theory. Indeed, we have made these definite choices. Naturally, there are others, and readers will be able to review such alternative approaches in the literature, each one serving its purpose. As a guide to the relevant papers, we conclude here with the following list of citations [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23]. ### Statements The paper does not generate data. The authors do not have any conflicts of interest.
2308.05609
LASIGE and UNICAGE solution to the NASA LitCoin NLP Competition
Biomedical Natural Language Processing (NLP) tends to become cumbersome for most researchers, frequently due to the amount and heterogeneity of text to be processed. To address this challenge, the industry is continuously developing highly efficient tools and creating more flexible engineering solutions. This work presents the integration between industry data engineering solutions for efficient data processing and academic systems developed for Named Entity Recognition (LasigeUnicage_NER) and Relation Extraction (BiOnt). Our design reflects an integration of those components with external knowledge in the form of additional training data from other datasets and biomedical ontologies. We used this pipeline in the 2022 LitCoin NLP Challenge, where our team LasigeUnicage was awarded the 7th Prize out of approximately 200 participating teams, reflecting a successful collaboration between academia (LASIGE) and industry (Unicage). The software supporting this work is available at https://github.com/lasigeBioTM/Litcoin-Lasige_Unicage.
Pedro Ruas, Diana F. Sousa, André Neves, Carlos Cruz, Francisco M. Couto
2023-08-10T14:41:17Z
http://arxiv.org/abs/2308.05609v1
# LASIGE and UNICAGE solution to the NASA LitCoin NLP Competition ###### Abstract Biomedical Natural Language Processing (NLP) tends to become cumbersome for most researchers, frequently due to the amount and heterogeneity of text to be processed. To address this challenge, the industry is continuously developing highly efficient tools and creating more flexible engineering solutions. This work presents the integration between industry data engineering solutions for efficient data processing and academic systems developed for Named Entity Recognition (LasigeUnicage_NER) and Relation Extraction (BiOnt). Our design reflects an integration of those components with external knowledge in the form of additional training data from other datasets and biomedical ontologies. We used this pipeline in the 2022 LitCoin NLP Challenge, where our team LasigeUnicage was awarded the 7th Prize out of approximately 200 participating teams, reflecting a successful collaboration between academia (LASIGE) and industry (Unicage). The software supporting this work is available at [https://github.com/lasigeBioTM/Litcoin-Lasige_Unicage](https://github.com/lasigeBioTM/Litcoin-Lasige_Unicage). Data Processing · Named-Entity Recognition · Relation Extraction · Knowledge Graphs ## 1 Introduction Biomedical data is normally presented in complex, large, and diverse formats. Whether in free-text form or highly specialized knowledge graphs, the data needs to be processed before one can integrate it into predictive pipelines or derive new research hypotheses to target. The data to be processed frequently falls in more than one format and comes in large volumes, which requires efficient data processing [1]. Due to the elevated data needs in industry, efficient and flexible data engineering solutions must be developed to accommodate different formats and volumes. For that, Unicage1 offers a set of commands that allow the user to build efficient programs that can be combined in a modular way to create robust, yet flexible, big data processing pipelines. Unicage Europe is a data engineering startup company with a focus on big data processing through the usage of a shell scripting development methodology alongside a set of command-line tools. These commands are format-agnostic, meaning they can work with any type of text data. The tools are written in the C programming language and have been geared towards performance, leveraging the OS's memory and resource management capabilities to deliver high-speed processing. The variety of commands is also able to cover gaps that are present in the toolbox of built-in OS utilities. When compared with other big data processing solutions, Unicage tools and methodology fare reasonably well, in some cases even surpassing them in terms of speed and efficiency [2, 3]. LASIGE also has vast experience in using shell scripting to perform data and text processing for health and life sciences [4]. Lately, LASIGE's academic NLP research has focused on two main tasks, Named-Entity Recognition (NER) and Relation Extraction (RE), by developing systems such as BiOnt [5]. These tasks correspond to the two phases of the 2022 LitCoin NLP Challenge2, in which we participated with our team, LasigeUnicage, creating a successful industry-academia collaboration. Our solution allied the data processing efficiency of Unicage with LASIGE's expertise in NLP.
Footnote 2: [https://ncats.nih.gov/funding/challenges/litcoin](https://ncats.nih.gov/funding/challenges/litcoin) The 2022 LitCoin NLP Challenge was part of the NASA Tournament Lab, hosted by the National Center for Advancing Translational Sciences (NCATS) and the National Library of Medicine (NLM). The competition aimed to create a data-driven technological solution that leverages the vast volumes of biomedical publications published daily to advance the biomedical field by increasing discoverability and formulating new research hypotheses. Specifically, the goal was to extract scientific concepts from scientific articles (Part 1), connect them by generating knowledge assertions, and label them as novel findings or background information (Part 2). Our design to target the 2022 LitCoin NLP Challenge relied on two steps: a NER pipeline developed explicitly for the task (Part 1) and the use of the BiOnt system [5] for RE (Part 2). In Part 1, we used the Unicage commands to build a pipeline to gather all the datasets by category and then convert them to the BIO/IOB ("inside-outside-beginning") format required for the NER step. Then, we ensembled six models based on PubMedBERT [6] to recognize the six different types of entities (DiseaseOrPhenotypicFeature, ChemicalEntity, OrganismTaxon, GeneOrGeneProduct, SequenceVariant, CellLine). For Part 2 of the challenge, we used Unicage commands to preprocess the input data and the BiOnt system to perform RE. This system relies not only on the training data itself but can also integrate external knowledge in the form of biomedical ontologies such as ChEBI [7] and GO [8] to further improve the RE process. Finally, we resorted to Unicage commands to post-process the output and to identify whether the relations were considered novel. Our main contributions, described in this paper, are the following: * An efficient biomedical NLP pipeline based on industry data processing tools and academically developed systems for NER and RE. * The application of external data in the biomedical NLP pipeline in the NER and RE stages. * The integration of LASIGE's NER and RE systems with Unicage industry data processing solutions. ## 2 Part 1: Named-Entity Recognition In Part 1, given an abstract, the goal was to recognize all biomedical entities of the types DiseaseOrPhenotypicFeature, ChemicalEntity, OrganismTaxon, GeneOrGeneProduct, SequenceVariant, or CellLine. For example, given the sentence _Late-onset metachromatic leukodystrophy: molecular pathology in two siblings._, the goal is to identify the entity _metachromatic leukodystrophy_, a DiseaseOrPhenotypicFeature. We started by preprocessing the training datasets containing documents from several Named-Entity Recognition (NER) corpora. Then, we ensembled six trained models (PubMedBERT + linear layer for token classification) to recognize the six different types of entities. ### Data Processing For data processing, we first converted the corpora to the BIO format, merged the different corpora into a single file for each entity type, and generated the final training datasets for each entity type. Most data was pre-processed using Unicage commands and Shell Scripting. The majority of Unicage commands used were data manipulation commands, such as _self_ and _delf_, which are two commands that allow easy manipulation of the data fields on each record of the files, independent of their format, or commands that present an optimization over OS built-in tools, such as _uawk_, which is an optimized version of GNU awk [9].
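Both the external corpora and the competition data end up in the same token-level representation, so the core of the conversion step is the BIO tagging scheme itself. The following minimal Python sketch illustrates that scheme on a single sentence; it assumes whitespace tokenization and character-offset entity spans, and it is a stand-in for, not a reproduction of, the Unicage/bconv pipeline described in this section.

```python
# Illustrative BIO tagging for one sentence, assuming whitespace tokenization
# and character-offset entity spans (start, end, type). A real pipeline would
# use a proper tokenizer that separates punctuation from words.

def to_bio(text: str, entities: list[tuple[int, int, str]]) -> list[tuple[str, str]]:
    tagged = []
    offset = 0
    for token in text.split():
        start = text.index(token, offset)  # character offset of this token
        end = start + len(token)
        offset = end
        tag = "O"
        for (e_start, e_end, e_type) in entities:
            if start < e_end and end > e_start:  # token overlaps the span
                # The token containing the span's start gets B-, the rest I-.
                tag = ("B-" if start <= e_start else "I-") + e_type
                break
        tagged.append((token, tag))
    return tagged

sentence = "Late-onset metachromatic leukodystrophy: molecular pathology in two siblings."
# Character span of "metachromatic leukodystrophy" (hand-computed for this example).
spans = [(11, 39, "DiseaseOrPhenotypicFeature")]
for token, tag in to_bio(sentence, spans):
    print(f"{token}\t{tag}")
```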
To complement this pipeline, specific Python libraries, such as bconv3 and standoff2conll4, were also used to convert the datasets to specific file types, so that their manipulation by the Unicage tools could be facilitated. Footnote 3: [https://pypi.org/project/bconv/](https://pypi.org/project/bconv/) Footnote 4: [https://github.com/spyysalo/standoff2conll](https://github.com/spyysalo/standoff2conll) The datasets used per entity type are the following: * DiseaseOrPhenotypicFeature: BC5CDR [10], PGxCorpus [11], NCBI Disease [12], Disease Names and Adverse Effects [13], MedMentions [14], and PHAEDRA [15]. * ChemicalEntity: Corpora for Chemical Entity Recognition [16], CRAFT [17], BC5CDR [10], CHR [18], and PHAEDRA [15]. * OrganismTaxon: LINNAEUS [19], CRAFT [17], Species-800 [20], and Cell Finder [21]. * GeneOrGeneProduct: BC2GM [22], JNLPBA [23], CRAFT [17], PGxCorpus [11], FSU_PRGE [24], and Cell Finder [21]. * CellLine: JNLPBA [23], GELLUS [25], CLL5, and Cell Finder [21]. * SequenceVariant: tmVar [26], PGxCorpus [11], and SNPPhenA [27]. Footnote 5: [https://turkunlp.org/Cell-line-recognition/](https://turkunlp.org/Cell-line-recognition/) The integration of each training dataset and the competition dataset was done by assigning the relevant tags to the targeted entity. For example, in the training dataset for entities of the type DiseaseOrPhenotypicFeature, the tokens relative to entities in the competition dataset were tagged with \(B\) and \(I\), whereas tokens relative to entities of other types were assigned the tag \(O\). ### Model architecture Our approach used PubMedBERT embeddings (originally trained on PubMed articles) jointly with a linear classification layer to classify each token; the resulting model was fine-tuned on each training dataset. We defined a post-processing rule for entities of type CHEMICAL with a length of 1: we checked if the character corresponded to a letter, since chemical elements can be represented by a single letter (e.g., \(C\) represents _carbon_). For the remaining entity types, we excluded entities with a length of 1 from the output. ### Methodology For Part 1, our methodology consisted of the following: 1. Hyperparameter optimization: training epochs, learning rate, train batch size, test batch size. Small versions of the training sets and the competition training dataset (in BIO format) were used as test sets. 2. Fine-tuning: After finding the optimal number of training epochs and learning rates, we combined the competition training dataset with the rest of the training datasets and then trained the model (90% training, 10% validation). We trained a distinct model for each of the six competition entity types. 3. Prediction: Sequential application of each model to a given sentence of an abstract present in the competition's test set. ### Implementation * 1 Tesla M10 GPU * Training time (excluding hyperparameter optimization) was approximately 15 hours. * Prediction time was approximately 8 minutes. ## 3 Part 2: Relation Extraction For Part 2, given the abstract text and the identified entities from Part 1, the goal was to identify relations between the biomedical entities. The relation could be classified with a combination of a first label of Association, Positive Correlation, Negative Correlation, Bind, Cotreatment, Comparison, or Drug Interaction, and a second label of Novel or Not Novel. For example, given the sentence _Middle B3 serotonin nerves in rat medulla are involved in hypotensive effect of methyldopa_,
with the identified biomedical entities _serotonin_ (ChemicalEntity), _rat_ (OrganismTaxon), _hypotensive_ (DiseaseOrPhenotypicFeature), and _methyldopa_ (ChemicalEntity), the goal is to identify the relation between _serotonin_ and _hypotensive_ and classify it as a Positive Correlation that is Novel. As in Part 1, we started by preprocessing the datasets provided by the challenge organizers and the biomedical ontologies associated with the entities' identifiers. We assembled two BiOnt [5] models. We used the first model to identify the seven types of relations (Association, Positive Correlation, Negative Correlation, Bind, Cotreatment, Comparison, and Drug Interaction) and the second model to classify each relation further as Novel or Not Novel.

### Data Processing

Part of the initial data was pre-processed using Unicage commands and Shell Scripting. Afterwards, we linked the MeSH ontology6 to the entity types DiseaseOrPhenotypicFeature and ChemicalEntity, and the NCBITaxon ontology to the entity types OrganismTaxon and CellLine.

Footnote 6: [https://www.ncbi.nlm.nih.gov/mesh/](https://www.ncbi.nlm.nih.gov/mesh/)

The competition training set was tokenized using BiOnt, and each entity covered by the ontologies considered above was mapped within the hierarchy of the respective ontology. Finally, the output generated by the models was processed using Unicage commands and Shell Scripting. These scripts used a small set of rules to choose which No/Novel tag to keep for each relation, while at the same time generating the final files in the format required by the competition.

### Model architecture

Our approach used the BiOnt system, a biomedical RE system built on bidirectional LSTM networks. The BiOnt system incorporates Word2Vec word embeddings [28] and uses different combinations of input channels, including ontology embeddings, to maximize performance. We used the full provided abstract and considered the multiple relations mentioned within each abstract in order to have more training cases for the same type of relation. We are aware of the limitations of our approach, given that the BiOnt system architecture is no longer state-of-the-art for biomedical relation extraction. However, the system's unique approach to external knowledge injection allows us to include each representative ontology within the training pipeline, furthering the knowledge about each entity in a candidate relation.

### Methodology

For Part 2, our methodology consisted of the following:

1. Hyperparameter optimization: training epochs, learning rate, train batch size, test batch size, maximum text length, class weights. Small versions of the training sets and the competition training dataset (in BIO format) were used as test sets.
2. Fine-tuning: after finding the optimal number of training epochs and learning rates, we obtained two trained models, one to predict the different types of relations and the other to predict whether they were Novel.
3. Prediction: sequential application of each model to every abstract in the competition's dataset.

### Implementation

* 3 Tesla M10 GPUs
* Training time (excluding hyperparameter optimization) took approximately 10 hours.
* Prediction time took approximately 5 minutes.

## 4 Evaluation

For both parts of the challenge (Part 1 and Part 2), we followed the evaluation guidelines provided by the 2022 LitCoin NLP Challenge organizers.
The evaluation metric was the average of the Jaccard similarity calculated for each document: \[J(O,P)=\frac{|P\cap O|}{|P|+|O|-|P\cap O|} \tag{1}\] In Part 1, \(P\) corresponded to the set of predicted mentions and \(O\) to the set of correct mentions in a given abstract. A match between two mentions occurred when they had the same type and similar offsets. Our pipeline achieved a score of 0.8423 in this part (calculated from 50.0% of the test data), which corresponded to 30% of the final score. The highest-scoring team achieved 0.9067. In Part 2, \(P\) corresponded to the set of predicted relations and \(O\) to the set of correct relations in a given abstract. A relation is characterized by a pair of entities, its type (Association, Positive Correlation, Negative Correlation, Bind, Cotreatment, Comparison, and Drug Interaction), and its novelty (No, Novel). For each correct relation in a given abstract, the following intersection score with the predictions is calculated: \[intersection\_score=0.25\times A+0.5\times B+0.25\times C \tag{2}\] where \(A\), \(B\), and \(C\) can each have a value of 1 or 0. If a relation in \(P\) includes the same pair of entities present in the correct relation, \(A\) has a value of 1. If that relation is also of the same type, \(B\) has a value of 1. If the relation also has the same novelty, \(C\) has a value of 1. This means that the intersection score for a correct relation and the predictions is a value between 0 and 1. The intersection between \(O\) and \(P\) in a given abstract is calculated as the average of the intersection scores over all correct relations in \(O\). Our pipeline achieved a score of 0.2124 in this part (calculated from 50.0% of the test data), which corresponded to 70% of the final score. The highest-scoring team achieved 0.6279.

## 5 Conclusion

This work presented the pipeline developed by our team, Lasige-Unicage, to participate in the 2022 LitCoin NLP Challenge, highlighting a successful collaboration between academia (LASIGE) and industry (Unicage). Our biomedical NLP pipeline used data-engineering pre-processing tools and two systems to perform NER and RE that could incorporate external knowledge. The NER system was explicitly designed to tackle the challenge, whereas for RE we used the BiOnt system [5] with minimal modifications. We were awarded the 7th Prize ($5,000) in the LitCoin competition out of approximately 200 participating teams. In the future, the goal is to improve the NER module by expanding and refining the training datasets and exploring different classification layers. As for RE, we could link more external ontological data to boost performance. The Unicage commands offered advantages, namely their ease of use and versatility, which allowed us to convert and merge the different corpora files to the BIO/IBO format efficiently, along with post-processing the results of the Relation Extraction model. We intend to explore how we can further integrate Unicage approaches in NLP tasks and pipelines, with a particular focus on data-processing aspects.

## Acknowledgments

This work has been supported by FCT through the Deep Semantic Tagger (DeST) Project under Grant PTDC/CCIBIO/28685/2017 ([http://dest.rd.ciencias.ulisboa.pt/](http://dest.rd.ciencias.ulisboa.pt/)), in part by the LASIGE Research Unit under Grants UIDB/00408/2020 and UIDP/00408/2020, and in part by FCT and FSE through PhD Scholarship under Grant SFRH/BD/145221/2019 and PhD scholarship ref. 2020.05393.BD.
2301.09538
A uvbyCaHbeta CCD Analysis of the Open Cluster Standard, M67, and its Relation to NGC 752
Precision CCD uvbyCaHbeta photometry is presented of the old cluster, M67, covering one square degree with typical internal precision at the 0.005-0.020 mag level to V~17. The photometry is calibrated using standards over a wide range in luminosity and temperature from NGC 752 and zeroed to the standard system via published photoelectric observations. Relative to NGC 752, differential offsets in reddening and metallicity are derived using astrometric members, supplemented by radial-velocity information. From single-star members, offsets in the sense (M67 - NGC 752) are Delta E(b-y) = -0.005 +/-0.001 (sem) mag from 327 F/G dwarfs and Delta [Fe/H] = 0.062 +/- 0.006 (sem) dex from the combined m1 and hk indices of 249 F dwarfs, leading to E(b-y) = 0.021 +/- 0.004 (sem), and [Fe/H] = +0.030 +/- 0.016 (sem) for M67, assuming [Fe/H]{Hyades} = +0.12. With probable binaries eliminated using c1,(b-y) indices, 83 members with relative parallax errors < 0.02 generate (m-M)_0 = 8.220 +/- 0.005 (sem) for NGC 752 and an isochronal age of 1.45 +/- 0.05 Gyr. Using the same parallax restriction for 312 stars, M67 has (m-M) = 9.77 +/- 0.02 (sem), leading to an age tied solely to the luminosity of the subgiant branch of 3.70 +/- 0.03 Gyr. The turnoff color spread implies +/- 0.1 Gyr, but the turnoff morphology defines a younger age/higher mass for the stars, consistent with recent binary analysis and broad-band photometry indicating possible missing physics in the isochrones. Anomalous stars positioned blueward of the turnoff are discussed.
Bruce A. Twarog, Barbara J. Anthony-Twarog, Constantine P. Deliyannis
2023-01-23T16:48:09Z
http://arxiv.org/abs/2301.09538v1
# A \(uvbyCa\)H\(\beta\) CCD Analysis of the Open Cluster Standard, M67, and its Relation to NGC 752

###### Abstract

Precision CCD \(uvbyCa\)H\(\beta\) photometry is presented of the old cluster, M67, covering one square degree with typical internal precision at the 0.005-0.020 mag level to \(V\)\(\sim\) 17. The photometry is calibrated using standards over a wide range in luminosity and temperature from NGC 752 and zeroed to the standard system via published photoelectric observations. Relative to NGC 752, differential offsets in reddening and metallicity are derived using astrometric members, supplemented by radial-velocity information. From single-star members, offsets in the sense (M67 - NGC 752) are \(\delta E(b-y)\) = -0.005 \(\pm\) 0.001 (sem) mag from 327 F/G dwarfs and \(\delta\)[Fe/H] = 0.062 \(\pm\) 0.006 (sem) dex from the combined \(m_{1}\) and \(hk\) indices of 249 F dwarfs, leading to \(E(b-y)\) = 0.021 \(\pm\) 0.004 (sem), and [Fe/H]\({}_{M67}\) = +0.030 \(\pm\) 0.016 (sem) assuming [Fe/H]\({}_{Hyades}\) = +0.12. With probable binaries eliminated using \(c_{1},(b-y)\) indices, 83 members with \((\pi/\sigma_{\pi})>\) 50 generate \((m-M)_{0}\) = 8.220 \(\pm\) 0.005 (sem) for NGC 752 and an isochronal age of 1.45 \(\pm\) 0.05 Gyr. Using the same parallax restriction for 312 stars, M67 has \((m-M)\) = 9.77 \(\pm\) 0.02 (sem), leading to an age tied solely to the luminosity of the subgiant branch of 3.70 \(\pm\) 0.03 Gyr. The turnoff color spread implies \(\pm\) 0.1 Gyr, but the turnoff morphology defines a younger age/higher mass for the stars, consistent with recent binary analysis and broad-band photometry indicating possible missing physics in the isochrones. Anomalous stars positioned blueward of the turnoff are discussed.

## 1 Introduction

Star clusters have long been extolled as critical testbeds of stellar evolution while simultaneously serving as well-defined, individual data points for probing the temporal, chemical, and spatial evolution of the Galaxy due to the unique distance, age, and chemical composition common to all stars within a cluster (see, e.g. Twarog, Ashman, & Anthony-Twarog (1997); Friel et al. (2002); Netopil et al. (2016); Donor et al. (2020)). While the claim of uniformity for the latter two parameters has been successfully challenged by globular clusters exhibiting multigenerational abundance trends (see, e.g. Gratton, Sneden, & Carretta (2004); Carretta et al. (2009); Milone et al. (2012); Piatti (2020)), open clusters still retain the mantle of single-generation parametric homogeneity, apart from abundance variations due to the internal evolution of specific elements like Li or CNO. In particular, in the case of M67, diffusion and rotational mixing have arisen as potential sources of significant multi-elemental variations with evolutionary phase (Souto et al., 2019; Boesgaard, Lum, & Deliyannis, 2020). Of the thousands of clusters now known and isolated as physical entities within the Galactic environment, thanks in large part to the expanding astrometric insight supplied by the ongoing _Gaia_ mission (Gaia Collaboration et al., 2016, 2018, 2021, 2022), few open clusters, with the possible exception of the very nearby Hyades, have received as much attention as M67, with approximately 2200 published references, over one-third of these within the last decade.
Its high profile was driven initially by: (a) a modest apparent distance modulus (\(\sim\)9.5-9.7) (Johnson & Sandage, 1955; Eggen, 1959; Sandage, 1962; Eggen & Sandage, 1964), reducing the excessive areal coverage necessary to compile a substantial cluster sample as required for nearby objects like the Hyades, NGC 752, and, more recently, Rup 147 (Curtis et al., 2013); (b) an "old" age (\(\sim\)5-6 Gyr) (Sandage & Eggen, 1969; VandenBerg, 1985), making it comparable to the sun while placing it with NGC 188 among the very few "old" disk clusters accessible for probing the early evolution of the disk; (c) a commonly derived and adopted metallicity less than the Hyades and potentially similar, if not identical, to the sun (Eggen & Sandage, 1964); and (d) uniform and low reddening across the face of the populous cluster (see Taylor (1978) and references therein). Over the decades, significant and coupled changes have altered a number of the key cluster parameters, subtly impacting the contextual role of the cluster within stellar and galactic evolution. Compared to the initial cluster studies of 50 years ago, the current consensus, based upon a variety of analyses, places the cluster farther away, with an age younger than that of the sun (see, e.g. Sandquist et al. (2021) (SA)). There is still no evidence for significant variation in reddening across the face of the cluster, but the absolute value of the reddening from some techniques exhibits offsets that measurably affect the cluster age estimate to an annoying degree in an era of supposedly precise photometry, astrometry, and stellar isochrones (see, e.g. Taylor (2007) and references therein). Of primary importance is the metallicity. The initial estimates for M67 from broad-band photometry and modest spectroscopy tagged the cluster as less metal-rich than the Hyades and potentially solar in metallicity (see Twarog (1978) and references therein), but the scatter in values from all techniques ranged from less than one-half solar (Cohen, 1980) to approximately twice the value of the Hyades (Spinrad & Taylor, 1969; Spinrad et al., 1970; Gottlieb & Bell, 1972). Fortunately, improvements in the quality and quantity of the photometric and spectroscopic analyses have reduced, but not eliminated, the range, with a current spread in [Fe/H] from just below solar to almost Hyades metallicity (Reddy, Giridhar, & Lambert, 2015; Ray et al., 2022). The aforementioned phase-dependent diffusion effects aside, potential sources of the metallicity offsets at the level of 0.05 to 0.1 dex among the various techniques for obtaining a mean cluster metallicity are numerous. To name just a few: for traditional spectroscopy, the zero-point of the scale can be set by reference observations to one star, typically the Sun for dwarfs or a bright giant like Arcturus for evolved stars. Such approaches work well for stars with parameters (\(T_{\rm eff}\) and log \(g\)) similar to the reference star but can become less reliable with increasing parametric distance from the standard. More recent approaches often make use of multiple spectroscopic standards or an array of synthetic spectra covering a range in [Fe/H], log \(g\), and \(T_{\rm eff}\), but the final values are only as reliable as the consistency of the adopted standard values or the accuracy of the atmospheric models.
Different schemes for deriving \(T_{\rm eff}\), dependent for some techniques upon the adopted reddening and/or the microturbulent velocity, can generate small alterations in the metallicity zero-point, star-to-star scatter aside. When compounded with differences in spectral resolution, S/N, bandpass selection, and line lists, it is perhaps surprising that study-to-study comparisons of the same cluster don't show more variation. For photometry, the observational approach is simpler and the transformation from photometric indices to stellar parameters is straightforward, using relations defined by a large body of precision photometry of stars with independently derived fundamental parameters. Alternative relations linking observation to physical parameters may exist, but differences between these can be readily sorted to place any set of photometric stellar parameters on a common scale. Thus, the challenge for photometric abundance derivation in clusters is that of getting enough precision for individual stars in a large enough sample to reduce the cluster standard-error-of-the-mean (sem) to the desired precision. The potential sources affecting the zero-point accuracy of the metallicity determination bear some similarity to those of spectroscopy. Photometric indices are often designed to work well over a modest range in \(T_{\rm eff}\), log \(g\), and [Fe/H]; what supplies reliable abundances for giants may fail completely for dwarfs and vice versa. Derivation of photometric stellar parameters often requires correction for reddening, which may not be obtainable from the photometry itself. Invariably, the greatest uncertainty arises from the transformation of the photometry to the standard system; differences in photometric zero-points at the level of \(\pm 0.01\) mag for key indices, depending upon the photometric system, can generate metallicity offsets at the level of \(\pm 0.1\) dex or less. In the simplest terms, the purpose of the current investigation is to present precision photometry on the \(uvbyCa\)H\(\beta\) system of a one-degree-square field which includes M67. The discussion follows the approach laid out in a similar survey of the nearby younger cluster, NGC 752 (Twarog et al., 2015) (hereinafter Paper I), with one key difference. While high quality photoelectric photometry will be used to set the zero-points of the M67 indices, the slopes of the calibration curves for all indices, giants and dwarfs independently when required, will be defined using the extensive CCD photometry of NGC 752 as presented in Paper I. This key cluster was observed during every run with M67 and supplies calibration standards for dwarfs and giants alike numbering in the hundreds, bypassing the lack of extensive standard-star fields for intermediate-band photometry, fields which are common for traditional broad-band filters (see e.g. Landolt (1992); Stetson (2000)). The ultimate goals of this approach are three-fold: (1) with the photometry of both clusters on an internally coupled system across all temperatures and luminosities, highly reliable differential measures of the key cluster parameters of reddening, metallicity, distance, and age are attainable; (2) M67 has been used as a photometric link for precision observations of 5 additional clusters included in the ongoing survey of cluster Li abundances, all but one of which have no previous internal intermediate-band photometry.
These data will allow the same differential approach used with NGC 752 to be applied to the less well-studied clusters covering a significant range in age and metallicity; (3) the fact that so much detailed analysis is available for the rich population of stars in M67 provides a testing ground for ways in which the multiple combinations of indices for either dwarfs or giants can be used to identify and isolate subclasses of stars of evolutionary interest for application to stellar systems where the dataset may be restricted to photometry alone. The outline of the paper is as follows: Section 2 discusses the collection and processing of the CCD observations of M67, while Section 3 details the procedure for compiling the internal photoelectric standards used to define the CCD zero-points, as well as the transformation of the instrumental data to the standard system using NGC 752 to generate the transformation slopes at all temperatures and luminosities. Section 4 uses _Gaia_ astrometry and ground-based radial velocities (if available) to isolate probable single-star members and rederive the cluster reddening and metallicity using the precision photometry of Paper I. Section 5 applies the same membership approach to M67 and uses this select sample to derive the reddening and metallicity relative to NGC 752. Section 6 details the derivation of both distance and age for both clusters, identifying and exploring the discrepancies between theory and observation, while Section 7 summarizes our conclusions.

## 2 Observations and Data Reduction

Intermediate and narrow-band images of M67 were obtained using the WIYN 0.9-m telescope during four observing runs between Nov. 2015 and Feb. 2017. During each run, frames were also obtained in multiple fields of NGC 752, observed as the primary source of standards for the extended Stromgren and H\(\beta\) systems. Because of the reduction approach outlined below, frames of both clusters were collected on both photometric and nonphotometric nights. For all runs the telescope was equipped with the Half-Degree-Imager (HDI), a 4K \(\times\) 4K chip with 0.43\({}^{\prime\prime}\) pixels covering a 29\({}^{\prime}\times\) 29\({}^{\prime}\) field. The seven filters were from the extended Stromgren set acquired for specific use with the HDI. Bias frames and dome flats were collected for every filter every night, while sky flats for all filters except \(b\) and \(y\) were obtained at twilight every night that sky conditions allowed. Sky flats were always used in the frame processing except when the integrated counts on the sky set of a given filter proved inadequate and the dome flats were adopted instead. To optimize the cluster frame collection, extinction fields were selected from the cluster fields themselves and monitored 3 or more times each night over a range in airmass, though for the current discussion, extinction corrections remain irrelevant. To expand areal coverage, in contrast with the approach for NGC 752 (Paper I), M67 was divided into four barely overlapping fields, ultimately linked through a central field overlapping with a quarter of each of the outer four fields. Given the 29\({}^{\prime}\times\) 29\({}^{\prime}\) field of the HDI chip and the variable positioning of each field, the final photometry covers approximately one square degree. Exposure times in all filters were staggered to allow reliable photometry from \(V\)\(\sim\) 8.5 in all filters to varying depths in each.
The total M67 frameset amounted to \(\sim\)450 frames over 7 filters in 5 fields. A description of our procedures for processing and merging the photometry from multiple frames and fields is given in Paper I and will not be repeated. Suffice it to say that, unlike the earlier work with NGC 752, all frames for this investigation were collected with the same CCD chip and filter set, making the photometric merger both simpler and more reliable. Once transferred to a common coordinate system, all frames in a given filter were adjusted in magnitude using a derived offset to one frame of that filter adopted as the \(standard\) for the instrumental system. Final instrumental magnitudes for each star were derived from the weighted averages of all frames for a given filter, with indices constructed from these averages. The final sem for each index is based upon the photometric scatter calculated from each magnitude used to construct the index. The sem for each star in \(V\) and in all five indices is plotted as a function of \(V\) in Figure 1. The sem is only calculated for stars with 3 or more observations in each filter used to construct an index. The tick marks on the vertical scale for all panels in Figure 1 define a change of 0.02 mag; the total range for the sem is 0.10 mag for \(V\) and 0.15 mag for the five photometric indices. As expected, due to the inclusion of a filter weighted toward the ultraviolet, the \(hk\) and \(c_{1}\) indices have sem precision limits approximately 0.5 and 1.0 magnitude brighter than \(m_{1}\), respectively.

## 3 Transforming the CCD Photometry

A critical focus of current and future cluster comparisons is the need to ensure that all cluster photometry is on a common system, supplying confidence that differences among the indices between clusters are signatures of true differences in the relative cluster parameters rather than byproducts of calibration offsets. Fortunately, the two clusters of interest have been exhaustively investigated in a series of papers designed specifically to detect and minimize any zero-point differences in broad-band VRI (Joner et al., 2008; Taylor, Joner, & Jeffery, 2008; Taylor & Joner, 2011) and \(uvby\)H\(\beta\) (Joner & Taylor, 1995, 1997) photometry. As with NGC 752 (Paper I), we will first compile a set of internal M67 \(uvbyCa\)H\(\beta\) photoelectric standards tied to the established systems. Unlike NGC 752, however, these data will only be used to fix the zero-points for the transformations between the CCD instrumental system and the standard system once the transformation slopes have been defined using the NGC 752 CCD observations. This approach is crucial given that the internal \(uvby\)H\(\beta\) standards for M67 include no red giants or red dwarfs, and simple linear extrapolation of the relations from bluer stars, particularly for CCD photometry, does not work (Paper I).

### Internal M67 Standards: \(V\), \(uvby\)H\(\beta\)

The \(y\) magnitudes of the Stromgren system are unique in that, despite the narrower bandwidth of the filter compared to traditional \(V\) filters, they can be transferred directly to the Cousins \(V\) system, usually with only a modest linear color-dependent correction. The obvious advantage is that, unlike the multifilter Stromgren indices, reliable broad-band photometry can be adopted to calibrate the \(y\) magnitudes without the need to call solely upon \(V\) defined through intermediate-band observations, photoelectric or otherwise.
Figure 1: Standard-errors-of-the-mean (sem) for each index and \(V\) as a function of \(V\) mag. The tick marks on the vertical scale for all panels define a change in the sem equal to 0.02 mag.

There have been many broad-band surveys of M67 over the years, beginning with the photoelectric data of Eggen & Sandage (1964) through the photographic work of Racine (1971), to the CCD studies of Montgomery, Marshall, & Janes (1993) and Sandquist (2004), among others. Taylor, Joner, & Jeffery (2008) supply a comprehensive discussion of multiple sources of \(V\) photometry for M67 on the Cousins system, producing two catalogs for the cluster, one based upon corrected data of Sandquist (2004) alone (210 stars) and a second composed of a combination of recalibrated sets of both photoelectric and CCD data (241 stars). Because it covers a wider range in color, though with a brighter magnitude limit, we will adopt the latter catalog to transfer our \(y\) mags to the \(V\) system, but use the former as a secondary check. The largest set of \(uvby\)H\(\beta\) photoelectric observations of M67 is that of Nissen, Twarog, & Crawford (1987) (hereinafter NTC), which includes a mixture of indices for 79 stars, 33 of which have H\(\beta\). Given the size and precision of the samples, our first step will be a merger with the data of Joner & Taylor (1995, 1997), providing a reliable data set for testing and redefining the photometry from a number of smaller, less precise compilations, usually heavily weighted to the brighter blue stragglers that populate the cluster field. Comparisons will only be presented for data sets where the overlap between the sample and this photometric set of indices, referred to as the core set, is statistically significant enough to define a reliable offset calculation. In the following discussion, quoted uncertainties refer to the standard deviations among the residuals, unless otherwise noted. The combined M67 H\(\beta\) set of 22 stars from Taylor (1978) and Joner & Taylor (1997) has a 19-star overlap with NTC. Unweighted residuals, in the sense (JT-NTC), generate a mean offset of -0.005 \(\pm\) 0.014. If one star with a significantly larger than average residual is dropped, the offset shifts slightly to -0.007 \(\pm\) 0.011. Applying an offset of -0.006 mag to NTC, the H\(\beta\) photometry for stars in common between the two samples was averaged using a weighting by the inverse square of their individual errors, leading to a combined core H\(\beta\) sample of 36 stars. The next H\(\beta\) comparison is with the sample of 9 blue stragglers in Eggen (1981), 8 of which overlap with our newly defined core. The derived offset, in the sense (CORE - EG), of -0.015 \(\pm\) 0.008 mag has been applied to the Eggen (1981) data. The final attempted H\(\beta\) match is to 21 stars of Strom, Strom, & Bregman (1971), 16 of which overlap with the core. The mean residual in H\(\beta\), in the sense (CORE - SSB), is +0.006 \(\pm\) 0.027 mag. Despite a larger overlap and the expected improvement in the precision of the core sample, the rms scatter among the residuals is the same as that derived by NTC from their sample alone, implying that the primary source of the noise lies with the Strom, Strom, & Bregman (1971) data. Because of the large photometric uncertainty implicit in their data, it was decided to exclude the Strom, Strom, & Bregman (1971) data set from the final H\(\beta\) merger.
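The offset-and-merge arithmetic used throughout this subsection (a zero-point offset defined by the mean residual of stars in common, followed by an inverse-variance weighted average of the recalibrated values) can be summarized in a short sketch. This is an illustration only, with invented numbers, not the actual reduction code.

```python
# Illustrative sketch of the offset-and-merge arithmetic described above;
# the input values below are invented for demonstration.
import numpy as np

def zero_point_offset(core, other):
    """Mean residual, in the sense (core - other), for stars in common."""
    return np.mean(np.asarray(core) - np.asarray(other))

def weighted_mean(values, errors):
    """Average weighted by the inverse squares of the individual errors."""
    w = 1.0 / np.asarray(errors) ** 2
    mean = np.sum(w * np.asarray(values)) / np.sum(w)
    return mean, np.sqrt(1.0 / np.sum(w))  # mean and its formal error

# Offset from stars in common between two H-beta data sets:
offset = zero_point_offset([2.630, 2.654], [2.636, 2.660])   # -0.006 mag
# Merge two recalibrated measures of one star, weighting by precision:
hbeta, err = weighted_mean([2.630, 2.624 + offset], [0.007, 0.011])
```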
The three reliably recalibrated data sets discussed above were averaged by weighting the individual H\(\beta\) values by the inverse squares of their photometric uncertainties, producing a final sample of 37 stars. For \(b-y\), \(m_{1}\), and \(c_{1}\), the core system is again defined by the merger of the NTC data with that of Joner & Taylor (1995, 1997). Before discussing the offsets, it should be noted that for 5 blue stragglers, Joner & Taylor (1997) supply two sets of data for some indices, the first set obtained not later than 1987 and the second in 1996. The implication is that these stars exhibit statistically significant evidence for potential long-term variability and that the individual indices should be treated as intrinsically different measures from two widely separated time frames. For the 9 pairs of duplicate indices affecting 4 stars, we have adopted a simple average of the two values as the correct index for each star. For \(b-y\), \(m_{1}\), and \(c_{1}\), the mean offsets from 15 stars in common, in the sense (JT - NTC), are -0.005 \(\pm\) 0.009 mag, +0.006 \(\pm\) 0.009 mag, and +0.002 \(\pm\) 0.019 mag, respectively. The offsets for \(b-y\) and \(c_{1}\) are in excellent agreement with the analysis of Joner & Taylor (1997). However, the offset derived for \(m_{1}\) by Joner & Taylor (1997) is +0.0006 mag, significantly smaller than found here. As opposed to taking a simple average, we have attempted various combinations of the indices using different assumptions for which set of paired data to use for the blue stragglers, including total exclusion of the paired sets, and are unable to obtain an offset value as small as 0.001 mag; the full range of offsets goes from 0.004 mag to 0.009 mag. As with the H\(\beta\) comparison, we will retain our calculated value as the appropriate offset. (For a detailed discussion of the many issues associated with the zero-point of the \(m_{1}\) photoelectric system, the reader is referred to the Appendix of NTC.) With the NTC photometry adjusted and merged with that of Joner & Taylor (1995, 1997), the first comparison is with the \(uvby\) data from Eggen (1981). Unlike the H\(\beta\) data, the \(m_{1}\) and \(c_{1}\) indices are assumed to be on a slightly different system compared to the core data due to differences in the \(v\) filter adopted by Eggen (1981), thus leading to the identification of the indices as \(M_{1}\) and \(C_{1}\). For the 8 stars overlapping with the core, all blue stragglers, the mean residuals, in the sense (CORE - EG), are +0.004 \(\pm\) 0.008 mag, +0.003 \(\pm\) 0.004 mag, and -0.032 \(\pm\) 0.023 mag for \((b-y)\), \(m_{1}\), and \(c_{1}\), respectively. The next data set is that of Bond & Perry (1971) for 7 blue stragglers, all of which overlap with the core sample. The mean offsets are -0.007 \(\pm\) 0.013 mag, +0.012 \(\pm\) 0.020 mag, and +0.010 \(\pm\) 0.016 mag for \((b-y)\), \(m_{1}\), and \(c_{1}\), respectively, in excellent agreement with the comparison in Joner & Taylor (1997). Part of the scatter is due to the photometry being listed to only two decimal places. Despite the apparent failure of the H\(\beta\) comparison, a check was repeated for the \(uvby\) set of Strom, Strom, & Bregman (1971) using the 16 stars that overlap with the core. Previous attempts to transfer these data to a standard system have presented challenges, illustrated by the need to either break the sample into two distinct color ranges (Joner & Taylor, 1997) or include a color-dependent term within the offset (NTC).
With the larger database provided by the core sample, it became apparent that both approaches were partially correct. For all three indices, the residuals show a well-defined pattern. For stars with \((b-y)\) below 0.31, there is a constant offset of -0.012 \(\pm\) 0.009 mag, +0.015 \(\pm\) 0.008 mag, and +0.038 \(\pm\) 0.020 mag, respectively, for \((b-y)\), \(m_{1}\), and \(c_{1}\). For the stars redder than \((b-y)\) = 0.31, the offset includes a color term in \((b-y)\) with a slope of -0.29, 0.48, and -0.74 for \((b-y)\), \(m_{1}\), and \(c_{1}\), respectively. Application of these offset transformations produces photometry on the core system with a residual scatter of \(\pm\)0.009, \(\pm\)0.012, and \(\pm\)0.020 mag for \((b-y)\), \(m_{1}\), and \(c_{1}\), respectively. As with H\(\beta\), the core-recalibrated \((b-y)\), \(m_{1}\), and \(c_{1}\) indices from the five sources above were merged using the inverse square of the photometric uncertainties as weights, producing the M67 internal photoelectric standards compiled in Table 1. Identification numbers as defined in WEBDA are included, as well as the coordinates on the _Gaia_ DR3 system. The other columns are self-explanatory.

\begin{table}
\begin{tabular}{r r r r r r r r r r r}
\hline \hline
WEBDA ID & \(\alpha(2000)\) & \(\delta(2000)\) & \(b-y\) & sem & \(m_{1}\) & sem & \(c_{1}\) & sem & H\(\beta\) & sem \\
\hline
106 & 132.82224 & 11.78351 & 0.341 & 0.008 & 0.194 & 0.010 & 0.394 & 0.011 & & \\
111 & 132.82492 & 11.76505 & & & & & & & 2.630 & 0.007 \\
112 & 132.82536 & 11.71520 & 0.372 & 0.006 & 0.187 & 0.007 & 0.401 & 0.008 & & \\
\hline
\end{tabular}
Note. – (This table is available in its entirety in machine-readable and Virtual Observatory (VO) forms.)
\end{table}
Table 1: Merged Photoelectric Secondary Standards in M67

For \(hk\), photoelectric photometry in M67 on the \(ybCa\) system was obtained as part of the compilation of the original catalog of stars defining the fundamental system (Twarog & Anthony-Twarog, 1995), though not included in the published catalog. Presented in Table 2 are the \((b-y)\), \(hk\) data for 19 stars in the field of M67, ranging from blue stragglers through red giants. Identification of each star is via WEBDA number and (RA, Dec) coordinates on the _Gaia_ DR3 system.

Table 2: Internal Photoelectric \(hk\) Secondary Standards in M67 (columns: WEBDA ID, \(\alpha(2000)\), \(\delta(2000)\), \(b-y\), sd, \(hk\), sd, \(n\))

### Defining the Calibration Relations

To define the slopes and/or color terms for the transformation of the instrumental extended Stromgren data to the standard system, use was made of the NGC 752 frames obtained and compiled during the same observing runs as M67. Because of the extensive set of frames covering all the fields of Paper I, almost 1770 stars brighter than \(V=18\) were cross-matched with the final indices of Paper I. To optimize the precision of the calibration, calibration stars were retained only if there were 3 or more observations in each filter used to construct an index for both the standard and the instrumental data. To improve the calibration definition, cuts were also made based upon the calculated sem for both the instrumental and standard photometry. These limits will be detailed within the discussion of the individual indices. For all indices and the \(V\) magnitude for the standard stars in NGC 752, a general calibration equation of the form \(INDEX_{stand}=a\times INDEX_{instr}+b\times(b-y)_{instr}+c\) was adopted.
Following the procedure outlined in Paper I, for \(V\) and \(hk\), stars of all luminosity classes were treated as a single group. For the other indices the sample was separated into three groups: cooler dwarfs, blue dwarfs, and red giants, as defined in Table 4 of Paper I. Calibration relations were tested both individually and in combination for the three categories and the optimal fit adopted for each index. The resulting calibration slopes, \(a\) and \(b\), along with the number of stars used in each calibration, are listed in Table 3. We will discuss the definition of the zero-points, \(c\), in the next subsection. As noted earlier, for \(V\) we make primary use of the Taylor, Joner, & Jeffery (2008) compilation from a mixture of photoelectric and CCD data, bypassing the need to separately define the slope and zero-point of the photometric system. Having transferred our (X,Y) coordinates to the (RA, Dec) system defined by DR3, we cross-matched our photometry with the catalog of 241 stars from Taylor, Joner, & Jeffery (2008), identifying 239 stars in common. Of these, 6 did not have \(V\) mags, only \(RI\), and were dropped. Using the remaining 233 stars and assuming \(a\) = 1.00 in the calibration relation, an initial linear fit was made to the data to define the color slope and zero-point. All stars with residuals greater than 0.1 mag relative to the mean relation were removed and the process repeated. With a revised estimated scatter about the mean relation (\(\sigma\)) calculated, all stars with residuals greater than 3.5\(\sigma\) were eliminated and the linear fit rederived. The process rapidly converged to a stable \(\sigma=\pm 0.010\), and all stars with residuals greater than 0.035 mag were eliminated, leaving a net of 217 standards and \(b_{V}=0.070\pm 0.006\) (sem) and \(c_{V}=1.391\pm 0.002\) (sem). Of the 16 stars eliminated due to their residuals, 10 were more than 0.05 mag removed from the mean relation. While the standard set used above was adopted because of its wide range in color, extending from extreme blue stragglers to cool red giants, it is also dominated by stars brighter than \(V=15\). As a simple check, we also derived a calibration curve using 207 stars from the modified photometry of Sandquist (2004), eliminating 7 stars due to larger than average residuals. As expected, the transformation relations from both catalogs are statistically indistinguishable, with the modified Sandquist (2004) comparison showing residuals with larger scatter (\(\pm 0.020\)) due to a range in \(V\) extending to 18.7 among the standards. For the \((b-y)\) calibration, in addition to eliminating stars with fewer than 3 observations in either filter for both the instrumental and standard systems, stars with an sem for \((b-y)>0.015\) mag in either data set were eliminated. This left 863 potential standards at all colors and luminosity classes. Optimal fits between the instrumental and standard system were produced by dividing the sample into two categories: blue dwarfs through giants (650 stars) and red dwarfs (222 stars).
Removal of 2 (1) anomalously deviant points for the blue dwarf/giant (red dwarf) data led to dispersions of the residuals around the mean relations (Table 3) amounting to \(\pm\) 0.011 (0.012) mag. Unlike the other indices, no distinction is made among the giants and dwarfs, red or blue, for defining the \(hk\) calibration curve. Limiting our sample to all stars with at least 3 observations each for \(b\), \(y\), and \(Ca\) for both standards and instrumental values and an sem limit for \(hk\) of 0.020 generates a sample of 607 stars. After a preliminary fit to the data, 10 stars with anomalously large residuals were removed, leading to the calibration slopes of Table 3 and a dispersion of the residuals about the mean relation of \(\pm 0.019\) mag.

\begin{table}
\begin{tabular}{r r r r r r r r r}
\hline \hline
Index & Class & \(a\) & \(b\) & \(c\) & \(N_{752}\) & \(RES_{752}\) & \(N_{M67}\) & \(RES_{M67}\) \\
\hline
\(V\) & All & 1.000 & 0.070 & 1.391 & & & 217 & 0.010 \\
\(hk\) & All & 1.161 & 0.000 & -2.027 & 597 & 0.019 & 19 & 0.020 \\
H\(\beta\) & RG/BD & 1.092 & 0.000 & 0.471 & 451 & 0.011 & 35 & 0.010 \\
H\(\beta\) & RD & 1.000 & 0.000 & 0.637 & 121 & 0.014 & & \\
\(b-y\) & RG/BD & 1.060 & 0.000 & 0.208 & 648 & 0.011 & 74 & 0.010 \\
\(b-y\) & RD & 0.900 & 0.000 & 0.248 & 221 & 0.012 & & \\
\(m_{1}\) & BD & 1.000 & 0.000 & -1.039 & 418 & 0.016 & 68 & 0.011 \\
\(m_{1}\) & RG & 0.695 & 0.000 & -0.651 & 157 & 0.018 & & \\
\(m_{1}\) & RD & 1.000 & 0.441 & -1.160 & 143 & 0.017 & & \\
\(c_{1}\) & RG & 1.000 & 0.312 & 0.340 & 98 & 0.024 & & \\
\(c_{1}\) & BD/RD & 1.058 & 0.000 & 0.403 & 442 & 0.027 & 65 & 0.016 \\
\hline
\end{tabular}
Note. – BD designates Blue Dwarf; RD Red Dwarf; RG Red Giant.
Note. – Form of calibration: \(INDEX_{stand}=a\times INDEX_{instr}+b\times(b-y)_{instr}+c\)
\end{table}
Table 3: Summary of Transformation Coefficients

As discussed in Paper I, breaking the calibration sample for H\(\beta\) into three distinct temperature/luminosity categories can be challenging given the modest range for the index across all temperatures (\(\sim\)2.45 to 2.95) coupled with the location of the boundary between cool dwarfs/giants and blue dwarfs near \((b-y)=0.45\). This has an approximate location in H\(\beta\) of 2.59. As illustrated in Paper I, the red giant H\(\beta\) range extends to only 2.55, while the cool dwarf boundary defines the lower limit of the index near 2.45. Thus, any defined linear transformation of the giants alone is readily dominated by the photometric scatter within the index, equivalent in size to the full range of the index itself. For the current calibration, the red giants and blue dwarfs were transformed as one group while the red dwarfs were treated separately. Eliminating all stars with instrumental and/or standard errors \(>\) 0.015 mag generated a sample of 464 blue dwarf/red giant stars. After a preliminary fit, removal of 13 stars with residuals larger than 0.03 mag led to a final calibration relation defined by 451 stars with a dispersion about the mean relation of \(\pm\)0.011 mag. For the red dwarfs, the difference in slope compared to the blue dwarf/red giant relation was immediately apparent. Applying the same internal error cut to the dwarfs resulted in a sample of 123 stars. A linear fit produced a slope statistically indistinguishable from 1.00. Removing 2 stars with large residuals and adopting a slope, \(a\), of 1.0, the scatter among the residuals is \(\pm\)0.014 mag.
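The iterative rejection scheme used above (an initial fit to the calibration relation, removal of large-residual stars, and refitting until the retained sample stabilizes) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; note that for \(V\) the slope \(a\) was actually fixed at 1.00, whereas here all three coefficients are left free.

```python
# Sketch of an iteratively clipped fit to the calibration relation
# INDEX_stand = a*INDEX_instr + b*(b-y)_instr + c. Illustrative only.
import numpy as np

def clipped_calibration(index_instr, by_instr, index_stand, nsig=3.5):
    """All inputs are 1-D numpy arrays over the standard stars."""
    keep = np.ones(index_stand.size, dtype=bool)
    while True:
        # Least-squares solution for (a, b, c) using the retained stars.
        A = np.column_stack([index_instr[keep], by_instr[keep],
                             np.ones(keep.sum())])
        (a, b, c), *_ = np.linalg.lstsq(A, index_stand[keep], rcond=None)
        resid = index_stand - (a * index_instr + b * by_instr + c)
        sigma = resid[keep].std()
        new_keep = np.abs(resid) < nsig * sigma
        if np.array_equal(new_keep, keep):  # retained sample has stabilized
            return (a, b, c), sigma, keep
        keep = new_keep
```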
Unlike H\(\beta\), \(m_{1}\) has its smallest range among the blue dwarfs of the three stellar classes, despite a significant range in \((b-y)\). Restricting the sample to 419 stars with both instrumental and standard errors below 0.020 mag, one finds no statistically significant evidence for a color term, i.e. \(b\) = 0.000, or a slope, \(a\), other than 1.0 linking the instrumental and standard systems. Eliminating three stars with anomalously large residuals produces a scatter among the residuals of \(\pm\)0.016 mag. For red dwarfs, the slope \(a\) is also 1.00, but a significant color term, \(b\), now emerges. From 143 red dwarfs, with none eliminated due to large residuals, the scatter about the mean relation is \(\pm\)0.017 mag. For the red giants, with a significantly greater range in \(m_{1}\) and \((b-y)\), elimination of 9 giants with anomalous residuals leaves 157 stars. One derives the calibration relations of Table 3 with a residual scatter about the mean relation of \(\pm\)0.018 mag. Finally, for \(c_{1}\), all stars with internal errors above \(\pm\)0.020 mag in either the standard or instrumental systems were eliminated. Calibrating all dwarfs, red and blue, with a common relation from 442 stars (Table 3), with 3 eliminated due to excessive residuals, resulted in a scatter about the mean relation of \(\pm\)0.027 mag. For the red giants alone, eliminating 7 stars due to excessive residuals produced a scatter of \(\pm\)0.024 mag from 98 stars.

### Zeroing the Scales

While the slopes of the calibration relations are best set by the large database of NGC 752 photometry covering all luminosity classes, ideally the zero-points of the M67 CCD photometry should be linked to the \(uvbyCa\)H\(\beta\) standard system via the well-defined photoelectric secondary standards presented in Tables 1 and 2. After applying the calibration relations to the CCD data for the standards of Tables 1 and 2, the zero-points for each index were then derived by minimizing the residuals between the two systems. We note again that, unlike \(V\) and \(hk\), with the exception of star 117, which lies just beyond the edge of the blue dwarf boundary, all M67 photoelectric standards in \((b-y)\), \(m_{1}\), \(c_{1}\), and H\(\beta\) fall within the category of blue dwarfs. Thus the zero-points for the red dwarfs and the red giants for these indices are derived indirectly through the relative relationships these three classes define among the NGC 752 sample. Table 3 lists the numbers of stars used in defining the zero-points, as well as the scatter among the residuals for each index. Three stars (115, 131, 132) exhibited anomalously large residuals in \(c_{1}\) and were excluded from the final determination of the zero-point for that index. Since it is probable that in these cases the issue lies with the photoelectric data, these stars have been flagged with a note in Table 1 indicating that the values should be treated with caution. In the case of star 155, a blue straggler, the instrumental indices showed signs of variability. This star was excluded from the zero-point determinations, as indicated by a flag for this star in Table 1. The final M67 photometry is presented in Table 4, where the columns are self-explanatory. Coordinates are on the (Gaia Collaboration et al., 2022) (DR3) system.
With the exception of a few stars above \(V=14\), where the internal errors for the frames are assumed to be small due to the brightness of the stars, photometric indices are only listed if every filter within an index has at least 3 observations. If not, the number of filter observations has been set to 0, with the index and error values set to a null indicator value of 9.999. As in Paper I, the final column of the table lists the categorization of the star as a blue dwarf, red dwarf, or red giant.

Table 4: Final M67 photometry (columns: \(\alpha(2000)\), \(\delta(2000)\), \(V\), \(b-y\), \(m_{1}\), \(c_{1}\), \(hk\), H\(\beta\), the sem of each, the number of observations in each filter, and the classification of the star)

Unlike Paper I, the ability to separate red dwarfs and red giants has been greatly enhanced by the availability of _Gaia_ parallaxes. For the subset of stars for which the parallax is either unavailable or poorly determined, use has been made of the photometric indices, following the pattern of Paper I. For fainter stars with questionable parallax and potentially unreliable photometric indices, it has been assumed that any red star is highly likely to be a red dwarf rather than an exceptionally distant red giant situated well above the galactic plane (\(b=+32^{\circ}\)). CCD observations of M67 on the Stromgren and H\(\beta\) systems are few and far between. The first attempt to test the use of a CCD to obtain \(uvby\) photometry was made by Anthony-Twarog (1987) on M67. The data included only two frames in each color for two 3\(\arcmin\times\) 5\(\arcmin\) fields taken with the 4-m Blanco telescope, non-imaging \(uvby\) filters, and a high-readout-noise, low-\(u\)-sensitivity RCA chip. Not surprisingly, the \(uvby\) data calibrated directly to the internal standards of NTC produced very similar results for the cluster parameters, though with double the uncertainty. More recently, Balaguer-Nunez, Galadi-Enriquez, & Jordi (2007) have attempted an evaluation of the fundamental M67 cluster parameters from wide-field CCD \(uvby\)H\(\beta\) photometry. While the quoted internal precision is comparable to that of the present sample for stars \(V\leq 17\) for most indices, H\(\beta\) data are available for less than 20% of the sample and are significantly less accurate. Equally problematic, the photometry was calibrated solely using NTC, despite the lack of cool dwarf and red giant standards. Membership was estimated from ground-based astrometry and photometric criteria, isolating 776 members; as with Anthony-Twarog (1987), the analysis produced distance moduli and metallicities the same, within the large errors, as those of NTC.

## 4 NGC 752 Revisited

### Core Cluster Membership

The initial step in the modern evaluation of any open cluster sample is the detection and isolation of probable cluster members via the astrometric database supplied by _Gaia_, the current version being the DR3 data release. For NGC 752, the transition from the definitive ground-based proper-motion study by Platais (1991) (PL) to the initial membership survey by Cantat-Gaudin et al. (2018), using both proper motion (\(\mu\)) and parallax (\(\pi\)), expanded the cluster database in both magnitude range and sky coverage, while significantly improving the astrometric precision. Until very recently, the third kinematic component, radial velocity, was only available for a modest subset of the cluster sample, dominated early on by the precision ground-based observations of Mermilliod et al. (2009), but expanded slowly by more recent spectroscopic surveys of the cluster.
The DR3 database includes a more comprehensive set covering the entire field of the cluster, but the precision is modest for stars on the unevolved main sequence, making membership and binarity estimation problematic. Of immediate interest, revisiting some of the pre-_Gaia_ discussion of Paper I from an astrometric standpoint serves two purposes. First, the fundamental cluster parameters of reddening and metallicity were derived from photometric analysis of 68 highly probable cluster members, F dwarfs with proper motions and/or radial velocities consistent with the cluster average and no indications of photometric anomalies, i.e. variability or a deviant photometric metallicity. Despite the already high precision of these parameters, for consistency, any newly identified nonmembers and/or binaries should be eliminated from the averages. Second, and more important, while one can argue about the zero-point accuracy of the absolute parametric scales rather than their precision, our primary interest lies with the differential cluster-to-cluster comparisons possible with high-precision intermediate-band photometry when the program cluster (M67) has been photometrically calibrated using the reference cluster as the standard, rather than tying the program cluster into the standard system using a small sample of field stars of inadequate temperature and luminosity range. With the photometric zero-points and calibration curves optimized, one can derive more precise and accurate differential (cluster-to-cluster) reddening and metallicity, thereby hopefully leading to improved relative ages and, independent of the parallax, relative distances. As a starting point we compiled two datasets. The 1590 stars of Table 4 in Paper I were matched with the coordinates of DR3, resulting in cross-identification of 1585 stars to \(V=18.0\), keeping in mind that only \(V,(b-y)\) data are available for all stars to this limit. The 253 cluster members as derived by Cantat-Gaudin et al. (2018) were identified in DR3 and restricted to the same core area as Table 4 of Paper I, generating a preliminary core membership set of 145 stars. The reduced fraction of members relative to Cantat-Gaudin et al. (2018) is readily explained by the difference in areal coverage of the two surveys (\(\sim\)0.75\({}^{\circ}\) x 0.75\({}^{\circ}\) for Paper I vs. 6.1\({}^{\circ}\) x 4.5\({}^{\circ}\) for Cantat-Gaudin et al. (2018)) and an approximately one-magnitude difference in the photometric limits of the two surveys. A simple check also shows that the DR3 data set has produced a tighter astrometric profile for the cluster. The core cluster members from Cantat-Gaudin et al. (2018) had mean values of \(\pi\), \(\mu_{\alpha}\) and \(\mu_{\delta}\) of 2.223 \(\pm\) 0.073 mas, 9.784 \(\pm\) 0.303 mas-yr\({}^{-1}\), and -11.686 \(\pm\) 0.305 mas-yr\({}^{-1}\), respectively. The same stars within DR3 have the analogous values of 2.263 \(\pm\) 0.053 mas, 9.723 \(\pm\) 0.253 mas-yr\({}^{-1}\), and -11.806 \(\pm\) 0.253 mas-yr\({}^{-1}\).
To isolate cluster members within the core, all stars with Stromgren photometry and \(\mu_{\alpha}\) and \(\mu_{\delta}\) within 0.759 mas-yr\({}^{-1}\) (3\(\sigma\)) of the cluster mean \(\mu_{\alpha}\) and \(\mu_{\delta}\) were identified. Additionally, all stars whose individual quoted uncertainties for each \(\mu\) measure were within 3\(\sigma\) of these boundaries were retained. The same test was then applied to \(\pi\) and \(\sigma_{\pi}\) for this restricted sample, leading to a final sample of 126 probable members in the cluster core with some degree of Stromgren photometry. More recently, Bhattacharya et al. (2021) have revisited the entire cluster membership issue using the early-release (Gaia Collaboration et al., 2021)(EDR3) dataset within a 5\({}^{\circ}\) radius of the cluster center. Due to the extended area of this survey, required to identify the tidal tails of the kinematically evaporating cluster, ML-MOC (Agarwal et al., 2021), a \(k\)-nearest neighbor algorithm coupled with a Gaussian mixture model, was used to isolate cluster members in \(\mu\)-\(\pi\) space, resulting in 282 likely members. The larger sample compared to Cantat-Gaudin et al. (2018) is primarily due to the addition of stars below \(G=18\). For the cluster core data of Paper I, the Bhattacharya et al. (2021) analysis identifies 107 members compared to 110 from Cantat-Gaudin et al. (2018). The small difference in the overlap between the two datasets emerged as a byproduct of the larger than average astrometric errors among a handful of stars in the original membership analysis. Adoption of the Bhattacharya et al. (2021) core members as the basis for identifying and restricting the final DR3 membership among stars with Stromgren photometry produces essentially identical overlap with the 126 stars identified above. To constrain the cluster membership further, we make use of the third kinematic component, the radial velocity. In addition to eliminating field star contamination, adequate sampling can also identify binaries, members which should be eliminated from intercluster comparisons to minimize distortions caused by photometric anomalies and/or non-standard evolution. For NGC 752, the baseline survey among the brighter stars is that of Mermilliod et al. (2009), where 55 of the 126 astrometric members have radial-velocity information. (In the discussion that follows, if identification numbers are given, they are from the proper-motion survey of PL.) Of the 55, one (772) is a definite radial-velocity nonmember and 15 are definite binaries. The remaining 39 are either definitely single or have radial velocities consistent with cluster membership but an inadequate sample to test for binarity. The next relevant survey is that of Agueros et al. (2018) who made use of unpublished long-term, radial-velocity monitoring of stars in NGC 752 to identify 11 nonmembers, 6 of which appear within our original set of 126. One of these is the previously identified nonmember (772), so five others can be eliminated (477, 641, 728, 888, 1008). One star (786) has a radial velocity consistent with membership but is a definite binary. Among the probable proper-motion members at the time, Agueros et al. (2018) confirmed the binarity of three stars within our sample (814, 857, 1117), as well as a binary nature for star 849, now classed as a proper-motion nonmember from _Gaia_ data (Cantat-Gaudin et al., 2018), a result confirmed with the more recent data releases (Rain, Ahumada, & Carraro, 2021). 
This last star is unique in that it had long been considered the sole blue straggler member of the cluster and analyzed even recently as such (Leiner & Geller, 2021). Turning next to Maderak et al. (2013), based on a single epoch of observation, 8 potential non-members or radial velocity variables were identified out of a sample of 45 stars. Four of these are also _Gaia_-based nonmembers. One, star 552, is an astrometric member and spectroscopic binary (Mermilliod et al., 2009), thereby explaining its discrepant velocity. The remaining 3 (828, 964, 1161) are astrometric members and 964 is a definite radial-velocity member according to Mermilliod et al. (2009). Star 828 sits 4.3 km-sec\({}^{-1}\) (\(\sim 5\sigma\)) above while 1161 is 7.5 km-sec\({}^{-1}\) (\(\sim 9\sigma\)) below the cluster mean as derived by Maderak et al. (2013). An average of the DR3 radial velocities for the brightest stars with the smallest radial velocity uncertainties leads to a cluster mean of 5.4 \(\pm\) 0.15 (sem) km-sec\({}^{-1}\). The DR3 radial velocities for these two stars place them both below the cluster mean by 3 and 9 km-sec\({}^{-1}\), respectively. Unfortunately, the uncertainties in the velocities are 4.7 and 5.5 km-sec\({}^{-1}\), respectively. All three stars are assumed to be either binaries like 552 or non-members. Stars 897 and 950 are QX And (Qian et al., 2007) and DS And (Milone et al., 2019), respectively, two eclipsing binary members of NGC 752. The latter star is one of two eclipsing binaries in NGC 752 analyzed in detail by Sandquist et al. (2023), a followup to their discussion of a detached binary system in M67, a star which plays a key role in the discussion of Section 6. Finally, while supplying no radial velocities for their sample, Lum & Boesgaard (2019) did evaluate rotation speeds as part of the line measurement process. Two stars that were excluded from their analysis due to significant line broadening were 552 and 413. As noted above, the former star has been previously classed as a spectroscopic binary and Lum & Boesgaard (2019) apply the same description to both stars. The DR3 radial velocity for this star is 8.16 \(\pm\) 0.97 km-sec\({}^{-1}\). Removing all nonmembers and/or probable binaries leaves a sample of 98 stars; we emphasize again the lack of precision radial-velocity measures for stars \(V\geq 14.0\). For these stars the position in the color-magnitude diagram (CMD) can offer some indication of binarity, a point we will return to below. ### Zeroing the Scale: Metallicity and Reddening for NGC 752 The derivation of reddening and metallicity for NGC 752 based upon the extended Stromgren photometric system from precision CCD photometry of the cluster core is discussed in exceptional detail in Paper I and will not be repeated here. Suffice it to say that the reddening is defined by comparison of cluster F dwarfs in the \((b-y)\) - H\(\beta\) plane to a standard sequence defined by nearby field stars with apparently zero reddening, and metallicity comparable to that of the Hyades. Since H\(\beta\), as a line filter ratio, is designed to be unaffected by reddening and exhibits minimal, if any, dependence upon metallicity, displacements in \((b-y)\) from the standard relation are assumed to be signatures of differences in reddening and/or metallicity. 
With reddening determined, the metallicity-dependent indices of \(m_{1}\) and \(hk\) can be adjusted for reddening and metallicity derived independently from comparison of each index to a standard, dereddened relation tied to an adopted Hyades-metallicity sequence. With an estimate of the metallicity known, one can adjust the program \((b-y)\) measures for the effects of a metallicity difference relative to the standard relation, with the entire sequence repeated until the adjustments to each drop below some critical limit, which usually happens very quickly. Using a modified version of this approach, Paper I found from 68 F dwarfs that \(E(b-y)\) = 0.025 \(\pm\) 0.003 (sem). The dominant source of the uncertainty in the quoted value is the existence of two slightly different standard sequences applied to estimate the reddening; the individual sequences supplied \(E(b-y)\) with a precision at the 0.001 mag level, but the absolute values of \(E(b-y)\) differ by 0.004 mag. Likewise, the \(m_{1}\) and \(hk\) indices, dominated by predominantly Fe lines and Ca \(H\) and \(K\), respectively, produced [Fe/H] = -0.071 \(\pm\) 0.014 (sem) and -0.017 \(\pm\) 0.008 (sem), respectively. The higher precision for \(hk\) is the combined impact of a smaller sensitivity to reddening changes and a higher sensitivity to metallicity changes.

With the revised membership list and removal of all potential binaries, the sample of F dwarfs drops to 38; after analysis, one additional star (783) with anomalously large [Fe/H] from both \(m_{1}\) and \(hk\) was removed from the discussion. (It is intriguing to note that this star, given its magnitude and the high number of observations in each filter, exhibits unexpectedly large scatter among all indices, possibly indicative of either variability or contamination by another star.) Treating the sample in a fashion identical to Paper I generates essentially the same reddening, with \(E(b-y)\) = 0.026 \(\pm\) 0.004 (sem). The slight increase in the standard error of the mean is dominated by the smaller sample of stars used to construct the average. For metallicity, however, \(m_{1}\) and \(hk\) generate slightly higher and lower metallicities, respectively, [Fe/H] = -0.053 \(\pm\) 0.020 (sem) and -0.023 \(\pm\) 0.013 (sem), leading to a weighted average of [Fe/H] = -0.032 \(\pm\) 0.015 (sem), identical to the value derived in Paper I. An often forgotten source of uncertainty in the absolute value of this estimate is that the differentials are defined relative to standard relations assumed to have Hyades metallicity and then translated to solar using an adopted Hyades value of [Fe/H] = +0.12. If the adopted scale for the Hyades is different, e.g. +0.15 (Cummings et al., 2017), the cluster values must be adjusted accordingly, i.e. raising the derived NGC 752 value to solar.

## 5 M67: Fundamental Properties

### Cluster Membership - Astrometry and Radial Velocities

While there have been numerous astrometric analyses for membership isolation in M67 (see Geller, Latham, & Mathieu (2015) for a summary of the multiple ground-based investigations), we will follow a procedure for M67 similar to that for NGC 752, relying on _Gaia_ as the exclusive astrometric source for isolating cluster members. We begin again with the data sample of Cantat-Gaudin et al. (2018), cross-matching the 835 original members with the updated DR3 _Gaia_ parameters.
The original cluster averages of 1.137 \(\pm\) 0.060 mas, -10.983 \(\pm\) 0.238 mas-yr\({}^{-1}\), and -2.958 \(\pm\) 0.239 mas-yr\({}^{-1}\) become 1.153 \(\pm\) 0.056 mas, -10.970 \(\pm\) 0.213 mas-yr\({}^{-1}\), and -2.916 \(\pm\) 0.226 mas-yr\({}^{-1}\) for the \(\pi\), \(\mu_{\alpha}\), and \(\mu_{\delta}\), respectively. Using the stars of Table 4 matched to the DR3 database, potential members were selected if their \(\mu_{\alpha}\) (\(\mu_{\delta}\)) was within 3\(\sigma\) of the cluster mean at 0.636 mas-yr\({}^{-1}\) (0.678 mas-yr\({}^{-1}\)). Stars outside these boundaries were then checked and retained if they were within 3\(\sigma\) of the boundary, where \(\sigma\) here refers to the individual quoted uncertainty in either the \(\mu_{\alpha}\) or \(\mu_{\delta}\). This select sample was then reduced to those stars with \(\pi\) within 3\(\sigma\) (0.168 mas) of the cluster mean, if \(\pi/\sigma_{\pi}\) was 10 or higher. Finally, stars were retained if they were within 3\(\sigma_{\pi}\) of the parallax boundary. The final astrometric sample is composed of 897 stars brighter than \(V\) = 19.2.

As with the discussion of NGC 752 in Sec. 4.1, the next phase of membership restriction is built upon radial velocities to eliminate nonmembers and isolate possible binaries. Unlike NGC 752, however, one has access to the exquisite, comprehensive, high-precision 40-year radial-velocity survey of the cluster field to \(V\) = 16.5 within a radius of 30\({}^{\prime}\) by Geller, Latham, & Mathieu (2015). As noted earlier, Geller, Latham, & Mathieu (2015) based their final membership classification upon both the radial-velocity data and the compiled ground-based astrometric surveys available at the time. Since the latter estimates are now superseded by the _Gaia_ results, we will appeal to Geller, Latham, & Mathieu (2015) only for radial-velocity membership probabilities and binarity. A coordinate match with the 897 astrometric members and the radial-velocity catalog produced an overlap of 652 stars to \(V\) = 16.5. Of these, 143 were set aside as binary or triple systems. Of the remaining 509, 42 had indeterminate radial-velocity membership probability while an additional 10 had probabilities in single digits, leaving 457 stars as the probable single-star member database.

### Reddening and Metallicity

The technique for reddening and metallicity estimation in M67 is the same iterative procedure as that for NGC 752 in Sec. 4.2 and in Paper I, except the photometry for NGC 752 becomes the defining standard, i.e. the goal is a direct determination of \(E(b-y)\) and [Fe/H] for M67 relative to NGC 752. This does not eliminate the need to approach the final estimates in an iterative fashion. Comparison of the M67 \((b-y)\) - H\(\beta\) data to that of NGC 752 can reveal an offset due to a difference in cluster reddening and/or a difference in metallicity. At the same reddening, a more metal-rich star at a given H\(\beta\) will have a redder \((b-y)\). Likewise, \(m_{1}\) and, to a lesser degree, \(hk\) at a given H\(\beta\) are affected by reddening. A differential reddening between the two clusters must be accounted for before the final metallicity is determined. As a starting point, the preliminary reddening difference is obtained by comparing the M67 \((b-y)\) - H\(\beta\) photometry between H\(\beta\) = 2.55 and 2.68 to the mean relation defined by the single-star members of NGC 752 as compiled in Sec. 4.1.
From 339 single-star members of M67, the mean difference in the sense (M67 - NGC 752) is -0.002 \(\pm\) 0.001 (sem) mag in \((b-y)\), implying that if M67 and NGC 752 have the same metallicity, the reddening in the latter cluster is larger in \(E(b-y)\) by 0.002 mag. If 12 stars with larger-than-typical \(\delta E(b-y)\) are eliminated, the mean offset remains the same, but the scatter for a single star reduces to \(\pm\)0.010 mag.

Turning to the metallicity, the M67 \(m_{1}\)(H\(\beta\)) and \(hk\)(H\(\beta\)) data were adjusted for the preliminary difference in reddening between M67 and NGC 752 and then compared to the mean relations as defined by NGC 752, rather than the Hyades, between H\(\beta\) = 2.68 and 2.58. The low cutoff for H\(\beta\), the same cutoff used in deriving the absolute abundances of NGC 752 in Sec. 4.2, is bluer compared to that for the reddening analysis because the standard relations for both metallicity indicators steepen significantly among the G dwarfs, leading to a large uncertainty in [Fe/H] for a small uncertainty in H\(\beta\). By contrast, the \((b-y)\) - H\(\beta\) relation for dwarfs remains approximately linear to at least H\(\beta\) = 2.55 (see Figure 8 of Paper I). With the standard relation now defined by the single, main sequence stars of NGC 752 rather than the Hyades, from 256 single-star members, the average difference in [Fe/H] based upon \(m_{1}\), in the sense (M67 - NGC 752), is +0.115 \(\pm\) 0.007 (sem) dex. If 7 stars with significantly larger than average residuals are eliminated, the difference becomes +0.106 \(\pm\) 0.006 (sem) dex. The comparable numbers for [Fe/H] based upon \(hk\) are +0.044 \(\pm\) 0.006 (sem) from 256 stars and, with the 7 stars with the larger than average residuals eliminated, \(\delta\)[Fe/H] = +0.036 \(\pm\) 0.005 (sem).

Adopting the average offset of \(\delta\)[Fe/H] \(=\) 0.07 dex as the relative abundance of M67 to NGC 752, one can now recalculate the relative cluster reddening. As expected, since a portion of the redder \((b-y)\) values in M67 is attributable to a higher metallicity, the revised reddening differential, in the sense (M67 - NGC 752), becomes -0.005 mag. Applying an effect equivalent to the new reddening offset to the \(m_{1}\) and \(hk\) values of M67 to place this cluster at the same reddening value as NGC 752 reduces the \(m_{1}\) and \(hk\) indices of M67, i.e. makes the stars more metal-poor. The revised metallicity differentials from 249 stars now become +0.091 \(\pm\) 0.006 (sem) dex for \(m_{1}\) and \(\delta\)[Fe/H] \(=\) +0.033 \(\pm\) 0.006 (sem) dex for \(hk\). The revisions are small enough that further iterations become unnecessary. The implication is that M67 is clearly more metal-rich than NGC 752 by \(\delta\)[Fe/H] \(=\) 0.062 \(\pm\) 0.006 (sem) dex. Adopting the combined absolute abundance of [Fe/H] \(=\) -0.032 \(\pm\) 0.015 (sem) derived in Sec. 4.2 for NGC 752 leads to an absolute [Fe/H] \(=\) +0.030 \(\pm\) 0.016 (sem) dex for M67, again on a scale where the Hyades is at [Fe/H] \(=\) +0.12. The final reddening for M67 becomes \(E(b-y)=\) 0.021 \(\pm\) 0.004 (sem). Note that the dominant source of the uncertainty in the absolute reddening and metallicity for M67 is the baseline uncertainty in the estimates for NGC 752, which are built upon a significantly smaller sample of stars than M67.
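To make the control flow of this differential loop explicit, a minimal sketch is given below. The helper relations bundled in `ngc752_relations` and the reddening ratios `k_m1` and `k_hk` are placeholders for the actual calibrations, so this illustrates the bookkeeping rather than the production reduction.

```python
from statistics import mean

def differential_params(m67, ngc752_relations, k_m1=-0.3, k_hk=-0.2):
    """Schematic reddening/metallicity loop for M67 relative to NGC 752.
    ngc752_relations bundles the (b-y), m1, hk vs H-beta relations from
    the NGC 752 single-star members; k_m1 and k_hk stand in for the
    ratios E(m1)/E(b-y) and E(hk)/E(b-y), not the calibrated values."""
    d_eby, d_feh = 0.0, 0.0
    for _ in range(10):  # iterate until the adjustments stabilize
        # Reddening offset from (b-y) at fixed H-beta, corrected for the
        # current metallicity offset relative to NGC 752.
        d_eby_new = mean(s['by'] - ngc752_relations.by(s['hbeta'], d_feh)
                         for s in m67)
        # Deredden m1 and hk to the NGC 752 reddening, then read off a
        # metallicity offset from each index and average the two.
        feh_m1 = mean(ngc752_relations.feh_m1(s['m1'] - k_m1 * d_eby_new,
                                              s['hbeta']) for s in m67)
        feh_hk = mean(ngc752_relations.feh_hk(s['hk'] - k_hk * d_eby_new,
                                              s['hbeta']) for s in m67)
        d_feh_new = 0.5 * (feh_m1 + feh_hk)
        if abs(d_eby_new - d_eby) < 1e-3 and abs(d_feh_new - d_feh) < 1e-3:
            break
        d_eby, d_feh = d_eby_new, d_feh_new
    return d_eby_new, d_feh_new

# Final pass with the numbers quoted above:
# d_feh = 0.5 * (0.091 + 0.033) = 0.062
# [Fe/H]_M67 = -0.032 + 0.062 = +0.030
```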
In an absolute sense, the largest uncertainty in the metallicity estimates for M67 and NGC 752 may lie with the absolute [Fe/H] for the Hyades, typically adopted as +0.12 for the photometric calibrations but more recently derived from some spectroscopic analyses as +0.15 (Cummings et al., 2017). Beyond this issue, the most probable source of error lies with the photometric zero-points. The sequence of analyses (Joner & Taylor, 1995, 1997; Taylor, 2007; Taylor, Joner, & Jeffery, 2008) linking the exceptionally accurate Stromgren photometry of the Hyades, NGC 752, and M67 discussed in Section 3 should generate zero-point uncertainties in the relative cluster indices as close to 0.000 as possible, with most of the offsets found relative to the older published data as derived in Section 3 being a byproduct of different filters, standards selection, and reduction procedures, a not uncommon issue with all-sky photometry, even when care is taken to minimize such offsets (see, e.g. the Appendix of NTC). A zero-point uncertainty in the \(m_{1}\) photometry at the level of \(\pm\)0.002 mag propagates into an error of \(\sim\)\(\pm\)0.022 in the absolute value of [Fe/H]. For \(hk\), M67 stars were observed as program stars within the observations used to define the standard system (Twarog & Anthony-Twarog, 1995), i.e. the night-to-night photometry was transferred to a common system defined collectively by all the stars in the catalog and merged. Since the individual M67 stars were observed over multiple nights and multiple runs, the most likely source of uncertainty in the zero-point arises from the reduced number of observations at fainter magnitudes. To attain the same accuracy in [Fe/H] from \(hk\) as defined by \(m_{1}\), the uncertainty in the \(hk\) zero-point would need to be at least \(\pm\)0.006 mag. When coupled with the fact that \(m_{1}\) is twice as sensitive to reddening effects as \(hk\), it seems highly likely that the [Fe/H] from \(hk\) is at least as accurate from a zero-point standpoint as that of \(m_{1}\). A final point we will return to below is the obvious issue that \(m_{1}\) and \(hk\) also measure different indicators of metallicity, the former dominated by weak Fe lines and the latter indicative of Ca.

## 6 Cluster Distance and Age

Before discussing the derivation of the individual cluster distances and ages through parallax and comparison to appropriate isochrones, we first revisit the question of binarity among the cluster members. As noted earlier, the binary evaluation of M67 is unique due to the unusually high precision radial-velocity coverage of virtually all the stars brighter than \(V=16.5\) for over 40 years (Geller, Latham, & Mathieu, 2015; Geller et al., 2021). While the sample is less complete, the velocities less precise, and, with the exception of Mermilliod et al. (1998, 2008, 2009), the temporal coverage generally random for NGC 752, compared to the vast majority of open clusters, the determination of radial-velocity binarity for NGC 752, as with the Hyades, remains exceptional, in large part due to the proximity of the cluster. For clusters without such detailed radial-velocity insight, the \(uvby\)H\(\beta\) system has long offered an option for identifying potential binary systems composed of stars of comparable mass on the main sequence. For stars significantly redward of a cluster turnoff, such systems are readily identifiable due to their position up to 0.75 mag above the unevolved main sequence.
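The 0.75 mag ceiling follows directly from flux addition for an equal-mass pair:

\[
\Delta m = 2.5\,\log_{10} 2 \simeq 0.753\ {\rm mag},
\]

so a composite of two identical stars sits at most \(\sim\)0.75 mag above the single-star sequence, with unequal pairs falling between the ZAMS and this limit.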
However, as has been illustrated in innumerable cluster CMD discussions, the cluster binary sequence eventually crosses and merges with the vertical turnoff, making it impossible to visually distinguish between a single star evolving off the main sequence or a composite composed of two stars still sitting near the base of the turnoff. This confusion becomes problematic for delineation of the evolutionary path of stars passing through the hydrogen-exhaustion phase (HEP) and post-HEP phase _en route_ to the subgiant branch since these phases are rapid and few stars are likely to be seen during them. Thus, the contamination of this portion of the CMD by a handful of binaries can easily distort the perceived location of the turnoff and the luminosity of the stars populating the blue edge of the subgiant branch.

The fundamental technique is straightforward. For stars on the unevolved main sequence (ZAMS) at a given metallicity, there is a well-defined relation between \((b-y)\) and \(c_{1}\). Since \(c_{1}\) for late A through early G stars is a surface gravity/luminosity indicator, as a star evolves away from the main sequence and its luminosity grows, \(c_{1}\) grows accordingly. Thus, at a given \((b-y)\) for the turnoff region of a cluster CMD, the brighter stars should exhibit a correlation between increasing \(c_{1}\) and decreasing \(M_{V}\). Using a mixture of field stars and nearby clusters, including NGC 752, Crawford (1975) initially derived the slope of the relation between \(\delta c_{1}=c_{1,OBS}-c_{1,ZAMS}\) and \(\delta V=V_{ZAMS}-V_{OBS}\). This slope was revised by NTC using the extensive M67 turnoff among cool F dwarfs and applied to identify unknown binaries in M67 and again in NGC 752 (Daniel et al., 1994) using the photoelectric photometry of Twarog (1983). While the luminosity correction to \(c_{1}\) was crucial for the estimation of cluster distances from \(uvby\) photometry of the turnoff stars, with parallaxes available one can focus exclusively on the identification of undetected binaries. Because our concern is solely with the differential comparison of stars within the cluster, the analysis is totally independent of the cluster reddening and metallicity and limited only by the precision of the photometry.

### NGC 752

For the first case, we return to the 126 astrometric members of NGC 752. Eliminating the 6 probable radial-velocity nonmembers from Agueros et al. (2018) and all red giants, we are left with 67 stars bluer than \((b-y)=0.55\) to \(V=14.5\), i.e. to mid-G stars. A linear fit was drawn between \(V\) and \((b-y)\) for stars fainter than \(V=11.2\) to the limit of 14.5 to define \(V_{ZAMS}(b-y)\); the same stars were then used to define \(c_{1ZAMS}(b-y)\). For all 67 stars, \(\delta c_{1}\) and \(\delta V\) were derived; the results are plotted in Figure 2. The trend of increasing differential luminosity with increasing differential \(c_{1}\) is obvious, though there are 11 stars (red points) that lie systematically above the predominant trend. In Figure 3 we plot the CMD for the 67 member single and binary stars included in the discussion, tagging the 11 deviants of Figure 2 again as red open circles. Of the 11 discrepant stars, 9 are known binaries. The remaining two, PL 648 and 1003, have not exhibited radial-velocity variability but clearly sit significantly above the main sequence at \(V\) = 12.06 and 11.17, respectively.
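In schematic form, the photometric binary test reduces to the following. The linear ZAMS fits and the flagging threshold here are illustrative stand-ins for the calibrated relations of Crawford (1975) and NTC, not the values actually applied.

```python
import numpy as np

def flag_photometric_binaries(by, V, c1, faint_cut=11.2, dev_lim=0.15):
    """Minimal sketch of the delta-c1 / delta-V binary test described above.
    The ZAMS is approximated by linear fits to the faint unevolved stars;
    dev_lim is an illustrative threshold, not the calibrated value."""
    zams = V > faint_cut                       # stars defining the ZAMS
    pV = np.polyfit(by[zams], V[zams], 1)      # V_ZAMS(b-y)
    pc = np.polyfit(by[zams], c1[zams], 1)     # c1_ZAMS(b-y)
    dV = np.polyval(pV, by) - V                # dV  = V_ZAMS - V_OBS
    dc1 = c1 - np.polyval(pc, by)              # dc1 = c1_OBS - c1_ZAMS
    # Evolved single stars share a common dV-dc1 slope; binaries scatter
    # above it.  A robust (clipped) fit would be used in practice.
    slope, icpt = np.polyfit(dc1[~zams], dV[~zams], 1)
    resid = dV - (slope * dc1 + icpt)
    return resid > dev_lim                     # True -> probable binary
```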
(Star 1003 was first tagged as a probable binary via photoelectric \(uvby\) photometry using the same approach, as demonstrated in Figure 1 of Daniel et al. (1994). Of the 7 stars tagged as potential binaries in that analysis, with the exception of 1003, all are now known to be binaries and/or nonmembers.) Given the high probability of cluster membership for both 648 and 1003, it is possible that the two systems are inclined at an angle that minimizes the radial-velocity variations of the stars and/or that the orbital period of the system is large.

The value of identifying probable binaries in NGC 752 is best illustrated by the stars at \(V\) = 11 and brighter. For the red points with \((b-y)\) \(>\) 0.30 and fainter than \(V\) = 11 in Figure 3, any precision CMD would reveal their probable binary nature given their position significantly above the ZAMS. For the five red points at the top of the turnoff, their removal narrows the spread in \(V\) among the stars defining the red hook at the turnoff and eliminates the possibility of constraining the post-HEP and subgiant branch beyond this point using star 1117 at \(V\) = 9.6, a known SB2 system.

Figure 2: Correlation between distance in \(V\) above the ZAMS with the change in \(c_{1}\) for stars at the turnoff in NGC 752. Blue points define the relation for single stars. Red points identify probable binaries.

Figure 3: CMD for stars in Figure 2. Symbols have the same meaning.

To close, we determine the age of the cluster using the CMD with all nonmembers and/or binaries, photometric or spectroscopic, eliminated. For isochrones, we adopt the same set (VandenBerg, Bergbusch, & Dowler, 2006) discussed in Paper I, interpolated slightly between [Fe/H] = 0.000 and -0.039 to match the derived value of [Fe/H] = -0.032. For the distance, use is made solely of the _Gaia_ DR3 parallax data. If the parallaxes for the 120 members discussed above are averaged, the mean \(\pi\) is 2.260 \(\pm\) 0.076 (sd) mas or \((m-M)_{0}\) = 8.23. Since \(\sigma_{\pi}\) grows larger on average with increasing \(V\), we can first restrict the sample to members brighter than \(V=16.0\), generating an average \(\pi=2.267\pm 0.065\) (sd) mas or \((m-M)_{0}=8.22\) from 96 stars. Finally, one can limit the sample to only stars where \(\pi/\sigma_{\pi}>50\). From 83 stars, the average \(\pi\) = 2.270 \(\pm\) 0.046 (sd) or \((m-M)_{0}=8.22\pm 0.04\) (sd). With \(E(b-y)=0.026\), the apparent modulus becomes \((m-M)\) = 8.33 \(\pm\) 0.04 (sd).

It is important to recognize the increase in the parallax for NGC 752 (0.047 mas) relative to the mean value for the stars from Cantat-Gaudin et al. (2018). Systematic offsets at this level have been applied to a number of clusters in this series analyzed using Gaia Collaboration et al. (2018) (DR2) data, NGC 6819 (Deliyannis et al., 2019), NGC 7142 and M67 (Sun et al., 2020) (Paper II), and NGC 2243 (Anthony-Twarog et al., 2021), with the common justification for these adjustments supplied by Riess et al. (2018); Stassun and Torres (2018); Zinn et al. (2019), among others.

Figure 4 shows the resulting CMD-isochrone comparison covering the full range of the CMD. The fit to the isochrones is excellent over the \(M_{V}\) range from the giant branch to \(M_{V}\sim 5\). Toward fainter magnitudes, the observed points lie increasingly above the models.
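For reference, the parallax-to-modulus arithmetic used above reduces to a few lines; the conversion \(A_{V}\approx 4.3\,E(b-y)\) is a commonly used approximation assumed here for illustration, not a relation quoted in the text.

```python
import math

def modulus(parallax_mas, eby=0.0, r_by=4.3):
    """Distance modulus from a parallax in mas; r_by ~ 4.3 approximates
    A_V / E(b-y) (an assumed conversion, not taken from the text)."""
    m_M0 = 5.0 * math.log10(1000.0 / parallax_mas) - 5.0
    return m_M0, m_M0 + r_by * eby

# NGC 752: pi = 2.270 mas, E(b-y) = 0.026 -> (8.22, 8.33), as above.
print(modulus(2.270, 0.026))
```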
Such discrepancies are common for cooler dwarfs in comparisons with theoretical isochrones due to the difficulty of transferring cool star models to the observational plane because of the complexity of cool dwarf atmospheres, challenging bolometric corrections, and the construction of synthetic color indices on intermediate and narrow-band systems. A similar trend is seen in a comparison of the earlier _Gaia_ photometry of NGC 752 using the same models adopted here but transferred to the _Gaia_ photometric system (see Figure 9 of Boesgaard, Lum, and Deliyannis (2020)). An empirically corrected ZAMS for the isochrones of the cooler dwarfs in NGC 752 has been derived and plotted in Figure 4 as a green dashed curve. Such an approach has long proven valuable in adjusting theoretical isochrones to more realistically match observed stellar photometry in temperature and luminosity space where theoretical transformation relations may be inadequate (see, e.g. Pinsonneault et al. (2004); An et al. (2007)). We will return to this issue in Sec. 6.2.

Figure 4: Age derivation for NGC 752 from single stars using \((m-M)_{0}\) derived solely from parallax, coupled with photometric reddening. Isochrones are metallicity adjusted models of VR. Dashed green line represents an empirically derived correction to the isochrone lower main sequence.

As expected, the quality of the fit between the observations and theory in Figure 4 is identical to that in the pre-_Gaia_ Paper I analysis, though the clarity of the CMD is enhanced by the removal of binaries and probable nonmembers. Since the best-fit \((m-M)\) of 8.30 \(\pm\) 0.05 with \(E(b-y)=0.025\) led to a tight age range of 1.4 to 1.5 Gyr in Paper I, the minor alterations to the fundamental cluster parameters have little impact on the current derivation. For comparison, Agueros et al. (2018) used MINESweeper, a Bayesian approach for determining stellar parameters with MESA Isochrones & Stellar Tracks (MIST) evolutionary models (Choi et al., 2016; Dotter, 2016), to infer probability distribution functions for the age and distance of each of 53 single cluster members in NGC 752. MINESweeper provided full posterior distributions of all predicted stellar parameters from the MIST models, including ages, masses, and radii, leading to \((m-M)_{0}=8.21^{+0.04}_{-0.03}\), [Fe/H] = \(+0.02\pm 0.02\), \(A_{V}=0.198^{+0.008}_{-0.009}\), and an age of 1.34 \(\pm\) 0.06 Gyr. The true distance modulus, metallicity, and age all overlap at the \(\pm 1\sigma\) level. However, the derived \(A_{V}\) implies a reddening that is 8\(\sigma\) larger than derived in Paper I. Differences in adopted isochrones and photometric systems aside, the excessive reddening derived for the average star in the sample is consistent with the younger age at higher metallicity for NGC 752, though the adoption of too low an overshooting parameter may be the dominant factor, as discussed by Boesgaard, Lum, & Deliyannis (2020).

A more recent analysis of the cluster age that made use of the _Gaia_ DR2 astrometry to identify cluster members is that of B\(\ddot{\rm o}\)cek Topcu et al. (2020). Of equal importance is the derivation of the cluster metallicity from high-resolution (R \(\sim\) 45000) near-IR spectra of 10 red giants. Adopting the derived reddening of Paper I, the mean [Fe/H] values from optical Fe I and Fe II lines are +0.01 \(\pm\) 0.07 (sd) and -0.06 \(\pm\) 0.04 (sd), respectively, while the IR Fe II lines generate 0.00 \(\pm\) 0.06 (sd).
It should be noted that the [\(\alpha\)/Fe] abundances for light elements like Ca, the source of the \(hk\) index, are above solar, typically +0.06 to +0.10 dex. This may be an indication that the higher metallicity from \(hk\) relative to \(m_{1}\) is tied to a real metallicity offset for the two indices, but the differential is small enough that it falls within the combined uncertainty of the photometry and the spectroscopy. Adopting solar metallicity, a cluster age of 1.52 Gyr was obtained from isochrone fits to Victoria-Regina (VandenBerg, Bergbusch, & Dowler, 2006; VandenBerg et al., 2014) (VR) isochrones and MESA (Paxton et al., 2011, 2013) models on the _Gaia_ \(G\), \((G-G_{RP})\) photometric system. The true distance modulus adopted by B\(\ddot{\rm o}\)cek Topcu et al. (2020) for the fit was \((m-M)_{0}=8.26\), derived from the parallaxes of stellar members of NGC 752 in DR2, as discussed in Sec. 4.1. Given the differences in the photometric systems and the changes in the adopted isochrones, the agreement is excellent.

In a contemporaneous discussion of the NGC 752 age via CMD fits to isochrones as part of the interpretation of the masses estimated from detached eclipsing binaries, Sandquist et al. (2023) derive a preferred age near 1.6 Gyr using PARSEC isochrones (Bressan et al., 2012), with an uncertainty of about 0.08 Gyr based upon the scatter among the stars in the brightest region of the main sequence, assuming solar metallicity and \(E(B-V)=0.044\). Since the PARSEC models have similar amounts of overshooting to the Victoria-Regina set, this seems an unlikely source for the difference in age. The simplest solution for the discrepancy is that suggested by Sandquist et al. (2023), the adopted abundance for solar metallicity: the PARSEC models assume \(Z_{\odot}=0.0152\) while VR models imply 0.0188. The most recent reexamination of the solar value by Magg et al. (2022) implies 0.0177. If correct, the isochrones used in Figure 4 are, fortuitously, almost identical to the revised solar value ([Fe/H] = -0.006), while the PARSEC isochrones have [Fe/H] = -0.066, leading to an older age at a given turnoff color. Finally, Lum & Boesgaard (2019) derived abundances for 23 dwarfs and 6 red giants in NGC 752 from high-resolution (R\(\sim\) 48000) HIRES spectra, finding [Fe/H] = -0.01 \(\pm\) 0.06 (sd) and no difference between the giants and the dwarfs. As in B\(\ddot{\rm o}\)cek Topcu et al. (2020), the 6 red giants show some enhancement of Ca relative to Fe with [Ca/Fe] = +0.05 \(\pm\) 0.03 (sd). However, the combined sample of 29 dwarfs and giants produces [Ca/Fe] = +0.02 \(\pm\) 0.04 (sd), implying a solar ratio within the errors.

### M67

To identify possible undetected binaries near the turnoff of M67, we begin with the sample of 457 single-star members isolated in Sec. 5.1. To ensure as pure a sample as possible, we raise the radial-velocity membership limit to 50% and remove all stars classed as photometric variables, blue stragglers, and/or X-ray sources, reducing the sample to 421 stars. Since our primary interest is in the stars that populate the vertical turnoff, we finally restrict the photometry to stars between \((b-y)\) = 0.32 and 0.46 with \(V>\) 12.5, leaving 253 stars. To ensure that any identifiable CMD features are unlikely to be caused by photometric scatter, we have also tested our \(V\), \((b-y)\) photometry against the \(G\), \((B_{P}-R_{P})\) data of _Gaia_ DR3.
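Schematically, and anticipating the details given in the next paragraph, the cross-check against _Gaia_ amounts to a linear color term plus an offset, fit to the stars in common; the sketch below derives the coefficients on the fly rather than adopting published transformations.

```python
import numpy as np

def gaia_to_by(G, bprp, V_obs, by_obs):
    """Illustrative linear mapping of Gaia (G, BP-RP) onto (V, b-y),
    fit to the stars in common; coefficients are derived on the fly."""
    cV = np.polyfit(bprp, V_obs - G, 1)    # color term + offset for V - G
    cby = np.polyfit(bprp, by_obs, 1)      # color term + offset for (b-y)
    V_t = G + np.polyval(cV, bprp)         # Gaia photometry on the V scale
    by_t = np.polyval(cby, bprp)           # Gaia colors on the (b-y) scale
    # Average the transformed and directly observed values, as done for
    # the turnoff stars; outlier rejection is omitted for brevity.
    return 0.5 * (V_t + V_obs), 0.5 * (by_t + by_obs)
```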
Due to the modest range in color and luminosity under discussion, transformation between the two systems should be possible using little more than a linear color term and an offset (Anthony-Twarog et al., 2021). The transformation relation from \(G\) to \(V\) with 1 star excluded exhibits a scatter of \(\pm\)0.007 mag from 252 stars. The analogous relation for \((B_{P}-R_{P})\) to \((b-y)\), with 2 stars excluded, has a scatter of \(\pm\)0.004 mag. We have converted the _Gaia_ photometry to the \(V\), \((b-y)\) system and averaged the converted values with the observed \(V\), \((b-y)\) data. The two stars exhibiting discrepancies between the two samples have \(y\) and \(b\) observations numbering in single digits, a possible indicator of potential issues with the photometry given the brightness of these stars.

As with NGC 752, the CMD of the 251 turnoff stars was used to define \(V_{ZAMS}\) between \((b-y)\) = 0.32 and 0.46, and the ZAMS sample then adopted to define \(c_{1ZAMS}\) over the same color range. For each star, \(\delta V\) and \(\delta c_{1}\) were derived; the result is plotted in Figure 5, where the blue and red points are from the averaged, single-star data. Somewhat surprisingly, despite the comparable precision of the photometry in M67 relative to NGC 752 for the same class of stars located 1.5 mag fainter in the former cluster, the separation into single and possible binary stars is less obvious. Moreover, the \(\delta V\) - \(\delta c_{1}\) distribution in Figure 5 appears different from that in Figure 2. The two primary contributors to this are the presence of stars defining the blue hook and a subgiant branch extending to \((b-y)\) = 0.46, neither of which is included in the NGC 752 CMD, and the declining slope of the \(\delta V/\delta c_{1}\) relation with increasing \((b-y)\) (NTC). The former phenomenon ensures that a significant fraction of the stars redder than the turnoff sit at an increasingly large distance above the ZAMS with increasing \((b-y)\). In addition to the basic change in the slope of the relation with increasing color, because the turnoff of M67 is significantly cooler than that of NGC 752, the latter phenomenon ensures that a comparable error in \(c_{1}\) generates a larger spread in \(\delta V\).

To illustrate the continued value of the data presented in Figure 5, the photometry for all systems classified as SB2 or triple near the turnoff of M67 by Geller, Latham, & Mathieu (2015) has been processed in the same manner as the single stars, leading to the green symbols in Figure 5. The separation of these points from the band defined by the majority of the single stars is obvious. Using the green symbols as the defining binary sample, stars that lie above the approximate boundary between the green and blue points have been characterized as probable photometric binaries, with the likelihood of a correct identification increasing with increasing distance above the blue band. In Figure 6, the CMD for the stars of Figure 5 is presented; the symbols have the same meaning including the use of blue five-pointed stars for single subgiants and green open triangles for likely binaries. As in Figure 3, the combination of photometry and spectroscopy eliminates the majority of the well-defined sequence of isolated stars sitting significantly above the ZAMS, though it should be remembered that, based upon radial velocity, all the red points above the ZAMS are classed as single stars.
The second largest concentration of possible binaries lies, as one might expect, along the binary sequence extension into the main sequence hook. It should be emphasized that these stars fall in the zone just above the blue band of Figure 5, making their binary nature less probable than the obvious band of stars sitting 0.5 to 0.7 mag above the single-star relation in Figure 6. We close this discussion by noting that there are a number of stars that lie in apparently anomalous locations of the CMD without any photometric or spectroscopic indications of binarity, an issue returned to below.

As with NGC 752, the age of the cluster is best determined from the CMD after removal of all potential binaries or triples, photometric or spectroscopic, variables, blue stragglers, X-ray sources, and other known anomalies. For the cluster distance we again rely solely on the _Gaia_ DR3 parallax data. As discussed in Sec. 5.1, the full set of astrometric members has an average \(\pi\) = 1.153 \(\pm\) 0.056 (sd) mas. If only the stars brighter than \(V\) = 16 are included, the mean becomes 1.154 \(\pm\) 0.052 (sd) mas. Finally, if only 312 stars with \(\pi/\sigma_{\pi}>\) 50 are used, \(\pi\) = 1.162 \(\pm\) 0.032 (sd) mas. With \(E(b-y)\) = 0.021, the latter two determinations lead to \((m-M)\) = 9.78 and 9.76, respectively; \((m-M)\) = 9.77 will be assumed. As with the earlier discussion of NGC 752, the final parallax from DR3 data is larger by 0.025 mas than the estimate from the original Cantat-Gaudin et al. (2018) parallaxes, a smaller offset than for NGC 752, but well within the range for other open clusters. For photometry, the \(V,(b-y)\) data of Table 4 will be used except at the turnoff where the averaged results illustrated in Figure 6 will be given precedence. The CMD superposed on isochrones of age 3.5 and 3.7 Gyr adjusted to an assumed [Fe/H] of 0.03 is shown in Figure 7.

Figure 5: Same as Figure 2 for M67. Blue and red symbols show stars classed as single and probable binary stars, respectively. Green symbols show stars classed via radial velocities as SB2 or triples. Blue asterisks show the location for single subgiants, while green open triangles indicate binary subgiants.

Figure 6: Same as Figure 3 for M67. Symbols have the same meaning as in Figure 5.

The isochrones supply an overall excellent match to the observations down to the main sequence near \(M_{V}\sim 5\). For fainter stars, the isochrones are increasingly too faint compared to the data, the same pattern seen for NGC 752. Superposed on the plot (dashed green curve) is the empirically derived ZAMS correction to the isochrones as defined by the lower main sequence data of NGC 752. This correction leads to an excellent match between theory and observation to the limit of the plot. It should be noted that, unlike the subgiant branch and the unevolved main sequence, the first-ascent red giant branch for the isochrones supplies a less satisfying match to the cluster photometry. Unlike NGC 752, which has no blue stragglers or subgiants and only one probable first-ascent red giant, M67 is rich in blue stragglers and composite systems near the turnoff. The evolved counterparts of these anomalous systems are the likely source of the scatter at the base of the red giant branch between \(M_{V}=2\) and 3, making the exact location of the single-star red giant branch difficult to define.
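The adopted modulus follows from the same arithmetic used for NGC 752, again with the approximate \(A_{V}\approx 4.3\,E(b-y)\) conversion assumed for illustration:

\[
(m-M)_{0} = 5\log_{10}\!\left(\frac{1000}{1.162}\right) - 5 = 9.67, \qquad
(m-M) \approx 9.67 + 4.3\times 0.021 \approx 9.76 .
\]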
More important, as with the red dwarfs, conversion of the isochrones from the theoretical to the observational plane requires transformation relations between temperature/luminosity and \(b-y\) as a function of metallicity. While these have been well developed for G dwarfs and hotter, as with the red dwarfs, they are less reliably defined for cool giants and likely require an empirical correction of the type developed for the dwarfs to better reproduce the true Stromgren system.

Before moving to the turnoff, three items should be mentioned. First, the lack of binary information below \(M_{V}\sim 5.5\) is apparent given the reemergence of the usual photometric binary sequence for the ZAMS below this point. Second, the isolated star below the main sequence near \(M_{V}\sim 7.3\) is an almost certain nonmember. Due to the absence of radial velocities at this apparent magnitude level, membership depends entirely upon astrometry. This star sits at the very boundary of the astrometric limits used to separate members from nonmembers. Third, two stars above the blue end of the subgiant branch are plotted as green triangles, implying probable binaries. These two stars were excluded from the earlier \(V\), \(c_{1}\) discussion because they were situated well above the subgiant branch and their inclusion extended the \(\delta V\) scale of Figure 5 by an additional 0.75 mag, giving exaggerated emphasis to the evolved stars. A simple check allows us to determine if these systems are likely composites of a pair of subgiants. From 29 supposedly single subgiants between \((b-y)\) = 0.35 and 0.38, the average \((b-y)\) and \(c_{1}\) indices are 0.365 \(\pm\) 0.009 (sd) and 0.433 \(\pm\) 0.009 (sd), respectively. The two stars represented by the green triangles have \((b-y,c_{1})\) = (0.362, 0.426) and (0.374, 0.443), identical within the uncertainties despite sitting 0.48 mag and 0.67 mag, respectively, above the subgiant branch.

Figure 7: Cleaned CMD composed of probable members of M67, adjusted for \((m-M)\) = 9.77 and \(E(b-y)\) = 0.021 and compared with isochrones of age 3.5 and 3.7 Gyr. Dashed green curve is the corrected cooler ZAMS of Figure 4 for NGC 752 superposed on the adopted isochrones. Green triangles represent additional photometric binaries.

The expanded turnoff region of Figure 7 is presented in Figure 8. Unlike NGC 752, the isochrone profile of the turnoff hook provides a less satisfying match to the data. While the magnitude level of the bluest point of the turnoff below the HEP can be considered consistent with either isochrone, the luminosity of the subgiant branch is almost perfectly matched by the 3.7 Gyr isochrone. The color of the turnoff data places it blueward of either isochrone, implying an age younger than 3.5 Gyr, though such an age would be contradicted by the need for a subgiant branch even brighter than the already too bright 3.5 Gyr isochrone. Thus, the key to defining the cluster age is the distinction between the isochrones of Figure 4 versus those of Figure 8. Keep in mind that for clusters in the 1-2 Gyr range, as in NGC 752, the rapid evolution of turnoff stars after reaching the end of the red hook often leaves the blue hook and the bluer portion of the subgiant branch poorly populated, if at all. Thus, the defining pattern of the younger isochrones with increasing age is a correlated shift to the red and fainter magnitudes.
By contrast, the color band defined by the red hook and post-HEP phase among isochrones near 4 Gyr evidences very little color evolution with increasing age. For solar metallicity models, the range in \((b-y)\) between the blue hook and the limit of the red hook goes from \((b-y)_{0}\) = 0.316 to 0.359 for 3.5 Gyr, to 0.320 to 0.353 for 3.7 Gyr, to 0.321 to 0.345 for 3.9 Gyr. By contrast, \(M_{V}\) of the subgiant branch at the color of the blue limit below the HEP shifts from 2.75 to 2.90 to 3.03 over the same age range. As clearly demonstrated in Fig. 8, with precision photometry \(M_{V}\) changes at this level are easily detectable, particularly once the confusion from binaries and variables is eliminated. On an absolute scale, the availability of _Gaia_ parallaxes and cluster membership for a stellar sample numbering in the hundreds reduces the uncertainty in \((m-M)_{0}\) to the level of \(\pm\)0.01 mag, independent of both reddening and metallicity. The primary uncertainty in the age estimate tied to the luminosity of the subgiant branch becomes the reddening since the observed CMD must be adjusted for extinction. For M67, due to the exceptional accuracy of the reddening estimate, this adds no more than \(\pm\)0.02 mag to the uncertainty of the position of the subgiant branch. Thus, the estimated age of M67 defined by the luminosity of the subgiant branch and tied to the specific isochrones and cluster parameters of Figure 8, is 3.70 \(\pm\) 0.03 Gyr. Until the issue with the color morphology of the turnoff is clarified, a more conservative estimate of the uncertainty based upon the spread in color at the turnoff below the HEP is 3.7 \(\pm\) 0.1 Gyr.

Figure 8: Expanded turnoff region of the M67 CMD of Figure 7. Blue points are stars with anomalously blue colors.

How do these results compare with other analyses? Rather than generating a litany of derived ages and distances from multiple sources using different stellar models under varying assumptions about the cluster age and reddening, we will focus on two distinctly relevant investigations at opposite ends of the stellar sample scale. The first is Paper II, which derives the fundamental cluster parameters of NGC 7142 using precision multicolor broadband photometry compared to the multicolor data of the Hyades and through a differential CMD comparison to a virtually identical cluster, M67. As part of the analysis, the authors also redetermine the absolute parameters for M67, making use of the Yale-Yonsei isochrones (Demarque et al., 2004) in multiple colors to constrain the distance and age. From the multicolor index comparisons to the Hyades with the standard star cluster data of M67 from Stetson (2000), Paper II finds \(E(B-V)\) = 0.04 \(\pm\) 0.01 and [Fe/H] = -0.02 \(\pm\) 0.05, on a scale where the Hyades has +0.15. The age and distance results are 3.85 \(\pm\) 0.17 Gyr and \((m-M)\) = 9.75, respectively. To compare with our age and distance results, the reddening must be lowered to \(E(B-V)\) = 0.028 and the metallicity increased by +0.08 dex, keeping in mind that our scale has the Hyades at +0.12. The lower reddening decreases the apparent modulus by 0.07 mag, while boosting the metallicity requires a larger distance by \(\sim\)0.08 mag (Twarog, Anthony-Twarog, & Edgington-Giordano, 2009), changing the final apparent modulus to 9.76. Likewise, lowering the reddening leads to a redder turnoff, but higher metallicity at a given turnoff color generates a lower age (Paper II).
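For reference, the \(\pm\)0.03 Gyr internal uncertainty quoted above follows from simple arithmetic on the isochrone numbers, a rough sensitivity estimate rather than a formal error analysis:

\[
\frac{\Delta M_{V}({\rm SGB})}{\Delta t}\approx\frac{3.03-2.75}{0.4\ {\rm Gyr}}\approx 0.7\ {\rm mag\ Gyr^{-1}},\qquad
\sigma_{t}\approx\frac{0.02\ {\rm mag}}{0.7\ {\rm mag\ Gyr^{-1}}}\approx 0.03\ {\rm Gyr}.
\]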
While the consistency between the two sets of photometric data from two distinctly different sets of isochrones is encouraging, the more relevant aspect of the analysis is the CMD comparison between the isochrones and the M67 fiducial points (Figure 15 of Paper II). The data for M67 are compared to isochrones of age 3.5 and 4.0 Gyr. What is apparent upon close examination of the turnoff is that while the points below the HEP follow the trend expected for a 3.6 Gyr isochrone, they do not reproduce the blue hook, and populate the subgiant branch at a luminosity consistent with an age of 3.9 Gyr. As is also the case with VR (below), one cannot simultaneously match the color of the turnoff, the shape of the blue hook, and the luminosity of the subgiant branch. Shifts to the red to optimize the color of the turnoff reduce the apparent modulus by an amount that contradicts the limits of the _Gaia_ parallaxes, while moving the red giant fiducials away from the isochrones, not closer. Note that a similar argument can be made for the fit in Figure 8. A shift to the red for the \((b-y)_{0}\) photometry by reducing \(E(b-y)\) to 0.011 mag would provide a reasonable color match between the observations and the 3.7 Gyr isochrone, but the correlated shift downward to realign the ZAMS data with the isochrone makes the observed subgiant branch too faint by more than 0.1 mag compared to the isochrones and again reduces the distance modulus to a value outside the allowed limits of the parallaxes.

To search for an alternative solution to the morphology issue at the turnoff, we turn to the detailed and incisive investigation of a single M67 binary system by SA. (In the discussion that follows, identifications in M67 are given using WOCS numbers from Geller, Latham, & Mathieu (2015).) Star 11028 was selected for analysis by SA as a double-lined spectroscopic binary near the cluster turnoff with a large enough semimajor axis to minimize the likelihood of mass transfer and/or tidal distortions between the two members, i.e. both stars followed the evolutionary path of isolated single stars. From Kepler K2 observations (Stello et al., 2016), only one eclipse is observed, but the inclination angle is so well constrained that precise mass estimates can be achieved. SA adopted \(E(B-V)=0.041\) (Taylor, 2007) and, after discussing the issue of systematic errors in DR2 parallaxes, derived \((m-M)_{0}\) = 9.63 \(\pm\) 0.06. Fortuitously, the difference in \(E(B-V)\) compared to our value almost exactly compensates for the lower absolute modulus, leading to \((m-M)\) = 9.76. Using multiwavelength spectral energy distributions (SED), SA identified possible pairs of single stars from the observed M67 CMD which, when combined, best matched the composite 11028 system. The optimal fit for the primary component placed the star at the bluest point of the turnoff below the HEP, a critical location ideally suited to constrain the cluster age because the CMD turnoff at this location is effectively vertical. Given the distance, \(T_{\rm eff}\) from the SED, the radial-velocity and eclipsing light curves, one can tightly constrain the mass, radius, and luminosity of the primary. Beyond this point, SA encountered for their single pair of stars the problem equivalent to the deviant morphology of the turnoff seen in Figure 8.
Their problem is that the well-constrained primary mass, 1.222 \(\pm\) 0.006 \(M_{\odot}\), \(T_{\rm eff}\), and luminosity combine to predict an age for the star (3.0 \(\pm\) 0.3 Gyr) that is below the consistently derived 3.5 to 4.5 Gyr range. Because stars of a given mass generally grow more luminous as they evolve off the main sequence and approach the HEP, a younger age implies a lower luminosity, i.e. the primary star is too faint for the cluster age or, equivalently, the radius of the star is too small for its mass. This is exactly the same pattern seen for the stars at the blue limit below the HEP in Figure 8. The primary single-star analog, 6018, sits at (\((b-y)_{0}\), \(M_{V}\)) = (0.334, 3.837), exactly at the blue limit below the HEP. Alternatively, the mass of a star at this luminosity from our isochrones ranges from 1.191 \(M_{\odot}\) to 1.184 \(M_{\odot}\) for isochrones between 3.5 and 3.8 Gyr. While raising the metallicity of M67 moves the mass in the correct direction (SA), even adopting the Hyades metallicity for M67 comes up short with the predicted mass range from 1.211 to 1.208 \(M_{\odot}\) over the same age range. The added flaw in this solution is that boosting the metallicity to as high a value as the Hyades forces a 0.1 mag increase in the distance modulus from main sequence fitting to the isochrone ZAMS. With the true distance modulus set by _Gaia_, this increase in \((m-M)\) can only be taken up by increasing the reddening. The combined shift to the blue for the observed data places the points systematically below the isochrones. Oddly, the observed data can be forced to match the Hyades isochrone below the level of the HEP if \((m-M)_{0}\) = 9.68 and reddening is set to 0. Unfortunately, the position of the observed HEP is 0.5 mag too faint and the shape of the HEP phase is distinctly different from the isochrone. It should be noted that the mass range for stars on the giant branch for the 3.7 Gyr isochrone of Figure 7 is 1.357 to 1.395 \(M_{\odot}\); for a Hyades metallicity, the analogous range is 1.385 to 1.42 \(M_{\odot}\). At an age near 4 Gyr, Stello et al. (2016) measured an average asteroseismic mass for the red giants of 1.36 \(\pm\) 0.01 \(M_{\odot}\).

SA also offer the possibility that the binary discrepancy is due to a significantly lower degree of convective overshoot and/or diffusion. While we can offer no additional insight beyond the excellent discussion of the former option by SA, the latter may have relevance for the primary component. Diffusion can only occur in an atmosphere that lacks significant levels of mixing triggered by convection and/or rotation. As regularly noted in previous papers of this series designed to probe the Li evolution of open clusters, standard stellar evolution theory predicts only a small amount of convection for stars at the mass of the M67 turnoff and higher, but increasing levels of mixing for stars of 1.2 \(M_{\odot}\) and lower (Deliyannis et al., 1990; Pinsonneault et al., 1990; Swenson et al., 1994; Chaboyer et al., 1995; Pinsonneault, 1997). The existence of the Li dip (Wallerstein et al., 1965; Boesgaard & Tripicco, 1986) for stars between 1.1 \(M_{\odot}\) and 1.5 \(M_{\odot}\), with the exact range being dependent upon metallicity (Anthony-Twarog et al., 2021), as well as the increasing degree of depletion of Li for stars with decreasing mass at a rate significantly higher than predicted, clearly implies a missing physical process to drive the mixing in these stars.
The growing observational evidence indicates that while rotation matters, it is the rate of spindown among stars that drives mixing, both for stars above the Li dip (Deliyannis et al., 2019; Twarog et al., 2020; Anthony-Twarog et al., 2021), and early on in the evolution of lower mass stars (Anthony-Twarog et al., 2018; Jeffries et al., 2021; Sun et al., 2022). Because the Li dip is well formed by the age of the Hyades, the implication is that the stars occupying this mass range have spun down to reduced rotation speeds compared to their ZAMS values by 650 million years, occupying what is known as the Kraft curve (Kraft, 1967) in rotation velocity. Beyond the age of the Hyades, these stars continue to spin down, though at a decreased rate, as illustrated by the comparison of NGC 752 with the Hyades (Boesgaard et al., 2022) and analysis of the even older cluster, NGC 6819 (Deliyannis et al., 2019). Outside the Li dip toward lower masses, i.e. below 1.2 \(M_{\odot}\), the Li plateau is defined by stars which initially, i.e. by the age of the Hyades, have little Li depletion (0.1 dex) compared to the primordial Hyades value. As these stars age, the plateau remains, but the level gradually declines. By the age of NGC 752, the plateau is approximately 0.3 dex below that defined by the Hyades at the same mass (Boesgaard et al., 2022). The decline in the plateau level between the Hyades and NGC 752 is beautifully matched by the decline in \(v \sin i\) between the Hyades stars and those in NGC 752 (Boesgaard et al., 2022). What is surprising is that beyond the age of NGC 752, within the plateau in clusters of similar to much greater age, like IC 4651 (1.5 Gyr) (Anthony-Twarog & Twarog, 2000; Anthony-Twarog et al., 2009), NGC 3680 (1.8 Gyr) (Anthony-Twarog et al., 2009), NGC 6819 (2.25 Gyr) (Anthony-Twarog et al., 2014; Deliyannis et al., 2019) and M67 (Cummings et al., 2017), the Li level is the same as that for NGC 752 (IC 4651, NGC 3680, NGC 6819) or slightly (0.15 dex) lower (M67), within the uncertainties. Even the super-metal-rich cluster, NGC 6253 (Anthony-Twarog et al., 2009), where the stars populating the turnoff and subgiant branch come from a higher-mass Li dip due to supermetallicity, has a plateau level identical with that of M67 (Cummings et al., 2017) at an age of \(\sim\)3.0 Gyr.

For completeness, subtle secondary effects may play a role in producing the apparent convergence of the Li plateau with increasing age. For example, increased metallicity shifts the mass range for the Li dip to higher masses, placing stars of higher mass and potentially higher initial rotation rates at the plateau. This effect may be counterbalanced by the observation that Galactic Li production leads to a higher initial value of A(Li) for young clusters with higher [Fe/H] (Cummings, 2011). If mixing and Li depletion are driven by the rotational decline of stars on the plateau, the simple interpretation is that the degree of spindown for stars just cooler than the Li dip is so small after the age of NGC 752 that rotationally induced mixing becomes small, though just how small remains an open question. Since the primary star in 11028 falls in exactly the mass range where the plateau appears, it is probable that the stars populating the blue limit of the turnoff below the HEP have atmospheres that have remained stable, i.e. subject to little or no mixing, for the last 1-2 Gyr.
The stars more massive than the plateau, those occupying the HEP and post-HEP, come directly from the Li dip, but the Li dip was in place by the age of the Hyades. If these stars reached their minimum rotational velocity by this age, it is likely that these stars, despite obvious evidence for significant mixing at an earlier main sequence age, remain stable to further mixing even longer than the stars in the plateau and until they reach the subgiant branch.

The final focus of the discussion is the seven filled blue circles in Figure 8. These stars (1020, 3050, 7026, 7044, 8006, 8048, 10055) have been tagged as unusual because of their locations in the CMD blueward of the ZAMS or mean turnoff relation. While the separation of the four brighter stars is unarguable, it should be emphasized that the fainter three points exhibit the same blueward positions independently in both the \(V,(b-y)\) and \(G,(B_{P}-R_{P})\) diagrams. All the stars have been included in the long-term radial-velocity survey (Geller, Latham, & Mathieu, 2015) and all have radial-velocity membership probabilities between 95% and 98%. None of the stars falls within the photometric binary category. In fact, most lie either on the ZAMS or below it, within the uncertainties. The common assumption for the existence of stars that lie blueward and brighter than the turnoff of any cluster is that they are blue stragglers (BS), especially for a cluster like M67 which has long been known for its rich BS population. The commonly accepted scenario for the formation of such systems is a binary mass transfer/merger event that turns the lower mass companion into a higher mass star, potentially leaving a visually faint but UV-bright white dwarf companion behind (McCrea, 1964; Hills & Day, 1976; Leonard, 1989; Perets & Fabrycky, 2009). The literature in support of this scenario is heavily tied to M67 and growing (see, e.g. Leiner et al. (2019); Subramaniam et al. (2020); Pandey, Subramaniam, & Jadhav (2021); Leiner & Geller (2021); Geller et al. (2021) and the many references therein.) The high percentage of binaries among BS in M67 is detailed in Geller, Latham, & Mathieu (2015), supplying an obvious source for most photometric and/or spectroscopic anomalies. Two alternative means of identifying BS systems, particularly where the radial-velocity variations may be small to negligible, make use of the presence of a UV-bright white dwarf companion (Sindhu, Subramaniam, & Radha, 2018) or, more recently, of anomalously high rotation rates, indicative of a spinup caused by mass transfer/merger (Leiner et al., 2019). These two approaches should be relevant for systems that fall outside the CMD zones canonically occupied by BS, e.g. bluer than, but fainter than, the turnoff and recently formed BS still on the main sequence. The latter class of stars has been named _blue lurkers_ (Leiner et al., 2019). As already noted, all radial-velocity binary and/or BS systems identified in Geller, Latham, & Mathieu (2015) have been excluded from our analysis so detection of a BS binary origin for the seven bluer stars of Figure 8 must come from an alternative technique; the majority of M67 BS systems have been detected in the UV via their white dwarf companion (Pandey, Subramaniam, & Jadhav, 2021).
We have searched the primary sources for matches to our seven anomalous stars, the UVIT Catalog of Open Clusters (Jadhav et al., 2021) and the potential blue lurkers identified in M67 (Leiner et al., 2019; Jadhav, Sindhu, & Subramaniam, 2019; Subramaniam et al., 2020). Unfortunately, of the seven stars, only two lie within the UVIT field for M67, 1020 and 8006. The brightest of the blue stars in Figure 8, 1020, is classified as a blue lurker (Leiner et al., 2019). While not identified as a blue lurker, 8006, the bluest star in Figure 8, is one of the 25 brightest stars detected in the far-UV in the field of M67 and is actually brighter than 1020 at these wavelengths. From SED analysis, Jadhav, Sindhu, & Subramaniam (2019) find it probable that 8006 has a low mass white dwarf companion but indicate that further observations are required to make this claim definitive. It seems highly probable that these two stars are traditional binary BS with white dwarf companions.

To get some possible insight into the remaining five stars, we can examine the stars classed as single-star blue lurkers. In addition to 1020, Leiner et al. (2019) also list 2001 and 7035, stars that have rotational periods of 5.6 and 8.0 days, respectively. 2001 was excluded from our final sample for astrometric reasons; even taking its astrometric uncertainty into account, the star sits 5\(\sigma\) away from the cluster mean \(\mu_{\alpha}\). For 7035, its membership is not in question. It was excluded from the discussion of the turnoff region because it had been classified as a photometric variable (Geller, Latham, & Mathieu, 2015). If we insert both stars into the sample, 7035 is essentially a photometric twin for our anomalous blue candidate, 7044. In Figure 8, 7035 sits at (\((b-y)_{0}\), \(M_{V}\)) = (0.324, 3.575), leading to the possibility that 7044 is a likely blue lurker. The situation with 2001 is more complex. It sits at the red edge of the HEP, with (\((b-y)_{0}\), \(M_{V}\)) = (0.358, 3.471). While 11008 occupies almost the identical position in the CMD, it is not a photometric twin in one key respect. The \(c_{1}\) index for 2001 is lower by 0.046 mag compared to 11008. This is crucial because (\(\delta c_{1}\), \(\delta V\)) becomes (0.0, 0.8), placing the star solidly within the binary category of Figure 5. If it is a binary, any white dwarf companion cannot be the source of the extra luminosity. Whether the CMD position is an indication of nonmembership, true binarity, or a side effect of its rapid rotation (rapid rotators can support the same mass at a lower/redder temperature/color) remains an open question. The final single-star member classed as a possible blue lurker is 11005 (Subramaniam et al., 2020). This star was detected within the UVIT Catalog, but only in the far-UV. SED analysis indicates the presence of a low-mass white dwarf companion but, surprisingly, the star is a slow rotator. Its position in the CMD of Figure 8 is (\((b-y)_{0}\), \(M_{V}\)) = (0.335, 2.991), i.e. on the subgiant branch, which might explain the slow rotation. Its \(c_{1}\) index implies a single star.

## 7 Conclusions

The open cluster, M67, is a rich and rapidly growing source of insight on the evolution of stars of lower mass, with particular impact constraining models of the sun. The valuable insight which the cluster provides is intricately tied to the accuracy and precision of the parameters that define the cluster, particularly the reddening, metallicity, distance, and age.
While these values have converged from the broad ranges quoted even 30 years ago, enough uncertainty remains, for example, to cast doubt on the suitability of using the cluster as a proxy for stars of solar composition and age, with some studies deriving supersolar metallicities and generally subsolar ages. The \(uvby\)H\(\beta\) photometry detailed in this investigation supplies a second wide-field, deep set of precision data capable of being used as photometric standards and tied directly to the extensive CCD data for NGC 752 (Paper I), covering stars over a wide range of temperature and luminosity. The calibration and comparison of the two clusters demonstrates that M67 is definitely more metal-rich than NGC 752, as defined by either \(m_{1}\) or \(hk\) data, by approximately 0.062 dex. Depending upon the choice of the Hyades metallicity (+0.12 or +0.15) (Cummings et al., 2017), M67 has either [Fe/H] = +0.03 or +0.06. It should be emphasized that these values are defined by the F and early G turnoff stars. Since both indices lose sensitivity to metallicity changes near solar metallicity among giants, it is impossible to test if the giants are more metal-rich than the turnoff stars, as would be expected if significant diffusion operates on the turnoff stars. It should be noted that the lack of metallicity sensitivity does allow the red giants to be used as a test of the differential reddening estimate between the clusters, since both clump stars and first-ascent red giants at a given \((b-y)\) should have the same \(hk\). While the sample of NGC 752 red giants is small in size and color range, comparison with M67 nicely constrains the derived offset (\(\delta E(b-y)=-0.005\)) from the few hundred stars on the main sequence, i.e., \(E(b-y)\) is smaller in M67 than NGC 752 by no more than 0.010 mag. Returning to the turnoff metallicity, it is impossible to say at present what the impact of diffusion is on the specific photometric intermediate and narrow-band indices, though it is likely that the \(hk\) index is directly tied to [Ca/H] while \(m_{1}\) feels a broader impact from the overall change in [m/H]. With the reddening and metallicity in hand, the cluster age and distance should be readily determinable. Before doing this, however, _Gaia_ DR3 astrometry, in conjunction with the exquisite 40-year radial-velocity database (Geller, Latham, & Mathieu, 2015), is used to eliminate probable nonmembers and binaries. This tightly constrained sample of single-star members with the highest-precision parallaxes leads to an absolute distance modulus of \((m-M)_{0}\) = 9.69 and, combined with the reddening, \((m-M)\) = 9.77 \(\pm\) 0.03 (sem), where the uncertainty is dominated by the small uncertainty in the reddening. Again, it should be emphasized that the change in the mean parallax from earlier work is consistent with the claims of a zero-point error in the early parallaxes but assumes that none exists in the current database. With radial-velocity binaries removed, the precision photometry of the stars at the turnoff can be used to identify potential binary systems composed of stars of comparable mass that may have been missed by radial-velocity analysis, especially in the CMD region where the binary sequence normally crosses the vertical turnoff.
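The arithmetic behind the quoted moduli is simple to make explicit. The short Python sketch below converts \((m-M)_{0}\) = 9.69 into a distance and implied mean parallax, and recovers the apparent modulus from the reddening; the extinction relation \(A_{V} \approx 4.3\,E(b-y)\) and the illustrative value of \(E(b-y)\) are standard assumptions introduced here, not values quoted in the text.

```python
m_M0 = 9.69                    # absolute distance modulus (m-M)_0 from the text
d_pc = 10 ** ((m_M0 + 5) / 5)  # implied distance: ~867 pc
plx_mas = 1000.0 / d_pc        # implied mean parallax: ~1.15 mas

E_by = 0.019                   # E(b-y): an assumed, illustrative value
A_V = 4.3 * E_by               # A_V ~ 4.3 E(b-y): a standard, assumed relation
m_M = m_M0 + A_V               # apparent modulus: ~9.77, matching the text

print(f"d = {d_pc:.0f} pc, parallax = {plx_mas:.3f} mas, (m-M) = {m_M:.2f}")
```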
With all known potential sources of scatter eliminated, the parallax-based distances for the clusters can be tested against isochrones of the appropriate metallicity, leading to a color-dependent correction for the coolest stars on the ZAMS, but an excellent match otherwise for \((m-M)\) = 8.33 and 1.45 Gyr for the apparent distance and age of NGC 752, respectively. For M67, using the same color-dependent correction defined by NGC 752, an excellent match to the ZAMS and subgiant branch for an age of 3.7 Gyr and apparent distance of \((m-M)\) = 9.77 is derived. The morphology of the turnoff, however, cannot be meshed with a single isochrone. Based upon the dispersion in color at a fixed luminosity, the age can be constrained with an uncertainty of \(\pm\)0.1 Gyr. However, the color at the bluest point below the HEP implies an age between 3.3 and 3.5 Gyr. Attempts to shift the data by tweaking the reddening, metallicity, and/or distance fail to supply an internally consistent solution for the three key parameters. As illustrated by the expansive discussion of the binary 11028 by SA and the broad-band analysis of M67 by Paper II, this problem reflects a fundamental discrepancy between the theoretical physics and the observations of stars near 1.2 \(M_{\odot}\) in M67. Whatever the source(s) of the morphological distortions in the isochrones, they impact the entire turnoff region from the HEP to the Li plateau, i.e., to stars beyond the Li dip and potentially to those of solar mass. Finally, despite the removal of a wide array of photometrically and spectroscopically anomalous stars, a few still remain at the turnoff, blueward of the main sequence by amounts that appear unlikely to be caused by photometric scatter. The blueward position is critical since combinations of stars on the normal CMD track cannot create a composite bluer than either star. So, unless the system is a composite of a normal turnoff star and a UV-bright source like a white dwarf, as appears to be the case for three of the anomalous stars, the remaining systems cannot be simple binaries. If they aren't mass transfer/merger systems, their evolution off the ZAMS must have been delayed by some physical process that occurs in a more extreme form than for the vast majority of stars evolving off the main sequence and toward the HEP. It is possible that this phenomenon is related to the distortion that more extensively impacts the broad CMD distribution of stars at the turnoff relative to the isochrones. ## Acknowledgments NSF support for this project was provided to BJAT and BAT through NSF grant AST-1211621, and to CPD through NSF grants AST-1211699 and AST-1909456. The filters used in the program were obtained by BJAT and BAT through NSF grant AST-0321247 to the University of Kansas. Extensive use was made of the WEBDA database maintained by E. Paunzen at the University of Vienna, Austria ([http://www.univie.ac.at/webda](http://www.univie.ac.at/webda)). It is a pleasure to thank Eric Sandquist for making a draft copy of the NGC 752 eclipsing binary analysis available during revisions of this manuscript. It is a pleasure to thank the referee whose careful reading of the manuscript led to changes that enhanced the clarity of the discussion. This work has made use of data from the European Space Agency (ESA) mission _Gaia_, processed by the Gaia Data Processing and Analysis Consortium (DPAC).
Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the Gaia Multilateral Agreement. Facility: WIYN 0.9m. Software: IRAF (Tody, 1986).
2308.02166
Transformer-Based Denoising of Mechanical Vibration Signals
Mechanical vibration signal denoising is a pivotal task in various industrial applications, including system health monitoring and failure prediction. This paper introduces a novel deep learning transformer-based architecture specifically tailored for denoising mechanical vibration signals. The model leverages a Multi-Head Attention layer with 8 heads, processing input sequences of length 128, embedded into a 64-dimensional space. The architecture also incorporates Feed-Forward Neural Networks, Layer Normalization, and Residual Connections, resulting in enhanced recognition and extraction of essential features. Through a training process guided by the Mean Squared Error loss function and optimized using the Adam optimizer, the model demonstrates remarkable effectiveness in filtering out noise while preserving critical information related to mechanical vibrations. The specific design and choice of parameters offer a robust method adaptable to the complex nature of mechanical systems, with promising applications in industrial monitoring and maintenance. This work lays the groundwork for future exploration and optimization in the field of mechanical signal analysis and presents a significant step towards advanced and intelligent mechanical system diagnostics.
Han Chen, Yang Yu, Pengtao Li
2023-08-04T06:55:07Z
http://arxiv.org/abs/2308.02166v1
# Transformer-Based Denoising of Mechanical Vibration Signals for System Health Monitoring ###### Abstract Mechanical vibration signal denoising is a pivotal task in various industrial applications, including system health monitoring and failure prediction. This paper introduces a novel deep learning transformer-based architecture specifically tailored for denoising mechanical vibration signals. The model leverages a Multi-Head Attention layer with 8 heads, processing input sequences of length 128, embedded into a 64-dimensional space. The architecture also incorporates Feed-Forward Neural Networks, Layer Normalization, and Residual Connections, resulting in enhanced recognition and extraction of essential features. Through a training process guided by the Mean Squared Error loss function and optimized using the Adam optimizer, the model demonstrates remarkable effectiveness in filtering out noise while preserving critical information related to mechanical vibrations. The specific design and choice of parameters offer a robust method adaptable to the complex nature of mechanical systems, with promising applications in industrial monitoring and maintenance. This work lays the groundwork for future exploration and optimization in the field of mechanical signal analysis and presents a significant step towards advanced and intelligent mechanical system diagnostics. ## 1 Introduction In today's interconnected world, the importance of robust and responsive systems cannot be overstated. Whether in the context of industrial manufacturing, healthcare, transportation, or information technology, the reliance on complex systems to maintain operational continuity is ubiquitous [1, 2]. System health monitoring (SHM) has emerged as a critical field that ensures the smooth functioning of these complex mechanisms, mitigating the risk of sudden failure, and thus protecting the intricate web of daily operations across various sectors. SHM is a multifaceted discipline that draws on engineering, data analytics, predictive modeling, and other advanced technologies to track the performance of a system in real-time. Its primary aim is to detect, diagnose, and predict anomalies or failures in a system, allowing for timely intervention and maintenance [3]. This is not only crucial for minimizing downtime but also for optimizing the lifecycle and efficiency of the system. System health monitoring is akin to a vigilant sentinel, continually assessing various parameters to maintain optimal performance and safety. The evolution of SHM has been rapid, paralleling advancements in sensor technology, machine learning, and data processing. It is now possible to integrate a multitude of sensors that can track everything from temperature, pressure, and vibration to more nuanced indicators of system performance [4]. These data streams are then processed using intelligent algorithms that can discern patterns indicative of a potential failure or inefficiency. The application of artificial intelligence and machine learning has particularly transformed SHM, enabling predictive analytics that can forecast failures well before they become critical, allowing for preventive measures that can save both time and resources. Generic signal processing methodologies can also enhance the analysis of acoustic signals with scattered sound fields [5]. However, the implementation of SHM is not without challenges.
The complexity of modern systems, coupled with the ever-growing demands for efficiency and sustainability, requires a nuanced understanding of the particular system's architecture and operational context. The integration of sensors and analytical tools must be carefully tailored to the specific needs and constraints of the system in question. Furthermore, ethical considerations related to data privacy and security must be judiciously addressed to maintain trust and compliance with regulatory requirements. In industries where system failures can result in catastrophic consequences, such as aviation, nuclear energy, and healthcare, SHM is not merely an optimization tool but a vital component for safety and compliance [6]. Even in less critical domains, the economic impact of unplanned downtime can be substantial, underscoring the universal relevance of SHM. Vibration signals have also been studied to detect the artifacts of mechanical products [7]. This paper aims to explore the contemporary landscape of system health monitoring, delving into the underlying technologies, methodologies, applications, and challenges. Through a comprehensive examination of recent developments and case studies, it will illuminate the vital role that SHM plays in our modern world, fostering a deeper understanding of its potential to drive innovation, efficiency, and safety across various domains. ## 2 Prior Arts and Methods In mechanical systems, the reliability and effectiveness of data analysis and control depend heavily on the quality of the acquired signals. Noise interference, stemming from various sources, often corrupts these signals and hampers accurate monitoring and control. Signal denoising, the process of extracting the true signal from the noise-corrupted observation, plays a pivotal role in improving the efficiency and reliability of mechanical systems. In industrial applications, transportation, healthcare equipment, and more, denoising is indispensable for accurate system modeling, fault detection, and performance enhancement [8]. The challenge of signal denoising lies in the diversity of noise types and their often complex interaction with the underlying true signal. Mechanical systems are typically subjected to various environmental noises, equipment vibrations, electronic interference, and human-induced errors [9]. The task of distinguishing the genuine signal from this multifaceted noise without compromising essential information is a non-trivial problem that requires sophisticated techniques and algorithms [10]. This paper aims to survey the field of signal denoising with a specific focus on its applications in mechanical systems. It will delve into the theoretical foundations, methodologies, and state-of-the-art technologies used for denoising, with a particular emphasis on exploring how these methods can be adapted to meet the unique demands of different mechanical systems. With the advent of the wavelet transform in the late 1980s, a new era of signal denoising began. Donoho and Johnstone's seminal work in 1994 introduced wavelet thresholding, providing a powerful tool to preserve signal characteristics while removing noise [11, 12]. This approach revolutionized denoising applications in mechanical systems, allowing for more nuanced handling of non-stationary signals typical in such contexts. Recent advances integrating the wavelet transform and autoencoders shed light on 2D signal denoising, achieving state-of-the-art denoising performance with a minimal number of parameters [13].
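To make the wavelet-thresholding idea concrete, the following minimal Python sketch applies soft thresholding with the universal (VisuShrink-style) threshold of Donoho and Johnstone to a noisy 1D vibration signal. It uses the PyWavelets library; the wavelet choice (db4), decomposition level, noise estimate from the finest detail level, and the toy signal are illustrative assumptions, not the configuration of any cited work.

```python
import numpy as np
import pywt

def wavelet_denoise(x, wavelet="db4", level=4):
    """Soft wavelet thresholding with the universal threshold."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    # Estimate the noise standard deviation from the finest detail coefficients.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(len(x)))  # universal threshold
    # Soft-threshold every detail level; keep the approximation coefficients.
    denoised = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(x)]

# Toy vibration signal: two sinusoids plus additive Gaussian noise.
t = np.linspace(0.0, 1.0, 1024)
clean = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)
noisy = clean + 0.3 * np.random.default_rng(0).normal(size=t.size)
recovered = wavelet_denoise(noisy)
```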
The field of signal denoising has a rich and diverse history, spanning several decades. In the early stages, linear filtering techniques, such as the Wiener filter, were commonly used. These methods provided a simple way to reduce noise but often led to the loss of important signal features [14]. The development of adaptive filtering techniques, like the Kalman filter, added to the arsenal of tools for denoising, offering more flexibility in handling diverse noise structures. More recently, the emergence of machine learning, particularly deep learning techniques like Convolutional Neural Networks (CNNs), has opened up new frontiers in denoising [15]. These methods leverage large datasets to learn intricate noise patterns, providing highly customized denoising solutions [8]. However, despite these advancements, challenges remain in the application of denoising to mechanical systems. The complexity of noises, non-linearity of mechanical systems, and the need for real-time processing pose persistent hurdles. Recent research has begun to explore hybrid methods, combining traditional statistical techniques with machine learning, to create more robust and adaptive denoising frameworks [14]. This paper will provide an in-depth examination of these historical developments, current state-of-the-art methodologies, and the emerging trends in signal denoising for mechanical systems. By synthesizing insights from across this broad spectrum, it seeks to identify potential future directions and innovations that can further enhance the efficiency and reliability of mechanical systems through advanced signal denoising techniques. ## 3 Results and Discussion ### AutoRegressive Denoising **Preliminary Analysis of Mechanical Vibration Signal.** The mechanical vibration signal's frequency components and noise types are initially analyzed. This understanding informs the choice of the AR model order and helps in recognizing how noise is embedded within the signal in a mechanical context. **Selection of Autoregressive Model Order for Mechanical Systems.** The appropriate AR model order for the specific mechanical system is selected using criteria like AIC or BIC. A suitable model order captures the mechanical system's underlying dynamics without overfitting. The chosen order is used to fit the AR model to the mechanical vibration signal, employing methods such as the Yule-Walker equations. **Estimation of Noise in Mechanical Signal.** Residual errors from fitting the AR model to the mechanical signal are analyzed to estimate noise characteristics. This noise estimation, grounded in the mechanical system's unique vibration signature, helps in differentiating the true signal from noise. **Reconstruction of Mechanical Vibration Signal.** The AR coefficients and noise estimates are used to reconstruct the denoised mechanical vibration signal. This step aims to preserve the genuine mechanical vibration information while eliminating noise. **Iterative Refinement of Mechanical Signal Denoising (Optional).** For intricate mechanical systems with complex noise structures, iterative refinement may be performed. Using the denoised signal as the new input for repeated denoising improves precision in mechanical signal recovery. **Validation of Denoising in Mechanical Context.** The effectiveness of the denoising method is assessed considering the mechanical system's specific requirements. Comparisons with reference mechanical signals or evaluation through mechanically relevant metrics like SNR validate the method's performance.
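These steps can be prototyped compactly. The Python sketch below is an illustrative assumption of how the pipeline could look, not the authors' exact implementation: it selects the AR order by AIC, estimates coefficients via the Yule-Walker equations using statsmodels, and takes the one-step-ahead predictions as the denoised signal, with the residuals serving as the noise estimate (x is assumed to be a 1D NumPy array).

```python
import numpy as np
from statsmodels.regression.linear_model import yule_walker
from statsmodels.tsa.ar_model import AutoReg

def ar_denoise(x, max_order=30):
    """AR-based denoising: order by AIC, coefficients by Yule-Walker,
    one-step-ahead predictions taken as the denoised signal."""
    mu = x.mean()
    xc = x - mu  # work on the demeaned signal, as yule_walker does internally
    # 1. Order selection: minimize AIC over candidate AR orders.
    order = min(range(1, max_order + 1), key=lambda p: AutoReg(xc, lags=p).fit().aic)
    # 2. Coefficient and noise-level estimation via the Yule-Walker equations.
    rho, sigma = yule_walker(xc, order=order, method="mle")
    # 3. Reconstruction: one-step-ahead prediction from the previous `order` samples
    #    (the first `order` samples are left as observed).
    x_hat = xc.copy()
    for t in range(order, len(xc)):
        x_hat[t] = rho @ xc[t - order:t][::-1]
    noise = xc - x_hat  # residuals approximate the embedded noise
    return x_hat + mu, noise, order, sigma
```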
**Conclusion of the Method Section.** The described autoregressive method provides a robust approach specifically tailored to mechanical vibration signal denoising. By considering the unique characteristics of mechanical systems, this method offers a targeted solution for accurate vibration signal recovery. Its adaptability allows for applications across various mechanical domains, enhancing efficiency and reliability in situations where precise vibration analysis is essential for the system's functionality and safety.

Figure 1: Mechanical motor shaft vibration signals. Upper panel: noise-free signal with added Gaussian noise; lower panel: noise-free signal with added Brownian noise.

### Transformer Denoising Transformer architectures have advanced both 1D and 2D signal processing techniques [16, 17]. The raw mechanical vibration signals often contain noise and disturbances. A careful preprocessing step, including normalization and segmenting the signals into appropriate windows, is performed. Exploratory data analysis helps in understanding the nature of the noise and the critical features of the mechanical vibration signals. **Designing the Transformer Architecture.** Transformers are a class of deep learning models particularly effective in handling sequential data. For the mechanical vibration signals, a custom transformer architecture is designed. This architecture consists of multiple layers of self-attention mechanisms, feed-forward neural networks, and normalization layers. The self-attention mechanism allows the model to weigh the importance of different parts of the signal sequence, thereby capturing the intricate relationships within the mechanical vibrations. The transformer model is designed to process sequences of mechanical vibration signals. The model expects input sequences with a specific length of 128, and each signal is embedded into a 64-dimensional vector space. The core of the model is the Multi-Head Attention layer, equipped with 8 attention heads. These multiple heads allow the model to simultaneously focus on different parts of the sequence, capturing intricate patterns and relationships within the mechanical vibrations. The attention mechanism employs key, query, and value vectors, all having a dimensionality of 64, aligning with the embedding size. Following the attention mechanism, a Layer Normalization is applied to stabilize the learning process. This layer normalizes the output across the features and uses an epsilon value of 1e-6 to ensure numerical stability. The next part of the transformer block consists of a Feed-Forward Neural Network. This network is constructed of two dense layers, each containing 64 hidden units. A ReLU activation function is placed between these layers to introduce non-linearity, allowing the model to learn more complex transformations of the attention output. Residual Connections are implemented around both the attention and feed-forward layers. These connections are simple additions of the input and output of the layers, enabling the training of deeper models by allowing gradients to flow more freely through the network. The model concludes with a final dense layer with 64 units, responsible for producing the denoised signal sequence.

Figure 2: Performance of the denoising algorithm in the presence of different noise levels. Left panel: noise variance = 0.1; right panel: noise variance = 0.2.

The model is compiled with the Adam optimizer, a popular choice known for its adaptive learning rate. The loss function used to guide the training process is the Mean Squared Error (MSE), suitable for measuring the difference between the denoised output and the original clean signal. The model is trained for 50 complete passes through the dataset, with the data divided into batches of 32 examples each. A validation split of 20% is used to assess the model's performance on unseen data during the training process.
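A minimal Keras sketch consistent with the hyperparameters described above is given below. It is an illustrative reconstruction, not the authors' released code; the framework (TensorFlow/Keras), layer choices, and the shapes of the assumed `noisy` and `clean` training arrays are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

SEQ_LEN, EMBED_DIM, NUM_HEADS, FF_UNITS = 128, 64, 8, 64

inputs = layers.Input(shape=(SEQ_LEN, EMBED_DIM))

# Multi-head self-attention: 8 heads; key/query/value dimensionality of 64.
attn = layers.MultiHeadAttention(num_heads=NUM_HEADS, key_dim=EMBED_DIM)(inputs, inputs)
x = layers.LayerNormalization(epsilon=1e-6)(inputs + attn)  # residual + norm

# Position-wise feed-forward network: two dense layers of 64 units with ReLU between.
ffn = layers.Dense(FF_UNITS, activation="relu")(x)
ffn = layers.Dense(FF_UNITS)(ffn)
x = layers.LayerNormalization(epsilon=1e-6)(x + ffn)        # residual + norm

# Final dense layer producing the denoised sequence.
outputs = layers.Dense(EMBED_DIM)(x)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse")

# Training as described: 50 epochs, batch size 32, 20% validation split.
# `noisy` and `clean` are assumed to be arrays of shape (num_windows, 128, 64).
# model.fit(noisy, clean, epochs=50, batch_size=32, validation_split=0.2)
```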
In summary, the transformer architecture, specifically designed for mechanical vibration signal denoising, combines multi-head attention, feed-forward networks, layer normalization, and residual connections. The chosen numerical values for the sequence length, embedding size, attention heads, and other parameters are tailored to capture the underlying characteristics of mechanical vibrations and effectively filter out noise. ## 4 Conclusion and Future Work In this work, we have presented a novel approach to mechanical vibration signal denoising utilizing a transformer-based architecture. The complexity and characteristics of mechanical vibrations necessitate a robust and adaptive method capable of discerning relevant patterns while eliminating noise. Our model, designed with a sequence length of 128 and an embedding size of 64, is particularly tailored to these requirements. The innovative use of a Multi-Head Attention layer with 8 attention heads has enabled the model to simultaneously consider various aspects of the signal sequence, thereby enhancing its ability to recognize and extract essential features. The attention mechanism, paired with the Feed-Forward Neural Network and Residual Connections, provides flexibility and depth, facilitating the learning of intricate signal relationships. Further stability in training has been achieved through Layer Normalization, applied after both the attention and feed-forward layers, ensuring numerical stability and efficient convergence. The training process, guided by the Mean Squared Error loss function and optimized using the Adam optimizer, was carried out for 50 epochs, with a carefully selected batch size of 32 and a validation split of 20%. The final model has demonstrated effectiveness in denoising mechanical vibration signals, retaining crucial information while filtering out undesired noise. The architecture's specific design, coupled with the thoughtful selection of parameters, has resulted in a method capable of adapting to the complex nature of mechanical systems. Future work may explore further optimization of the architecture and parameters, integration with other denoising techniques, and extensive testing across various types of mechanical systems. The current results, however, lay a strong foundation for applying deep learning transformers in the critical field of mechanical vibration signal analysis and denoising, with promising implications for industrial maintenance, monitoring, and diagnostics.
2310.18374
Forum on immune digital twins: a meeting report
Medical digital twins are computational models of human biology relevant to a given medical condition, which can be tailored to an individual patient, thereby predicting the course of disease and informing individualized treatments, an important goal of personalized medicine. The immune system, which has a central role in many diseases, is highly heterogeneous between individuals, and thus poses a major challenge for this technology. If medical digital twins are to faithfully capture the characteristics of a patient's immune system, we need to answer many questions, such as: What do we need to know about the immune system to build mathematical models that reflect features of an individual? What data do we need to collect across the different scales of immune system action? What are the right modeling paradigms to properly capture immune system complexity? In February 2023, an international group of experts convened in Lake Nona, FL for two days to discuss these and other questions related to digital twins of the immune system. The group consisted of clinicians, immunologists, biologists, and mathematical modelers, representative of the interdisciplinary nature of medical digital twin development. A video recording of the entire event is available. This paper presents a synopsis of the discussions and brief descriptions of ongoing digital twin projects at different stages of progress. It also proposes a 5-year action plan for further developing this technology. The main recommendations are to identify and pursue a small number of promising use cases, to develop stimulation-specific assays of immune function in a clinical setting, and to develop a database of existing computational immune models, as well as advanced modeling technology and infrastructure.
Reinhard Laubenbacher, Fred Adler, Gary An, Filippo Castiglione, Stephen Eubank, Luis L. Fonseca, James Glazier, Tomas Helikar, Marti Jett-Tilton, Denise Kirschner, Paul Macklin, Borna Mehrad, Beth Moore, Virginia Pasour, Ilya Shmulevich, Amber Smith, Isabel Voigt, Thomas E. Yankeelov, Tjalf Ziemssen
2023-10-26T20:14:35Z
http://arxiv.org/abs/2310.18374v1
# Forum on Immune Digital Twins: A Meeting Report ###### Abstract Medical digital twins are computational models of human biology relevant to a given medical condition, which can be tailored to an individual patient, thereby predicting the course of disease and informing individualized treatments, an important goal of personalized medicine. The immune system, which has a central role in many diseases, is highly heterogeneous between individuals, and thus poses a major challenge for this technology. If medical digital twins are to faithfully capture the characteristics of a patient's immune system, we need to answer many questions, such as: What do we need to know about the immune system to build mathematical models that reflect features of an individual? What data do we need to collect across the different scales of immune system action? What are the right modeling paradigms to properly capture immune system complexity? In February 2023, an international group of experts convened in Lake Nona, FL for two days to discuss these and other questions related to digital twins of the immune system. The group consisted of clinicians, immunologists, biologists, and mathematical modelers, representative of the interdisciplinary nature of medical digital twin development. A video recording of the entire event is available. This paper presents a synopsis of the discussions and brief descriptions of ongoing digital twin projects at different stages of progress. It also proposes a 5-year action plan for further developing this technology. The main recommendations are to identify and pursue a small number of promising use cases, to develop stimulation-specific assays of immune function in a clinical setting, and to develop a database of existing computational immune models, as well as advanced modeling technology and infrastructure. ## Introduction The concept of a _medical digital twin_ (MDT) represents a pivotal technology envisioned to make personalized medicine a reality. This entails using predictive computational models to harness diverse patient data over time, allowing for identification of optimal interventions and corresponding predictions of their effectiveness for an individual patient; see, e.g., [1][2][3][4]. Scaling up this concept into a widely used medical technology necessitates substantial coordinated advancements across several fields, including human biology, medicine, biochemistry, bioinformatics, and mathematical and computational modeling. A sign of increasing interest in this technology was evident in the workshop "Opportunities and Challenges for Digital Twins in Medicine," organized by the National Academies of Science, Engineering, and Medicine in January 2023 [5][6]. One possible long-term vision is a virtual replica of an entire patient that evolves with the patient over the course of their lives, as articulated by the Virtual Physiological Human Institute [7] and the European Virtual Human Twin Project [8]. The foundations for MDT technology, however, are yet to be developed. The Forum described here, and other efforts [9][10], have focused on digital twins for medical conditions related to the immune system. This provides a narrower focus, but at the same time addresses a wide range of diseases that involve the immune system in an essential way, such as infectious diseases, autoimmune diseases, and cancer, among others.
For this article, we adopted a broad definition of an MDT: it comprises a patient, a set of temporal data collected from that patient, and a computational model calibrated with this data. This allows for conclusions to be drawn about the patient, either at one time point or in the form of outcome forecasting, with or without interventions. In essence, the patient is paired with a computational model that is personalized and may synchronize repeatedly with the patient over time. Alternatively, the MDT might represent a patient population for the purpose of virtual clinical trials. This ongoing linkage and exchange of data and information between the patient and the MDT is the most important characteristic that distinguishes an MDT from a model, even a personalized model. This concept applies to treatment of an existing health condition or preventing the emergence of one, in which case the data might come from electronic health records or wearable sensors [11]. This definition of personalization is rooted in the hypothesis that any two given patients will differ in underlying biology, disease trajectory, and hence optimal treatments over time. Even with this broader definition, there are relatively few instances of MDTs that have reached patient care. While there are many models in the literature that could be further developed into MDTs, the link to individual patients has not been fully explored in most cases, so that extensive further development is required. The Forum, the subject of this report, was primarily focused on such early-stage MDTs and what is needed to progress to the clinical stage. The report highlights some projects of this kind as use cases: **computational models that are being used or could be further developed for use in informing the treatment of individual patients.** While the industrial application of the digital twin concept is instructive, it differs from medical digital twins in several key aspects. Most importantly, human biology is not the result of a planned design, but the outcome of an evolutionary process, with many emergent properties. We do not have a complete theoretical understanding of biological systems that would provide a list of general principles on which computational models could be based, as we do for physical systems. Finally, two other characteristic features of biological systems are genotypic and phenotypic heterogeneity across individuals and stochasticity in system dynamics. All these features present massive challenges to mathematical modeling of individuals and populations. A wide range of mechanistic, phenomenological, and statistical models are being used for this purpose. Biological mechanisms cross scales, as do therapeutic interventions. For instance, many drugs target intracellular mechanisms but have tissue- or organ-level effects. Therefore, many mechanistic MDTs will need to span multiple scales. This raises the question of whether our current repertoire of modeling paradigms is sufficient to form the basis of digital twins across various health conditions and how to choose the right type of model for each one. How can we effectively and credibly capture key features of human biology in a manner suitable for a specific clinical application while considering the diversity of patients and their individual characteristics?
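Although the Forum's definition is conceptual, it maps naturally onto a simple software pattern. The Python sketch below is a schematic illustration of the patient-twin synchronization loop implied by the definition above; every class and method name is hypothetical, and the recalibration and forecasting steps are deliberately left as stubs for an application-specific model.

```python
from dataclasses import dataclass, field

@dataclass
class MedicalDigitalTwin:
    """Schematic MDT: a patient, a growing temporal data record, and a
    computational model whose parameters are recalibrated as data arrive."""
    patient_id: str
    observations: list = field(default_factory=list)  # (time, measurement) pairs
    parameters: dict = field(default_factory=dict)    # personalized model state

    def synchronize(self, time, measurement):
        """Ingest new patient data and recalibrate the model (stub)."""
        self.observations.append((time, measurement))
        self.parameters["last_update"] = time  # placeholder for recalibration

    def forecast(self, horizon, intervention=None):
        """Predict the patient's trajectory, optionally under an intervention."""
        raise NotImplementedError("requires an application-specific model")
```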
To begin addressing these questions, an international group of experts convened in Lake Nona, FL, February 23-24, 2023, for the "Forum on Precision Immunology: Immune Digital Twins" [12], supported by a grant from the Biomechanics Program at the U.S. Army Research Office. The aim was to discuss these questions and assess examples of ongoing modeling projects that are part of MDT development related to immunity. This report encapsulates a synthesis of these discussions, offering a sample of ongoing MDT projects at different stages of development, and an outline of challenges to be addressed over the next five years. The development of MDTs takes place at the interface of medicine, experimental biology, and mathematical modeling (see Fig. 1). The Forum participants are all authors of this article, and represent a cross-section of these fields, including clinicians, immunologists, experimental biologists, and mathematical modelers. The Forum served as a venue to discuss the different perspectives each of these communities has on the prospect of using personalized computational models in the clinic.

Figure 1: The development of MDTs requires the collaboration and close communication of three communities: biologists, clinicians, and computational modelers. They are connected through an infrastructure that allows the flow of data and information connecting the digital twin with the patient. The end result is a digital replica that closely tracks the patient over time.

To facilitate an exchange of ideas across these fields, the program consisted of a collection of 45-minute blocks, with a 15-minute presentation by a participant, followed by 30 minutes of discussion. The only audio-visual aid available to presenters was a whiteboard, favoring discussion over formal presentations. High-quality audio-visual recordings of the individual sessions are available through links at [12]. The reader is encouraged to view the presentations, as they contain many valuable ideas, viewpoints, and information not contained in this synopsis. Below are the titles of each of the discussion sessions. (The titles correspond to the links to video recordings on the website [12].)

**Adler:** Summary of conceptual, scientific, practical and ethical challenges and opportunities discussed by other participants in developing medical digital twins.
**An:** Axioms of personalized precision medicine.
**Castiglione:** Constructing a computational representation of the Immune System: necessities, constituents, and operational aspects, along with proposed approaches for model development.
**Eubank:** Lessons to be learned from other fields about data assimilation.
**Glazier:** A theoretical framework for the construction of medical digital twins.
**Helikar:** Towards a General Purpose Immune Digital Twin.
**Jett-Tilton:** Digital twins for PTSD.
**Kirschner:** Models and Tools for building beta versions of digital partners.
**Laubenbacher:** Introduction to the Forum.
**Macklin:** Integration of standardized, reusable descriptions of cell behaviors and interactions.
**Mehrad:** The application of MDTs to the intensive care unit.
**Moore:** Immunologic considerations for building MDT.
**Pasour:** Funding opportunities.
**Shmulevich:** Patient Digital Twin for Acute Myeloid Leukemia.
**Smith:** Immune heterogeneity in the context of lung infection.
**Yankeelov:** Imaging-based digital twins for oncology.
**Ziemssen:** A digital twin for autoimmune diseases accessible to the patient.
We now outline the general themes of the Forum discussions for each of the three pillars of MDTs: the clinic, immunology, and mathematical modeling, and we extract a collection of action items for a 5-year plan to further MDT development. ## Human Immune System Biology The human immune system is highly specialized and has evolved to have exquisite specificity for defending its host from injury and infection. During health, the immune response is tightly orchestrated to respond to threats without inducing significant tissue damage, but dysregulation can occur, contributing to cancer or autoimmunity [13]. The complexity, specificity, and regulation are all challenges to creating effective MDTs involving the immune response. A few of these challenges and opportunities are highlighted below. The list is not comprehensive and serves to illustrate some key directions for research and data collection. 1. _Genetic Diversity and Immune Cell Activation:_ Self-specificity of the adaptive immune response in any individual is governed by the ability of the host cell to display peptides derived from foreign threats (e.g., microbes) in the context of human leukocyte antigens (HLA). Every person inherits 6 major HLA alleles from each parent (HLA-A, B, C, DP, DQ, and DR, for 12 total) and there are over 37,000 HLA and related alleles characterized to date [14]. If this were relevant information for a particular MDT, then we could simply tissue-type individuals and feed their alleles into our MDT. However, while there are good computational tools to predict what peptide will fit into the groove of the appropriate HLA molecule of each person [15], we do not yet have good methods to predict which of the many possibilities is likely to be immunodominant within a person. The immunodominance of the response will be related to the cadre of T and B cell receptors that are present in each individual. It is estimated that 10\({}^{13}\)-10\({}^{18}\) different T and B cell immune receptors are generated through genetic rearrangements, reassortments and editing; however, many of the cells carrying autoreactive receptors (or those that don't recognize self at all) are deleted, leaving approximately 10\({}^{11}\)-10\({}^{12}\) different specificities in circulation [16]. The ability of a particular T cell to encounter a particular antigen-presenting cell with the correct HLA and the correct peptide for activation is stochastic in nature and involves the probability of the two cells encountering one another and the strength of the interactions and co-receptor signaling to activate the cells. Once activated, a further level of complexity involves the cytokine stimulations that will direct the T cells into particular subsets with unique attributes [17]. Thus, the sheer complexity in terms of genetics, random interactions, cytokine profiles, receptor diversity and outcomes is daunting when considering how to model individualized immune responses within an MDT. 2. _Phenotype \(\neq\) Function:_ Despite the immense complexity described above, there are emerging machine learning and artificial intelligence algorithms that can predict peptide binding to HLA, T cell binding to peptide-HLA and T helper cell differentiation programs [18][19][16][20][21], but it may not be necessary to build all the inherent complexity into an MDT.
For example, RNA sequencing technology and multi-parameter flow cytometry have given us the opportunity to finely phenotype immune cell subsets (with flow cytometry being a potentially real-time source of data), but the molecular phenotype of the cells does not necessarily connote function. Thus, an area to focus on for the future is to develop rapid functional assays that will assess a desired output (_e.g._, production of interferon gamma in response to a viral antigen stimulation), which would indicate that antigen-specific T cell responses had occurred. Identification of one or a few critical functional assays of relevance to the disease process being modeled may be an effective way to aggregate the many variables of immune activation into a single continuous variable for modeling immune response. 3. _Stochasticity:_ In medicine, when patients respond differently to various infections or treatments, we often attribute this to differences in age, genetics, or other lifestyle factors. However, even in the research laboratory where experiments can be conducted in genetically identical individuals (_e.g._, mice), housed in identical conditions and infected with identical pathogens and doses, we see variations in the response. For example, for any infection, researchers can generate an LD\({}_{50}\), which is defined experimentally as the dose at which 50% of the animals die. How is it possible for there to be a dichotomy in the response between identical individuals? This is generally attributed to stochastic events such as deposition of the infectious agent in particular regions of the body (_e.g._, different lung lobes), and it may relate to the number of immune cells patrolling that particular area of deposition. Given that this is a known biological phenomenon, such _stochasticity will need to be built into the MDT_. 4. _Source of samples:_ Another hurdle for MDTs is that the most easily accessible source of biological material is the bloodstream; however, the response patterns in circulation often do not mirror the tissue-specific responses. Finding ways to safely sample tissues like the brain or lung or heart remains a challenge, and therefore surrogate measures are likely needed. MDTs can be built from electronic health records, but these will be incomplete for many tissue compartment responses. ### Five-year action plan for human immune system biology A major goal for the next 5 years should be the development of enhanced _in vitro_ and _ex vivo_ assays to accurately predict immune function in response to relevant stimulation. Additionally, creating workflows that can swiftly provide this crucial information in a clinical setting will aid in prognostication in the MDT. Achieving this will likely necessitate improved modeling of _in vivo_ conditions, including aspects such as oxygen saturation, tissue architecture and tissue-specific compartmentalization of responses. One potential approach could involve monitoring an individual's "immune baseline" through routine blood work at homeostasis. This may sufficiently inform the development of MDTs. While this could help predict some clinically relevant responses (_e.g._, someone with high basal IL-4/IL-13 levels and high circulating eosinophil levels might be predicted to have more exuberant allergic responses), the post-stimulation response beyond the baseline may be more pertinent. Therefore, we strongly advocate for the development of stimulation-specific assays of immune functions.
Other recommendations for improved modeling of the human immune response via MDTs include: * Development of _ex vivo_ culture systems that rapidly measure relevant cell behaviors and interactions (proliferation, cytokine secretion, phagocytosis, etc.) in response to disease-relevant stimuli to provide data for MDT modeling. * Identification of culture systems and animal models that more closely mimic human _in vivo_ biology (e.g., 3-D cultures with multiple cell types, organoids, hypoxic environments) to provide more relevant insight for modeling. The inclusion of immune cells in _ex vivo_ tissue models, which is currently rare, should be encouraged. * Identification of ways to sample tissue-specific compartment responses (_e.g._, using implanted sensors or scaffolds for sampling). Alternatively, identification of surrogate markers for tissue-specific responses in circulation. ## The Clinic The "twin" component of an MDT explicitly ties the digital object to an individual patient, and therefore inherently incorporates a translational purpose of the MDT. As such, the potential clinical role of an MDT will drive its development. Clinical practice can be divided into a series of distinct, but related tasks: 1) diagnosis of a potential disease state (this includes monitoring a state of health to identify divergences); 2) prognosis, which attempts to predict or forecast a particular disease trajectory; 3) personalization/optimization of existing therapies; and 4) the discovery/testing of novel treatments. Items 1-3 form the basis of current clinical practice, with a mixture of basic pathophysiology, evidence-based (ideally) practice guidelines and an individual physician's expertise and intuition. Conversely, Item 4, the discovery/testing of novel treatments, is traditionally the purview of research. These tasks can also be grouped into types: 1) a classification task ("What illness is the medical team dealing with?"); 2) a forecasting task ("What is going to happen to my patient in the future?"); and 3) a control task ("What is the best course of action to make my patient better?"). Classifying a particular use-case for potential MDTs can aid in determining what sort of data is necessary and available (or not) for a particular purpose, what the time scale might be for the updating between the MDT and the real-world twin, and what type of computational method(s) would be needed to propagate the MDT forward in time (this aspect will be covered in more detail in the "Mathematical and Computational Modeling" section below). Another application one could envision is for an MDT to serve as a benchmarking tool to evaluate current therapy. It is worth stating explicitly that MDT technology will likely follow the same path as other new technologies. Initial prototypes will have a limited range of capabilities and modest performance and serve perhaps more as proofs-of-concept than as fully functional products. The minimum bar any MDT will have to clear, of course, is that it needs to perform at least as well as the standard of care for a given application, without any additional risks to patients. The experience gained from initial development and data collected from its use will then drive the development of increasingly more sophisticated versions. As an example, we present a cascading set of increasingly powerful potential use cases of MDTs in the treatment of sepsis, one of the largest sources of morbidity, mortality and health care costs worldwide (WHO).
1. Early detection of sepsis is a health-monitoring classification task. This could employ an MDT trained on physiological signals, electronic medical record data and standard laboratory values to deliver an "early warning system" for sepsis. 2. Predicting the trajectory of sepsis. This could be related to the diagnosis task, as certain features might suggest a clinical trajectory that leads to sepsis. It could also be applied to patients already diagnosed with sepsis, to risk-stratify patients and identify those at risk of clinical deterioration. 3. Optimization of existing therapies for sepsis. The mainstay of current treatment of sepsis involves early administration of antibiotics, source control of potential sources of infection, and physiological support, which includes fluid resuscitation, the use of vasopressors to support blood pressure, and mechanical devices to support failing organs (_i.e._, ventilators and dialysis machines). The combinations of applications, both in time and in degree, could be guided by a sufficiently trained MDT. 4. Discovery and deployment of new therapies. The unfortunate fact of sepsis is that, to date, there is no generally accepted means of interrupting the underlying inflammatory/immune biology that drives sepsis and its subsequent organ failure. Major contributing reasons for this are the overall heterogeneity of the septic population (reflected in a gap between the means of "diagnosing" sepsis and the degree of knowledge regarding the cellular-molecular mechanisms that drive the disease) and the complexity, both in terms of the underlying biological mechanisms and their dynamics given different insults, of the disease course. In short, effective treatment/control requires identifying the right patient at the right time for the right set of therapies, and the current means of doing these tasks for a septic patient are woefully inadequate. It is here that MDTs can play an invaluable role in personalizing the characterization of a septic patient so that "right patient, right time, right drug(s)" can be achieved. ### Five-year action plan for the clinic A five-year plan for the development and deployment of MDTs needs to integrate capabilities that can improve patient health within a decade, with aspirational capabilities that will allow MDTs to reach their full potential. With this in mind, we propose the following actions: 1. Clear identification of specific disease processes to be targeted for development (some candidates are identified below). 2. Explicit definition of specific use-cases/tasks for a given disease. 3. Identification of specific data types required for each use case, whether that data currently exists in some form or will be available in the future to meet the capabilities of an aspirational MDT. Of note, obtaining time series/ongoing collected data is essential to this step, as the concept of time-evolution of the MDT is inherent to its definition. 4. These first three steps should be integrated into a detailed "roadmap" for the development and deployment of the MDT. 5. Use of this roadmap to engage collaborators and stakeholders (_i.e._, clinicians and clinical researchers, assay developers, mathematical modelers, and biologists) to facilitate the collection of existing data and develop the capability to acquire new types of data as needed. 6. Deployment of an initial MDT with diagnostic and prognostic utility for clinical decision-support.
Ideally, there should be enough preliminary data that, after a short period, clinical trials to demonstrate their utility could reasonably be planned. ## Mathematical and Computational Modeling The engine of any MDT is a computational model. Depending on the application and available data, it may include mechanistic information about the relevant human biology, and it may take as input information specific to either an individual patient or a patient population. In all cases, the output is information that can be used in the treatment of an individual patient. Figure 2 depicts the role of the computational model in the workflow of MDT applications. For instance, a deep learning model might be trained on clinical data from a large cohort of gastric cancer patients and then used to determine a patient's response to an immunotherapy treatment [22]. Such models may or may not include any mechanistic information about the relevant tumor biology, such as mutated signaling pathways and their downstream effects, and predictions for the specific patient are based on correlation between the patient's data and those of the reference population used in the model. At the other end of the spectrum, a computational model may capture all known features of human biology relevant to a given application and may make treatment recommendations based on a model analysis, informing clinical trials, without using any data from a specific patient [22]. An MDT that most closely adheres to the industrial concept of a digital twin needs to do both. It will use a mechanistic model of some aspect of human biology and also provide individual treatment recommendations based on that patient's data. **The focus of Forum participants was primarily on MDTs based on a mechanistic computational model.** This preference stems from the ability of mechanistic MDTs to link outcomes to mechanisms, thereby informing treatment. Additionally, these models allow for the performance of uncertainty quantification in relation to their predictions. Many mechanistic models of human biology are now available, particularly those incorporating aspects of the immune system. For numerous applications, the underlying model of an MDT will need to encompass various mechanisms, spanning several spatial and temporal scales. For example, while most drug mechanisms are intracellular, their effects manifest at the tissue or organ scale, necessitating cross-scale integration. The immune response to an infection is multifaceted, coordinating diverse mechanisms and cell types. Consequently, computational models for MDTs will likely be high-dimensional, multi-scale, multi-physics, hybrid, and stochastic, containing numerous parameters. Integrating heterogeneous data types, from molecular to physiological, will be essential for their parameterization and application. Most crucially, these models should be adaptable to individual patient data. Very few such models have been constructed for clinical use or new biology discovery, leading us into uncharted territory in their construction, analysis, validation, and application. Below is a proposed 5-year plan to develop the necessary technology for building credible MDTs for applications involving the immune system. ### Five-year action plan for mathematical and computational modeling 1. The biomedical modeling community has spent decades building complex models of different medical and disease processes in humans, from cancer to infections.
These are all potentially usable as drivers of MDTs or components thereof. As a first step, we need to develop and curate a repository of model templates (_i.e._, accepted model structures) and specific model modules (e.g., peer-reviewed models of specific signaling networks) that can be used in the construction of MDTs, ranging from intracellular to physiological scales. Existing repositories include, e.g., Biomodels [23], Cell Collective [24] and GinSim [25]. These can be built upon for a more comprehensive curated collection. 2. The most important criterion for models underlying MDTs is their credibility. Much effort has gone into developing criteria and rules for model credibility [26][27][28]. For models that are used in the treatment of patients, the set of rules will need to be modified. The final judgment of whether a model is credible lies with the clinician who uses the MDT as a decision support tool. This requires a higher standard than for models used to discover new biology or even for models used in drug development, where the clinical trial is the final arbiter. 3. Existing techniques for the validation, calibration, and analysis of computational models, most importantly sensitivity and identifiability of model parameters, are not always directly applicable to stochastic multiscale hybrid models or can be computationally expensive to apply. Research is needed to develop appropriate model analysis techniques for MDTs. 4. For many applications, MDTs will be used to forecast the future health trajectory of a patient, as well as the effect of available interventions to change it. Existing approaches to forecasting and data assimilation, such as methods based on Kalman filters used in numerical weather prediction, have several limitations when applied to high-dimensional hybrid models. Existing control and optimization methods (_e.g._, [29]) mostly apply only to ordinary differential equation models. Research is needed to develop novel forecasting and control approaches suitable for complex MDTs. 5. There are many existing models of disease processes and immune system function that can be used to build MDTs, as mentioned above. Research is needed to develop a platform for the modular construction of complex MDT models from component models. Such a platform is essential for achieving the long-term vision of a virtual patient. A possible approach includes [30]. 6. For applications, ensuring that MDT simulations are conducted within a clinically relevant time frame is crucial. This often necessitates the use of high-performance computing resources. Additionally, it may require the development of approximate models that, while offering rapid simulation capabilities, still maintain a high level of accuracy. 7. If MDTs are to be used in patient care, they will need to be accessible by clinicians and patients through appropriate user interfaces. The user will also need to be able to evaluate the trustworthiness of MDT forecasts and recommendations.

Figure 2: The computational model at the heart of an MDT serves several purposes. It integrates human biology, clinical data, data characterizing reference populations, and patient-specific data. It is personalized to the patient and is periodically re-calibrated. Control algorithms attached to the model can be used to optimize available patient treatments.

## Examples of ongoing MDT Projects The wide range of MDT projects and application areas would require a comprehensive review to cover fully.
Here, we present a selection of ongoing projects by some of the Forum participants. The selection illustrates several disparate types of applications, methodologies, and uses. They are at different stages of development and collectively illustrate the issues we have raised in this meeting report/perspective. A summary of the projects presented at the Forum can be seen in Table 1. Generally, projects can be characterized as follows:

1. Whether the underlying computational model/specification is generated using an existing modeling toolkit/format (which would allow for potentially greater community-level expansion) or a "custom" model specific to a particular research laboratory.

2. The disease process addressed by the nascent MDT project.

3. The data types and sources that are available for the data interface between the patient and the digital twin. This ranges from demographic and clinical descriptive data, as found in electronic medical records, to the results of diagnostic imaging and tests, to more specific assays that are currently mostly available in the research context (_i.e._, gene expression, multiplexed mediator assays, or highly granular cell type characterization).

4. Whether such a data interface currently exists for the nascent MDT at its current level of development.

5. The modeling method used for the current computational model/specification of the MDT. This includes whether a mechanism-based dynamic model is used, whether a machine learning/artificial intelligence component is part of that dynamic model, or whether the specification is in its early development stages.

6. The approach by which the MDT computational model/specification is personalized (_i.e._, the "twinning" process) to an individual patient in the real world. A precursor to the actual personalization would be the generation of virtual populations, which represent a theoretical distribution of real-world individuals but have not yet reached a point of development where there can be a direct mapping/connection to an individual patient in the real world.

7. The ostensible clinical goal of the MDT. This could range from diagnosis/surveillance, prognosis/disease trajectory forecasting, optimization and personalization of existing therapies, or the discovery of novel therapies, be they new therapeutic agents, new combinations of existing drugs, or the repurposing of existing drugs into new disease contexts.

8. Whether the MDT project has a patient-facing/engaging interface. This step informs whether, based on the context of its use, such a patient-engagement capability would increase the willingness of potential patients to participate in the MDT project, and helps establish a context for dealing with ethical issues such as patient privacy, data ownership/stewardship, and participatory medical decision-making.

### A host model for tuberculosis that spans the molecular to the whole host scale (D. Kirschner)

Tuberculosis (TB) continues to be a global disease threat, even compared to the COVID pandemic. Approximately one-fourth of the world's population is infected with _Mycobacterium tuberculosis_ (Mtb) in their lungs; however, most are classified as having latent tuberculosis (~90%), with only a small percentage having clinically active disease (~10%) (WHO). While patients are categorized into what seem to be binary states, recent work has shown that TB manifests as a spectrum of outcomes within both humans and non-human primates (NHPs) [31][32][33][34].
Importantly, latently-infected individuals may undergo reactivation events and thus serve as a potential reservoir for transmission [35][36]. Much remains unknown about the biology that drives disease states in pulmonary TB. Understanding what drives different infection outcomes is important, as it will inform the development of approaches for treatment and prevention. The hallmark of TB is the formation of lung granulomas, which are organized immune structures that immunologically constrain and physically contain Mtb. These develop in the lungs of infected hosts after inhalation of mycobacteria [31]. NHP data have shown that a single mycobacterium is sufficient to begin the formation of a granuloma and that each granuloma has a unique trajectory. Granulomas are composed of bacteria and various immune cells, such as macrophages and T cells (primarily CD4+ and CD8+ T cells, although other unconventional T cell phenotypes are also present). T cells have well-known critical functions against Mtb [37], but unlike in other infections, T cells are slow to be recruited to the lungs, arriving approximately a month after infection. Lung-draining lymph nodes (LN) serve as the sites for initiating and generating an adaptive immune response against most infections, including Mtb. We developed a novel whole-host scale modeling framework that captures key elements of the immune response to Mtb within three physiological compartments: the LNs, blood, and lungs of infected individuals. Together, this model platform, called _HostSim_ [38][39], represents a whole-host framework for tracking Mtb infection dynamics within a single host across multiple length scales and long time scales (days to months to years). We calibrated and validated the model using multiple datasets from published NHP studies and humans. _HostSim_ offers a computational tool that can be used in concert with experimental approaches to understand and predict events about various aspects of TB disease and therapeutics. Recently, we have generated hundreds to thousands of _HostSim_ "virtual patients" that are infected with TB at different times and have slightly unique immune characteristics. We refer to this collection of virtual hosts as a "virtual cohort". This virtual cohort can serve as a bank of digital "partners" that can be closely associated with an actual patient. Initially, a large group of partners (_i.e._, a 'digital family') would be assigned to that patient. Then, as more data become available, the family of partners associated with this patient would narrow until a single digital twin remains.

### Virtual Patient Cohorts for Virus Infections (A.M. Smith)

Respiratory viruses cause a significant number of illnesses and deaths each year, with considerable health and economic burden. Infections with viruses like influenza or SARS-CoV-2 yield a variety of outcomes that range from asymptomatic to fatal. Numerous viral and host factors, in addition to complications from other pathogens and underlying diseases, can result in heterogeneity in the severity of infection, but their contributions, and those of other, hidden mechanisms, are unknown. This makes predicting a patient's disease trajectory and the potential for efficacious vaccination or antiviral therapy challenging. The goal of this project is to build virtual patient cohorts (VPCs), with each patient having a personalized immune trajectory [40], to define immunologic processes that initiate diverse outcomes.
We construct mechanistic and experimentally validated computational models of the host response and define immune correlates of disease. A focus is on establishing the nonlinearities that drive many immune processes and their connections to disease [41][42]. Within this approach, models are iteratively updated with new data as immunological knowledge evolves and as smaller models are validated with targeted experimentation, alongside the generation of diverse VPCs to evaluate underlying comorbidities.

### The Digital Twin Innovation Hub (T. Helikar)

The Digital Twin Innovation Hub [43], established in August 2022, is leading the development of a general purpose immune digital twin that will be contextualizable and applicable to many, and eventually any, immune-related pathology. A comprehensive cellular-level model and map of the immune system, consisting of nearly 30 cell types and over 30 cytokines and immunoglobulins, spanning both innate and adaptive immunity, has been developed to form a "blueprint" of the general purpose immune digital twin [44]. Detailed sub-cellular models of signal transduction and genome-scale metabolism for each of the 30 cell types have also been developed (e.g., dendritic cells and CD4+ T cells) [45][46]. Work to integrate these sub-cellular models into a comprehensive multi-scale, multicellular model of the immune system is under way. The Digital Twin Innovation Hub is also developing a software infrastructure to enable the construction, contextualization, personalization, analysis, and simulation of the general purpose immune digital twin. To accomplish this, the Hub is leveraging and building atop Cell Collective, a web-based collaborative modeling platform [24]. To this end, Cell Collective supports several modeling approaches, including logical, kinetic, and constraint-based models, and will soon also support physiologically-based pharmacokinetic/pharmacodynamic models and virtual clinical trials. Cell Collective also provides a repository of computational models, which will provide a gateway to features that will enable their integration into multiscale systems: medical digital twins. A key principle of Cell Collective is its broad accessibility. To fully leverage the potential of medical digital twins, it will be critical that the technology is accessible to a wide range of user audiences, including translational researchers, clinicians, and patients. As such, in Cell Collective, no mathematical or programming skills are required for users to build, modify, simulate, or analyze models. It also allows users to focus on the mechanistic information used to build and simulate the models rather than dealing with the technicality of the formalisms used to build and modify the models.

### C-IMMSIM, a generic immune system simulation platform (F. Castiglione)

The computer model C-IMMSIM can be seen as the outcome of a collaborative effort between a biologist, who provides insights into mechanisms and actions, and a mathematician, who translates that knowledge into a quantitative framework [47]. Developing an accurate computer model that represents the complexity of the immune system and produces meaningful outcomes is a challenging task. However, by accepting necessary approximations and building upon solid theoretical mathematical and biological assumptions, along with personalized data to infer the model parameters [28], the C-IMMSIM model can be considered an underlying generic model of an individual's immune digital twin.
The essential components and prerequisites that have influenced the development of C-IMMSIM are: diversity in specific repertoires; probabilistic actions capturing the inherent stochasticity of many mechanisms; cooperation between different cell types; cell movement and global control; specific cell-cell and cell-molecule interactions; competition; clonal selection and proliferation; and memory cells and controls. All these elements have been incorporated into the C-IMMSIM model using specific mathematical or algorithmic choices. The model can be categorized as an Agent-Based Model (ABM), where individual cells are represented with their unique attributes, such as position, age, membrane receptors, activation status, or differentiation state. ABMs are well-suited for simulating the immune system due to their ability to handle stochastic actions, cell movement, and individual dynamics, while allowing large populations to be simulated and tracked. During the simulation, cells undergo transitions between activation or differentiation states, influenced by stochastic events that rely on the compatibility of their binding sites. While simulating billions of agents and incorporating anatomical variations and an individual's immunological history remains impractical, even with high-performance computers, the overall state of the system in the simulation can still be considered a representative immunological state for an individual. In essence, by adopting a digital twin perspective, the model can be tailored to match a patient's physical attributes and current health condition. Consequently, it can offer valuable insights into an individual's immune status and potential outcomes when encountering specific stimuli.

### Toward a medical digital twin for pneumonia patients in the Intensive Care Unit (R. Laubenbacher, B. Mehrad)

Doctors in intensive care units (ICUs) make decisions in a complex environment, bombarded with thousands of pieces of data, often under intense time pressure, and in the face of heterogeneous patient responses to treatment. Available ICU risk calculators provide highly accurate predictions of a patient's length of stay and likelihood of death, but do not provide actionable information about what interventions could be applied to an individual patient to improve the outcome. A common condition of ICU patients is pneumonia. It is the second most common cause of hospital admissions (after admissions for childbirth), with up to 10% of pneumonia patients requiring an ICU stay; in addition, up to 5% of hospital patients contract pneumonia. It is the leading cause of death worldwide for children under 5. The goal of this project is to build a pneumonia digital twin for ICU patients that serves as a decision support tool for the doctor: a personalized computational model that encodes disease-relevant biological mechanisms and is dynamically recalibrated as new patient data become available. The computational model underlying the pneumonia MDT will be an extension and modification of a model of the early immune response to a respiratory fungal infection, using the fungus _Aspergillus fumigatus_ as the model pathogen [48]. Ongoing work includes a study of the early immune response to viral and bacterial pathogens. These studies will be used to expand the computational model for fungal pneumonia, covering all major pathogens causing pneumonia.
A tissue culture platform, combined with a cryopreservation technique that keeps human lung tissue functional over several days, is being used with lung tissue obtained from surgeries. Collecting heterogeneous data from infected tissue from a range of donors allows us to "personalize" the computational model to different donors and investigate heterogeneity in disease progression and response to drugs. This represents the next step in developing the computational model to a state where it can be personalized to actual patients. Future work includes integrating this tissue/organ-scale model with a physiological model that allows implementation of all standard treatments available to a pneumonia patient, allowing the comprehensive simulation of patient trajectories under treatment. The final product will be an MDT that is based on a mechanistic computational model, is calibrated dynamically to a pneumonia patient in the ICU, and can be used to optimize the patient's treatment.

### A breast cancer digital twin (T. Yankeelov)

Yankeelov and colleagues have developed mechanism-based mathematical models that are initialized and calibrated with patient-specific, quantitative imaging data for a variety of cancers, especially the breast. The imaging data have included both quantitative positron emission tomography [49] and magnetic resonance imaging (MRI) [50], with a particular emphasis on dynamic contrast enhanced MRI to report on blood flow, and diffusion weighted MRI to report on cellularity. (Using medical imaging data has the advantage of being able to report on anatomical, physiological, cellular, and molecular data non-invasively and at multiple time points to update a digital twin throughout the course of therapy [51].) Given a high-resolution anatomical image to establish the computational domain, reaction-diffusion equations accounting for tissue mechanical properties and therapeutic regimens are solved over the breast to establish patient-specific parameters related to tumor cell migration, tumor proliferation, and response to therapy. Once the model system is calibrated, it can be run forward in time to predict the spatio-temporal response of the tumor to the specific treatment with high accuracy [52]. Given that the model can faithfully predict the spatial and temporal dynamics of an individual tumor, it is natural to use it to form the backbone of an MDT designed to predict and, ultimately, identify therapeutic regimens to optimize the tumor response. In fact, preliminary simulation results indicate that merely delivering the same total dose in a patient-specific way can potentially improve outcomes [53]. With the recent inclusion of pembrolizumab into the standard-of-care for the treatment of triple negative breast cancer, the ability to simulate which patients would benefit from immunotherapy and which patients should avoid it (and the associated side effects) is difficult to overstate. It is important to note that only by employing mechanism-based models can one simulate a range of therapeutic options, including new and emerging therapeutics, without large clinical trials to use as training data. When using a strictly data-driven approach, one can only search for responses to therapeutic regimens that are included in the training set. By using a mechanism-based model, one is not limited to only the therapeutic regimens included in a historical training set.
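To make the model class concrete, the following is a minimal sketch of a logistic reaction-diffusion forward model in one spatial dimension, the family of equations referenced above; the grid, parameter values, and variable names are illustrative assumptions, not the calibrated, mechanically coupled 3-D model used in the actual studies.

```python
import numpy as np

# Minimal 1-D sketch of a logistic reaction-diffusion tumor model:
#   dN/dt = D * d^2N/dx^2 + k * N * (1 - N/theta)
# All parameter values are illustrative, not patient-calibrated.

D = 0.05       # cell diffusivity (mm^2/day), assumed
k = 0.10       # net proliferation rate (1/day), assumed
theta = 1.0    # carrying capacity (normalized cell density), assumed

nx, dx = 200, 0.1                  # 20 mm domain
dt = 0.4 * dx**2 / (2 * D)         # below the explicit-scheme stability bound
x = np.arange(nx) * dx

# Initial condition: a small localized tumor seed at the domain center.
N = 0.5 * np.exp(-((x - 10.0) ** 2) / 0.5)

def step(N):
    # Second-order central difference; crude zero-flux boundary treatment.
    lap = np.empty_like(N)
    lap[1:-1] = (N[2:] - 2 * N[1:-1] + N[:-2]) / dx**2
    lap[0], lap[-1] = lap[1], lap[-2]
    return N + dt * (D * lap + k * N * (1 - N / theta))

for _ in range(int(90 / dt)):      # simulate ~90 days of growth
    N = step(N)

print(f"Integrated tumor burden after 90 days: {N.sum() * dx:.2f}")
```

Calibrating \(D\) and \(k\) voxel-by-voxel against serial imaging, and then running the calibrated model forward, is what turns a generic model of this kind into a patient-specific prediction.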
### A leukemia digital twin (I. Shmulevich)

The Acute Myeloid Leukemia Digital Twin (AML-DT) project is an initiative funded by the National Cancer Institute (NCI) and the Academy of Finland. It aims to develop a comprehensive digital twin system for AML. This project is characterized by its unique approach that combines disease-specific knowledge graphs instantiated with patient data, machine learning, and mechanistic models. These include gene regulatory network models and multicellular models of hematopoiesis and leukemogenesis, which are designed to incorporate key mechanisms of cancer progression. The overarching goal of this project is to predict disease progression and optimize response to therapies, thereby revolutionizing the way we understand and treat AML. The project is a collaborative effort, bringing together diverse fields such as modeling, machine learning, human-computer interaction, and clinical practice. The development of the AML digital twin necessitates a variety of patient data, including clinical data, flow cytometry measurements, cytogenetics, and mutation panels. These data are utilized to individualize each digital twin, creating a personalized representation of the patient's disease state. Alongside patient-specific data, the project also incorporates public datasets for the construction of knowledge graphs. These datasets include _ex vivo_ drug sensitivity data and molecular profiling, both linked with clinical outcomes. The integration of individual patient data and public datasets enhances the digital twin's ability to predict disease progression and drug response, which is the primary objective of this project. The key aspects of the immune system relevant for AML are captured by the digital twin through the integration of detailed domain-specific knowledge graphs with multiscale dynamical models of the tumor microenvironment. These models incorporate key mechanisms of cancer progression, which can aid in the development of new therapies. The digital twin approach goes beyond being just a model. Each AML patient will have a digital twin individually tailored using information produced in a clinical laboratory. This is combined with a model-based approach for making personalized predictions. An important aspect of this approach is the learning cycle, where patient outcomes are continuously utilized to improve predictions. Over time, the system will improve as discoveries are made related to the biological aspects that are most important for accurate prediction of patient outcomes. This approach allows for a dynamic and evolving representation of the patient's disease state, providing a more accurate and personalized prediction of disease progression and treatment response.
| Presenter | Modeling framework | Disease / application | Data types |
| --- | --- | --- | --- |
| Yankeelov | Custom | Cancer (breast) | Multimodal imaging & clinical |
| Shmulevich | – | AML | Clinical & epidemiological |
| Kirschner | Custom | TB | Molecular, imaging & clinical |
| Smith | Custom | Immune system (viral infection) | Molecular & clinical |
| Helikar | Custom | Immune system | Molecular & clinical |
| Ziemssen | Custom | MS | Molecular, imaging & clinical |
| Castiglione | C-IMMSIM | Immune system | Molecular & clinical |
| Macklin | PhysiCell | General purpose | Molecular, cellular & tissue |
| Glazier | CompuCell3D | General purpose | Molecular |
| Jett-Tilton | – | PTSD | Genomic & clinical |
| Laubenbacher, Mehrad | Custom | Aspergillosis (pneumonia) | Molecular & clinical |

Table 1: MDT projects in different stages of progress by Forum participants. The columns contain information about the technical features of the MDT, the type of data required for calibration, and the application specifications.

## Conclusion

The data from an individual patient capture different aspects of their characteristics and health status. We have genomic data; gene expression measurements; protein and metabolite concentrations in different tissues under different conditions; imaging data of everything from immune cells in lymph nodes to functional MRI data in the brain; electronic health records; and lifestyle and behavioral data. They all provide information about some aspect of a person, and the challenge is to integrate them in a meaningful way to provide a holistic representation. A computational model of the patient that is dynamically updated with all this information is a natural, and maybe the only, way to accomplish the data integration required. The confluence of several simultaneous developments has created an environment in which this promise of personalized medicine is taking shape: vastly increased availability of data, from the molecular to the population scale, leading to a deeper understanding of human biology and its role in health and disease, and, finally, an expansion of our computational and modeling tools. A well-designed funding program for MDT research by the public sector is crucial if substantial progress is to be made over the next decade. Public sector agencies like the National Institutes of Health and the National Science Foundation, as well as the U.S. Department of Defense, can play a crucial role in creating an interdisciplinary research ecosystem that brings together the needed expertise. New funding paradigms should be considered for this purpose. For instance, the structure of the Horizon research programs funded by the European Union is well-suited to MDT projects that rely on the development of a collection of parts that together assemble into an MDT, but do not necessarily have a research rationale of their own.
There can also be an important role for the business community and philanthropic organizations in providing funding for this effort and collaborating on the myriad research problems that will need to be tackled and solved. The Forum we are reporting on here is intended to support a dialog around this topic. Collectively, a community is emerging around this effort that can, with the right resources, help make rapid progress on bringing MDTs to patients at a large scale.

## Acknowledgements

RL: U.S. Army ACC-APG-RTP W911NF, NIH 1 R01 HL169974-01, U.S. DoD DARPA HR00112220038, NIH 1 R01 AI135128-01
FA: None
GA: NIH U01EB025825, Department of Defense CDMRP W81XWH-22-MBRP-CTRA, DARPA HR00111950027
FC: None
SE: None
LF: U.S. DoD DARPA HR00112220038
TH: NIH Grant #R35GM119770, University of Nebraska-Lincoln Grand Challenges Catalyst Award
MJT: None
DK: NIH R01 AI50684
PM: NSF 1720625, Jayne Koskinas Ted Giovanis Foundation for Health and Policy, Leidos Biomedical Research contract no. 75N91019D00024
BMe: NIH 1 R01 HL169974-01, NIH 1 R01 AI135128-01
BMo: NIH R35HL144481, NIH DK124290
VP: None
IS: NIH NCI R01CA270210
AS: NIH R01 AI170115, NIH R01 AI139088
TY: NCI 1U01CA253540, NCI 1R01CA240589
TZ: None

The authors are grateful to Ms. Lisa Oppel, who was in charge of the Forum organization and all logistics.

## Competing Interests

None: RL, FA, GA, FC, SE, LF, JG, MJT, DK, PM, BMe, BMo, VP, IS, AS, TY, TZ
TH: majority stakeholder in ImmuNovus, Inc. and Discovery Collective, Inc.

## Author Contributions

RL secured funding and organized the Forum, coordinated, and contributed to writing the manuscript. All other authors contributed to the scientific content of the meeting and participated in preparing the manuscript.
2305.13097
High Sensitivity Observations of the Water Megamasers of NGC 1068: Precise Astrometry and Detailed Kinematics
We present High Sensitivity Array observation of the water megamasers of NGC 1068. We obtain absolute astrometry with 0.3 mas precision that confirms the association of the disk masers with the nuclear radio continuum source S1. The new observations reveal two new blueshifted groups of disk masers. We also detect the 22 GHz continuum on short interferometric baselines. The position-velocity diagram of the disk masers shows a curve consistent with a nonaxisymmetric distribution of maser spots. The curve is probably the result of spiral arms with a constant pitch angle of roughly 5 degrees. The disk kinematics are consistent with Keplerian rotation and low turbulent speeds. The inferred central mass is 17 million solar masses. On the basis of disk stability arguments, the mass of the molecular disk is roughly 110 thousand solar masses. The disk masers further resolve into filamentary structures suggesting an ordered magnetic field threading the maser disk. The magnetic field strengths must be greater than 1.6 mG to withstand turbulent motions in the partially ionized molecular gas. We note apparent asymmetries in the molecular disk that might be explained by anisotropic heating by a misaligned inner accretion disk. The new observations also detect the fainter jet masers north of the disk masers. The distribution and kinematics of the jet masers are consistent with an expanding ring of molecular gas.
Jack F. Gallimore, C. M. Violette Impellizzeri
2023-05-22T15:00:13Z
http://arxiv.org/abs/2305.13097v1
High Sensitivity Observations of the H\({}_{2}\)O Megamasers of NGC 1068: Precise Astrometry and Detailed Kinematics

Jack F. Gallimore and C. M. Violette Impellizzeri

###### Abstract

We present High Sensitivity Array observations of the H\({}_{2}\)O megamasers of NGC 1068. We obtain absolute astrometry with 0.3 mas precision that confirms the association of the disk masers with the nuclear radio continuum source S1. The new observations reveal two new blueshifted groups of disk masers. We also detect the 22 GHz continuum on short interferometric baselines. The position-velocity diagram of the disk masers shows a curve consistent with a nonaxisymmetric distribution of maser spots. This curve is probably the result of spiral arms with a constant pitch angle \(\sim 5^{\circ}\). The disk kinematics are consistent with Keplerian rotation and low turbulent speeds. The inferred central mass is \(17\times 10^{6}\)\(M_{\odot}\). On the basis of disk stability arguments, the mass of the molecular disk is \(\approx 110\times 10^{3}\)\(M_{\odot}\). The disk masers further resolve into filamentary structures suggesting an ordered magnetic field threading the maser disk. The magnetic field strengths must be \(\gtrsim 1.6\) mG to withstand turbulent motions in the partially ionized molecular gas. We note apparent asymmetries in the molecular disk that might be explained by anisotropic heating by a misaligned inner accretion disk. The new observations also detect the fainter jet masers north of the disk masers. The distribution and kinematics of the jet masers are consistent with an expanding ring of molecular gas.

## 1 Introduction

Extragalactic H\({}_{2}\)O megamasers are found to be associated with molecular accretion disks and nuclear outflows (H\({}_{2}\)O megamasers are reviewed in Lo, 2005). The most famous case is NGC 4258 (Herrnstein et al., 1999), in which H\({}_{2}\)O masers trace Keplerian rotation in a sub-pc scale, warped disk, and new megamaser sources have been discovered and studied in recent years (Kuo et al., 2011). H\({}_{2}\)O megamaser disks are of broader astrophysical importance because they provide tight constraints on the centrally concentrated masses of galaxies, presumably supermassive black holes (e.g., Gao et al., 2017), and, with the measurement of centripetal accelerations, they afford direct, geometrical measurement of distances to galaxies (e.g., Gao et al., 2016). NGC 1068 is the archetypal hidden type 2 Seyfert galaxy (Antonucci & Miller, 1985) and one of the first AGNs shown to harbor a circumnuclear H\({}_{2}\)O megamaser disk (Gallimore et al., 1996; Greenhill et al., 1996). Two different sources of H\({}_{2}\)O megamaser emission were identified in NGC 1068, one associated with the compact radio source "C," which appears to mark the interaction between the radio jet and a molecular cloud; and the other with the nuclear radio source "S1" (Gallimore et al., 1996, 2001). In the commonly accepted interpretation, the S1 megamasers trace a nearly edge-on, geometrically thin annular disk surrounding the central engine. This interpretation is partly inspired by their resemblance to the H\({}_{2}\)O megamasers of NGC 4258.
Primarily, the S1 megamasers show the classic triply-peaked spectrum expected from an edge-on rotating disk (or annulus) of molecular gas (Watson & Wallin, 1994; Gallimore et al., 1996, 2001), although the redshifted masers are consistently brighter in monitoring observations (Gallimore et al., 2001). For the purpose of discussion, we refer to the masers associated with radio component C as jet masers, and those with component S1 as disk masers. Using the Very Long Baseline Array (VLBA) augmented by the phased Very Large Array (VLA), Greenhill and Gwinn (1997) (GG97) presented the first VLBI observations of the disk masers that cover the full 800-1500 km s\({}^{-1}\) velocity range of the maser spectrum. Discussed in further detail in Section 2, the disk maser spots roughly align along PA \(-50^{\circ}\) and span about 1.75 pc.1 Based on a preliminary model for the maser kinematics, GG97 estimated the systemic recessional velocity2 of the maser disk, \(V_{LSR}=1119\) km s\({}^{-1}\) (optical convention). The redshifted masers show decreasing recessional velocities with distance from the center of the maser spot distribution in the sky and outside the inferred inner radius of \(R_{in}\approx 0.6\) pc. The fainter, blueshifted masers are more tightly grouped on the sky and therefore do not sample the velocity gradient as well as the redshifted masers. Recently, Morishima et al. (2022) presented global VLBI observations made in February 2000. Their results largely confirm those of GG97, although they claim the detection of faint disk maser spots displaced northeast and southwest of the molecular disk, i.e., along the outflow axis. Footnote 1: The distance to NGC 1068 is \(13.97\pm 2.1\) Mpc (Anand et al., 2021). For convenience and consistency with previous papers, we adopt the scale 1′′= 70 pc, appropriate for a distance of 14.4 Mpc. Footnote 2: To avoid confusion between the disk-frame and sky-frame velocities, we denote motion relative to the observer as recessional velocity, and radial velocities refer to radial motions in the disk-frame. The conventional interpretation for the position-velocity diagram has been that the high-velocity maser spots trace molecular clumps that are viewed along sight lines nearly tangential to their orbits. In other words, their observed recessional velocities trace the actual rotational velocities relatively unaffected by projection, so the declining velocities are a direct measure of the disk rotation curve. With this assumption, Greenhill et al. (1996) reported a sub-Keplerian3 rotation curve, \(v\propto r^{-0.31}\), where they assumed the major axis lies along position angle \(-90^{\circ}\). Lodato and Bertin (2003), analyzing the data of GG97, find that the rotation curve varies from \(v\propto r^{-0.35}\) at the inner edge to \(v\propto r^{-0.30}\) at the outer edge with the major axis assumed to lie along position angle \(-45^{\circ}\). Performing a similar analysis, Morishima et al. (2022) argue for a flatter rotation curve, \(v\propto r^{-0.24}\). Footnote 3: Here, sub-Keplerian means having rotation curves falling more slowly than \(r^{-0.5}\), which translates to rotational velocities greater than the expected Keplerian velocity with increasing radius. In other words, sub-Keplerian means a rotation curve that is flatter than Keplerian. Kumar (1999), Hure (2002), and Lodato and Bertin (2003) took the apparent sub-Keplerian rotation curve as evidence of disk self-gravity. 
Kumar argued that a standard \(\alpha\)-disk analysis requires the disk to have a mass of nearly \(10^{8}\) M\({}_{\odot}\), greatly exceeding the inferred black hole mass. Hure (2002) inverted the rotation curve to infer the disk mass; in this analysis, the disk mass is about 75% of the central black hole mass. Lodato and Bertin (2003) presented a model of a self-gravitating disk that self-regulates against the Jeans instability to explain the sub-Keplerian rotation curve. In their model, the central black hole mass and the accretion disk mass roughly balance with \(M_{bh}\approx M_{disk}\approx 8\times 10^{6}\) M\({}_{\odot}\). Unlike the rotation curve model for the disk masers, recent ALMA observations of HCN kinematics (\(J=3\to 2\)) show a tangential velocity curve that is consistent with Keplerian (counter) rotation on the radial scales 1.4 to 7 pc from the kinematic center, just outside the maser disk (Impellizzeri et al., 2019). Further to the point, the HCN tangential velocity curve is not consistent with the proposed flatter rotation curve of the H\({}_{2}\)O maser disk; the extrapolated rotation speeds exceed the HCN tangential velocities by up to 60%. To reconcile the difference, it seems that one or the other, the redshifted H\({}_{2}\)O masers or the HCN tangential velocity curve, does not directly reflect the rotation curve. We have observed the H\({}_{2}\)O megamasers of NGC 1068 using the High Sensitivity Array, a combination of the Very Long Baseline Array (VLBA), the phased Karl G. Jansky Very Large Array (VLA), and the Robert C. Byrd Green Bank Telescope (GBT). Our main goal was to try to recover fainter disk maser spots in hopes of better constraining the subparsec rotation curve, but we were also able to determine absolute astrometry of the H\({}_{2}\)O maser spots and recover low-surface-brightness radio continuum emission. We also detected jet masers associated with the radio continuum component C. Section 2 describes the observations and astrometric analysis. Section 3 presents the primary results, including recovery of the 22 GHz continuum and the distribution and kinematics of the H\({}_{2}\)O maser spots. We consider three different kinematic models to explain the peculiarities of the position-velocity diagram; the details are provided in Section 4. The radically improved astrometry of the maser spot positions relative to the nuclear radio continuum source affects the interpretation of infrared observations of the nuclear obscuring region; the implications of the improved astrometry are discussed in Sec. 5. In Section 6, we consider how magnetic fields may affect the kinematics and structures of the H\({}_{2}\)O maser disk. The kinematics of the jet masers is discussed in Section 7. The primary conclusions are summarized in Section 8. ## 2 Observations, Calibration, and Data Reduction We observed NGC 1068 with the HSA on 8-9 February 2020 (BG262D) and again on 21-22 March 2020 (BG262J). Each observation, including the calibrators and overhead, spanned 6 hours. For both observations, the receivers were tuned to the H\({}_{2}\)O maser transition, \(\nu_{0}=22235.080\) MHz, and adjusted to \(V_{R}=1150\) km s\({}^{-1}\) recessional velocity (LSRK, optical convention). The HSA maintains fixed frequencies in the topocentric frame during the course of the observation, so the receivers were tuned to this recessional velocity at the midpoint of the observation. 
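As a check on the tuning arithmetic, the short sketch below converts the adopted recessional velocity to a sky frequency under the optical convention quoted above; it is an illustrative calculation only and omits the topocentric corrections the array actually applies.

```python
# Sky frequency for a line at rest frequency nu0, observed from gas
# receding at velocity v, using the optical velocity convention:
#   v/c = (nu0 - nu) / nu  =>  nu = nu0 / (1 + v/c)
# Illustrative check only; real tuning also tracks station motion.

C_KMS = 299792.458       # speed of light (km/s)
NU0_MHZ = 22235.080      # H2O maser rest frequency (MHz)

def sky_frequency_mhz(v_kms: float) -> float:
    return NU0_MHZ / (1.0 + v_kms / C_KMS)

print(f"{sky_frequency_mhz(1150.0):.1f} MHz")  # ~22150.1 MHz at V_R = 1150 km/s
```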
For reference, the systemic velocity of the host galaxy is \(v_{\rm sys}({\rm host})=1132\pm 5\) km s\({}^{-1}\) (Paturel et al., 2003), and the systemic velocity of the pc-scale molecular disk surrounding the maser ring is \(v_{\rm sys}({\rm disk})\simeq 1133\pm 3\) km s\({}^{-1}\) (Impellizzeri et al., 2019). We used a single IF band with bandwidth \(\Delta\nu=64\) MHz, corresponding to a radial velocity range \(\Delta V_{R}\approx 870\) km s\({}^{-1}\). The channel widths are determined at the time of correlation. For BG262D, the IF was divided into 1024 channels (0.85 km s\({}^{-1}\) channels), and for BG262J, 2048 channels (0.42 km s\({}^{-1}\) channels). Unfortunately, a software error affected the BG262D observations. Roughly one-third of the observing time with the phased VLA was lost, and only one polarization of the remaining observations was recovered. We also found that the data that included the Hancock antenna were not usable. As a result, we focus our analysis on the results of BG262J, although we also processed the BG262D data and used them to check astrometric uncertainties. The observations consisted of scans of fringe calibrators (3C454.3 and 3C84, which are also used as bandpass calibrators), and alternating scans of NGC 1068 and the phase reference calibrator J0239\(-\)02, located 2\(\fdg\)7 away. The cadence was typically 9 minutes on the source and 3 minutes on the phase reference. However, this cadence was periodically interrupted to allow calibration of the GBT (roughly once per hour), phasing of the VLA (up to four times per hour), and pointing calibration of the VLA (twice per hour). While the VLA or GBT performed calibrations, the remaining antennas continued to observe the source and phase reference. Phased VLA observations were also interrupted for a 5-minute scan of 3C48 to establish the flux scale. Data reduction and calibration followed standard procedures in AIPS (Wells, 1985; Greisen, 1990, 2003). First, the data were corrected for terrestrial effects, including ionospheric Faraday rotation, dispersive delay, and updated Earth Orientation Parameters. The data were then corrected for digital sampling artifacts, and a bandpass calibration was generated based on the observations of 3C454.3 and 3C84. After these frequency-dependent calibrations were applied, the observations were shifted in frequency to the LSRK reference frame. Next, we performed fringe fitting on J0239\(-\)02 to determine the initial calibration of the phase rates and delays. After this calibration, the masers were apparent on the cross-power spectrum (a spectrum derived from a time average of visibilities), and the brightest masers were found at recessional velocities between \(V_{R}=1409.5\) km s\({}^{-1}\) and 1415.4 km s\({}^{-1}\). We determined a phase-only self-calibration based on the brightest maser spots identified in these channels and applied the correction back to the phase reference, J0239\(-\)02.

### Astrometric Calibration

The data were phase-referenced to the brightest maser spots, and, prior to this work, the absolute astrometry of these spots was accurate to about 5 mas (Gallimore et al., 2001). This phase calibration introduces an offset to the position of the phase reference relative to the coordinates of the pointing center. The inverse of this offset corrects the positions of the maser spots with respect to the astrometric frame. To this end, we made images of the phase reference source and measured the sky offset from the pointing center.
The images of J0239\(-\)02 are presented in Figure 1. For reference, the astrometric precision of J0239\(-\)02 is reported as 0.03 mas in the VLBA Calibrator Catalog. In the BG262J observations, J0239\(-\)02 was offset by \(\Delta({\rm RA})=+7.03\) mas, \(\Delta({\rm Dec})=-7.57\) mas. We repeated the measurement for the BG262D observations, and we measured sky offsets \(\Delta({\rm RA})=+6.73\) mas, \(\Delta({\rm Dec})=-7.88\) mas. Due to the data loss that affected the BG262D observations, we adopted the corrections derived from the BG262J observations but used the BG262D offsets to estimate the systematic uncertainties of the phase calibration transfer between the source and phase reference. The estimated systematic uncertainties of this new absolute astrometric calibration are 0.4 mas in RA and 0.3 mas in Dec. For astrometric experiments, the characteristic uncertainty is \(\sim 0.05\) mas, but poor weather and data loss can degrade the uncertainty to \(\sim 0.2\) mas (Pradel et al., 2006). Furthermore, phase referencing was not a primary goal of the BG262 observations, and the observing cadence was not designed for the highest possible astrometric accuracy. Nevertheless, the astrometric uncertainties are an order of magnitude improvement compared to previous work. The uncertainties are small compared to the distribution of masers, approximately 30 mas in extent, and the size of the continuum source, roughly 10 mas. The centroid position of the 22 GHz continuum source (Section 3.1) agrees to within 0.4 mas of the 5 GHz VLBA continuum position, comparable to the statistical uncertainty of the centroid position (Gallimore et al., 2004). For purposes of comparing the continuum morphology and maser spot distribution, the new 22 GHz continuum image was produced using the same data cube as the maser spot measurements, and signal-to-noise ratio, rather than systematic calibration effects, primarily limits the relative astrometric measurements between the continuum and maser spots. For consistency with previous publications, the absolute positions of the NGC 1068 data have been referenced as offsets relative to the VLBA 5 GHz centroid of radio continuum source S1, RA(J2000) = 02h 42m 40\(\fs\)70905, Dec(J2000) = \(-00^{\circ}\) 00\({}^{\prime}\) 47\(\farcs\)945 (Gallimore et al., 2004).

### Maser Spot Measurements

Final processing and imaging were performed using DIFMAP (Shepherd, 1997). Additional self-calibration of the maser data cubes was performed using the bright maser spot at \(V_{R}\) = 1414.3 km s\({}^{-1}\) as a reference. We produced naturally weighted images of each spectral channel. The restoring beam is \(1.12\times 0.36\) mas, PA \(-9\fdg 9\). The characteristic background rms is 1.3 mJy beam\({}^{-1}\) in a single channel; masers as faint as about 5 mJy beam\({}^{-1}\) could be detected. Based on the background rms of the channel with the brightest maser emission, the dynamic range is about 60. As a result, in channels containing the brightest masers, the limiting point source sensitivity is about 20 mJy beam\({}^{-1}\). We identified the channels that show clear evidence of compact maser emission within a region \(82\times 82\) mas centered on the position of the continuum source S1. Similarly, we searched for maser spots in the region of the continuum source C, located roughly 60 mas east and 290 mas north of S1. The position and brightness of each detected maser spot were determined by a two-dimensional Gaussian fit to the visibility data (DIFMAP task modelfit).
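To make the measurement step concrete, the following is a minimal sketch of fitting a single point-source component directly to complex visibilities by least squares; the synthetic baselines, noise level, and variable names are assumptions for illustration and do not reproduce the full elliptical-Gaussian machinery of DIFMAP's modelfit.

```python
import numpy as np
from scipy.optimize import least_squares

# A point source of flux S (Jy), offset (dx, dy) radians from the phase
# center, has model visibility V(u, v) = S * exp(-2*pi*i*(u*dx + v*dy)).
# Synthetic data below stand in for calibrated interferometer visibilities.
rng = np.random.default_rng(7)
u = rng.uniform(-4e8, 4e8, 600)   # baseline coordinates in wavelengths (assumed)
v = rng.uniform(-4e8, 4e8, 600)

MAS = np.pi / 180.0 / 3600.0 / 1000.0   # 1 mas in radians

def model(p):
    S, dx, dy = p
    return S * np.exp(-2j * np.pi * (u * dx + v * dy))

truth = np.array([0.030, 0.35 * MAS, -0.20 * MAS])   # assumed spot parameters
noise = 0.002 * (rng.normal(size=u.size) + 1j * rng.normal(size=u.size))
vis = model(truth) + noise

def residuals(p):
    r = vis - model(p)
    return np.concatenate([r.real, r.imag])   # real-valued residual vector

fit = least_squares(residuals, x0=[0.02, 0.0, 0.0])
S, dx, dy = fit.x
print(f"S = {S * 1e3:.1f} mJy, offset = ({dx / MAS:+.3f}, {dy / MAS:+.3f}) mas")
```

Fitting in the visibility domain avoids deconvolution artifacts, which is one reason component fitting is preferred over image-plane centroiding for fields with strong sidelobes.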
To illustrate, Figure 2 shows the "dirty" and restored channel maps of three disk maser sources. In some channels, two or more maser spots were present, and so multiple Gaussians were included in the model. We specifically avoided adding model components to regions roughly 20 mas north or south of a bright maser spot to avoid sidelobe artifacts. The best-fit photocenters were then shifted to the absolute coordinate frame. The results are provided in Tables 1 and 2. The sky map of the disk maser spots is shown in Figure 3, and the jet maser spots are shown in Figure 4. The disk maser spots follow the pattern found in GG97, with the redshifted masers located northwest of the systemic masers and the blueshifted masers to the southeast. Our astrometric data confirm the results of Gallimore et al. (2001): the S1 masers extend across the resolved radio continuum source, and the near-systemic masers pass within 1 mas of the radio continuum centroid. Even though the new HSA observations are about 2.5 times more sensitive, we do not find evidence for disk masers along the jet / outflow axis as reported by Morishima et al. (2022). However, those reported outflow masers would appear at positions adversely affected by sidelobe residuals (cf. Figure 2) and so were avoided in our analysis.

Figure 1: Naturally-weighted 22 GHz continuum images of the phase reference source, J0239\(-\)02. The image derived from the BG262D observations is shown on the left, and BG262J is shown on the right. The coordinates are offsets relative to the VLBA Calibrator Catalog position, \(\alpha\)(J2000) = 02h 39m 45\(\fs\)472272, \(\delta\)(J2000) = \(-02^{\circ}\) 34\({}^{\prime}\) 00\(\farcs\)99146. The displacement from the nominal position results from initial phase calibration based on the brightest masers of NGC 1068, which we use to calibrate the absolute astrometry of the maser spots. As depicted in the colorbar, the contours are \(\pm\)2.3, 10, 45, and 200 mJy beam\({}^{-1}\). The BG262D image is noisier primarily due to a loss of time on the phased VLA and the Hancock antenna. The restoring beams are shown as filled blue ellipses.

The disk maser spots form distinct groups in space and radial velocity. For the purposes of discussion, we identified nine groups, R1-4 (redshifted maser groups), G1 (near systemic), and B1-4 (blueshifted); the group labels appear as annotations in Figure 3. Groups B1 and B4 were not detected in GG97. The masers with the highest recessional velocities are found in group R4, and the lowest velocity masers are located in groups B2 and B3. There is a nearly continuous distribution of spots between the velocity extremes of R4 and G1; we distinguish these groups only by a small gap in the sky distribution and recessional velocities. The jet masers are divided into four distinct spot groups oriented roughly east-west and spanning \(\sim 10\) mas; see Fig. 4. We label the spot groups C1 (west) through C4 (east). Interestingly, the jet masers appear to be significantly offset south of the local continuum peak. More specifically, the C2 maser group is located about 5.4 mas south of component C. In contrast to the nuclear groups, the jet maser groups show no significant filamentary substructure on the sky. Furthermore, the east-west distribution of the groups is very different from the pattern reported by Morishima et al. (2022); in their analysis, the jet masers are distributed in a ring with diameter \(\sim 20\) mas.
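For intuition about the angular scales quoted above, the trivial helper below applies the paper's adopted conversion of 1\({}^{\prime\prime}\) = 70 pc; the quoted angular sizes are taken from the text.

```python
# Convert angular sizes to physical sizes using the adopted scale of
# 1 arcsec = 70 pc (appropriate for a distance of 14.4 Mpc).

PC_PER_ARCSEC = 70.0

def mas_to_pc(theta_mas: float) -> float:
    return theta_mas * 1e-3 * PC_PER_ARCSEC

print(f"{mas_to_pc(30):.2f} pc")   # ~2.1 pc: extent of the disk maser distribution
print(f"{mas_to_pc(20):.2f} pc")   # ~1.4 pc: reported diameter of the jet maser ring
print(f"{mas_to_pc(5.4):.2f} pc")  # ~0.38 pc: offset of group C2 from component C
```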
## 3 Results and Analysis

### 22 GHz Continuum

The surface brightness of the 22 GHz continuum emission is too low to detect in individual spectral line channels. However, weak continuum emission appears on short baselines after averaging line-free channels. To produce a continuum image, we averaged the line-free channels (16.3 MHz total bandwidth) and applied a Gaussian taper with 50% weight at 30 M\(\lambda\) during Fourier inversion. The data were naturally weighted during inversion, and the resulting image was deconvolved using the clean task in DIFMAP. The resulting image, first published in Gamez Rosas et al. (2022), is shown in Figure 5.

Table 1: Disk Maser Spot Positions and Velocities

| Channel | \(V\)(LSRK) (km s\({}^{-1}\)) | Flux Density (mJy) | East Offset (mas) | North Offset (mas) | Group |
| --- | --- | --- | --- | --- | --- |
| 479 | 1381.63 | 22.8 \(\pm\) 1.8 | \(-\)8.853 \(\pm\) 0.015 | 13.379 \(\pm\) 0.064 | R1 |
| 480 | 1381.20 | 29.6 \(\pm\) 2.1 | \(-\)8.906 \(\pm\) 0.011 | 13.560 \(\pm\) 0.042 | R1 |
| 481 | 1380.78 | 22.8 \(\pm\) 1.8 | \(-\)8.873 \(\pm\) 0.015 | 13.323 \(\pm\) 0.068 | R1 |
| 482 | 1380.36 | 13.5 \(\pm\) 1.6 | \(-\)8.871 \(\pm\) 0.030 | 13.537 \(\pm\) 0.130 | R1 |
| 483 | 1379.93 | 7.8 \(\pm\) 1.0 | \(-\)8.900 \(\pm\) 0.021 | 13.629 \(\pm\) 0.060 | R1 |
| 437 | 1399.46 | 21.5 \(\pm\) 2.3 | \(-\)9.193 \(\pm\) 0.019 | 10.523 \(\pm\) 0.052 | R2a |

Note. – Table 1 is published in its entirety in machine-readable format. A formatted sample of the data is provided here. The offset positions are relative to the VLBA 5 GHz continuum position of S1, RA(J2000) = 02h 42m 40\(\fs\)70905, Dec(J2000) = \(-\)00\({}^{\circ}\) 00\({}^{\prime}\) 47\(\farcs\)945.

Table 2: Jet Maser Spot Positions and Velocities

| Channel | \(V\)(LSRK) (km s\({}^{-1}\)) | Flux Density (mJy) | East Offset (mas) | North Offset (mas) |
| --- | --- | --- | --- | --- |
| 1321 | 1024.44 | 7.2 \(\pm\) 0.8 | 65.86 \(\pm\) 0.02 | 280.99 \(\pm\) 0.08 |
| 1325 | 1022.74 | 5.9 \(\pm\) 0.8 | 65.88 \(\pm\) 0.03 | 281.20 \(\pm\) 0.10 |
| 1327 | 1021.90 | 6.8 \(\pm\) 0.7 | 65.86 \(\pm\) 0.03 | 281.03 \(\pm\) 0.09 |
| 1328 | 1021.47 | 5.8 \(\pm\) 0.7 | 65.82 \(\pm\) 0.03 | 281.00 \(\pm\) 0.10 |
| 1329 | 1021.05 | 5.6 \(\pm\) 0.7 | 65.81 \(\pm\) 0.03 | 281.20 \(\pm\) 0.10 |
| 1330 | 1020.62 | 6.7 \(\pm\) 0.8 | 65.84 \(\pm\) 0.03 | 281.11 \(\pm\) 0.09 |

Note. – Table 2 is published in its entirety in machine-readable format. A formatted sample of the data is provided here. The offset positions are relative to the VLBA 5 GHz continuum position of S1, RA(J2000) = 02h 42m 40\(\fs\)70905, Dec(J2000) = \(-\)00\({}^{\circ}\) 00\({}^{\prime}\) 47\(\farcs\)945.

Figure 2: Naturally-weighted channel maps of the disk masers. Each row is a separate channel; the recessional velocities of each channel are labeled in the left panel. The first column is the "dirty" map created by Fourier inversion, and the second column is the restored image produced by fitting Gaussian source models. The image stretch in mJy beam\({}^{-1}\) is shown as a colorbar inset in each figure. For the dirty maps, the stretch covers the entire range of flux densities in the map; for the restored images, the stretch is truncated at \(\pm 8\) mJy beam\({}^{-1}\), or roughly \(\pm 5\sigma\), to illustrate residual sidelobe artifacts in the restored image.
The restoring beam is \(4.3\times 3.3\) mas, PA \(-21\fdg 8\), and the rms image noise is 0.16 mJy beam\({}^{-1}\). The flux density of the recovered continuum is \(S_{\nu}=13.8\pm 0.3\) mJy. The centroid position of the resolved continuum is RA(J2000) = 02h 42m 40\(\fs\)70901, Dec(J2000) = \(-00^{\circ}\ 00^{\prime}\ 47\farcs 9448\). This 22 GHz position agrees well with the VLBA position of S1 measured at 5 and 8 GHz (Gallimore et al., 2004).

### Substructures and Filaments within the S1 Maser Spot Distribution

GG97 reported the appearance of arcuate and linear substructures among the distribution of maser spots in the sky. As evident in Figure 3, we also find linear and arcuate groupings of maser spots on the sky. Shown in Fig. 6, the R4 group, in particular, breaks down into nearly parallel filaments. We identified and labeled 34 such subgroups by means of a clustering algorithm. The details of the analysis and magnified plots of the other subgroups are provided in Appendix A. For the purposes of discussion, we labeled each subgroup by its parent group name and a lower-case suffix increasing alphabetically to the east (e.g., R2a, R2b, R3a, R3b, etc.). Many subgroups, particularly those belonging to group R4, orient nearly north-south, roughly 50\({}^{\circ}\) in PA from the overall distribution of maser spots on the sky. On the one hand, the nuclear jet and molecular outflow axes also point roughly north-south on the sky (Gallimore et al., 1996a, 2016), so the H\({}_{2}\)O maser filaments may be tracing gas participating in the molecular outflow.

Figure 3: Sky map of the nuclear H\({}_{2}\)O maser spots. The locations of the spots are plotted as color-filled circles. The size of the symbols scales with the flux density of the maser spot, and the symbols are color-coded by recessional velocity as shown in the colorbar. The spots are plotted atop the 5 GHz continuum contours from Gallimore et al. (2004). The contour levels are \(\pm 0.11\), 0.16, 0.24, 0.35, 0.51, 0.75 mJy beam\({}^{-1}\). The 5 GHz beam is shown as the blue ellipse on the lower right; the 22 GHz beam is the white ellipse. The sky coordinates are offsets relative to the VLBA 5 GHz continuum position of S1, RA(J2000) = 02h 42m 40\(\fs\)70905, Dec(J2000) = \(-00^{\circ}\ 00^{\prime}\ 47\farcs 945\).

However, the synthetic beam also orients roughly north-south. The concern then is whether the apparent filaments are artifacts resulting from channel-to-channel measurement uncertainties or calibration errors. Put another way, perhaps the substructures belong to spatially unresolved clumps, but random measurement errors or calibration (or other systematic) errors introduce an apparent scatter of maser spots comparable to the size and orientation of the beam. However, such errors should not introduce significant velocity gradients within the subgroups. In Fig. 7, we plot the orientation and velocity gradients of the subgroups. There are nine (9) subgroups of 34 that have (1) major axis position angles within 3\(\sigma\) of the synthetic beam position angle and (2) velocity gradients within 3\(\sigma\) of zero: R2a, R4h, R4i, R4j, R4k, G1b, G1e, B1a, and B1b. Furthermore, the scatter of maser spots within these subgroups is comparable to or smaller than the size of the beam. We conclude that these subgroups, in particular, are likely unresolved. The other 25 subgroups are significantly rotated from the major axis of the beam or have significant and varying velocity gradients.
Additionally, there is no consistent pattern in the velocity gradients that might otherwise hint at calibration errors. Any systematic errors should equally have affected the jet masers, for which we find no north-south (or any) filamentary structure. We conclude that most of the subgroups among the disk masers trace real substructures within the overall distribution of maser spots on the sky.

Figure 4: Sky map of the jet H\({}_{2}\)O maser spots. The locations of the spots are plotted as color-filled circles. The size of the symbols scales with the flux density of the maser spot, and the symbols are color-coded by recessional velocity as shown in the colorbar. The spots are plotted atop the 5 GHz continuum contours from Gallimore et al. (2004). The contour levels are \(\pm\)0.11, 0.16, 0.24, 0.35, 0.51, 0.75, 1.10, and 1.50 mJy beam\({}^{-1}\). The 5 GHz beam is shown as the blue ellipse on the lower right; the 22 GHz beam is the white ellipse. The sky coordinates are offsets relative to the VLBA 5 GHz continuum position of S1, RA(J2000) = 02h 42m 40\(\fs\)70905, Dec(J2000) = \(-\)00\({}^{\circ}\) 00\({}^{\prime}\) 47\(\farcs\)945.

### The Position-Velocity Diagram of the Disk Masers

The position-velocity (\(p\)-\(v\)) diagram is shown in Figure 8. To estimate the position angle of the major axis, we fit a line to the spot positions of the maser groups R4 and G1; the best-fit PA is \(-50^{\circ}\pm 1^{\circ}\). The \(p\)-\(v\) diagram broadly agrees with that of GG97. Groups R1-3 show falling velocities as expected for masers tracing the receding side of a rotating disk. This pattern is not matched on the blueshifted side; groups B2, B3, and B4 show a more complex arrangement in positions and velocity.

Maser disks commonly show a linear region between the maximum velocities on the \(p\)-\(v\) diagram (e.g., Moran et al., 1995; Herrnstein et al., 1999; Lo, 2005; Kuo et al., 2011; Gao et al., 2016, 2017). The linear region traces the inner radius of the maser region of the molecular disk (e.g., Watson & Wallin, 1994). Specifically, if the rotational velocity, \(v_{\rm rot}\), depends only on the radius of the disk \(r\), then the observed recessional velocities follow \(V_{R}=v_{\rm sys}+v_{\rm rot}(x/r)\sin i\), where \(v_{\rm sys}\) is the systemic velocity, \(x\) is the displacement on the sky along the disk midline (the offset along the projected major axis of the spot distribution), and \(i\) is the inclination of the disk. For a ring of constant radius \(r\), the \(p\)-\(v\) diagram is linear.

The disk masers of NGC 1068, however, show a curved pattern of spots between the maximum observed velocities on the \(p\)-\(v\) diagram, that is, between groups R4 and B4. To illustrate, we fit a line to the \(p\)-\(v\) coordinates of the spots in groups G1 and B1 and extrapolated the fit to cover the range of observed maser velocities. From inspection of Figure 8, the R4 group curves away from the best-fit line, displacing to higher recessional velocities with distance from the G1 masers. The masers of groups B2-4 also tend to higher recessional velocities than the extrapolated trend line. This apparent curvature between the R4 and B4 masers in the \(p\)-\(v\) diagram indicates that the orbital geometry is not a simple, rotating ring as expected for the inner radius of the H\({}_{2}\)O maser region.

Figure 5: Tapered 22 GHz continuum image of the nuclear radio source S1, recovered from line-free channels. As depicted in the scale bar on the right, the contours are \(\pm\)0.3, 0.6, 1.2, 2.4, and 2.85 mJy beam\({}^{-1}\). The maser spots are plotted as colored dots. The sky coordinates are offsets relative to the VLBA 5 GHz continuum position of S1, RA(J2000) = 02h 42m 40\(\fs\)70905, Dec(J2000) = \(-\)00\({}^{\circ}\) 00\({}^{\prime}\) 47\(\farcs\)945. The restoring beam for the continuum image is shown as the filled blue ellipse in the lower right.

## 4 Kinematic Models for the Disk Masers

In this section, we consider three kinematic models to explain the apparent curvature between the velocity extremes on the \(p\)-\(v\) diagram (i.e., between the R4 and B4 maser groups). In the first model, the expanding ring model, radial motions (infall or outflow) contribute to the observed recessional velocities for spot groups R4-B4. The second model assumes that the R4-B4 masers follow a common elliptical orbit. In the third model, we explore the possibility that the masers arise from spiral arms in the molecular accretion disk. In all three models, we assume that the brightest maser spots originate from molecular gas on the near side of the disk midline and the continuum source. The main argument is that the continuum source provides seed radiation that is amplified by the H\({}_{2}\)O masers. Furthermore, attenuation through ionized gas inside the molecular disk may suppress maser emission from the far side (e.g., Watson & Wallin, 1994). We describe in turn the model fitting and selection techniques (Section 4.1) and the particular details of each model (Sections 4.2-4.4), and we provide a discussion comparing the merits of the three kinematic models (Section 4.5).

### Model Fitting Techniques

We used the Markov Chain Monte Carlo (MCMC) code PyDREAM (Shockley et al., 2017) to fit kinematic models to the maser spot data. A more complete explanation of the MCMC technique is provided in Appendix B. Briefly, the algorithm generates random walks through the parameter space. For a trial set of parameters, a discretized model orbit was calculated in the disk frame and then projected onto the sky based on the model inclination (\(i\)), position angle (\(\Omega\)), and kinematic center (\(X_{0},Y_{0}\)). Maser spots were then matched to the nearest point on the projected model orbit, and the posterior probability of the fit was calculated. To account for systematic uncertainty, we included the (fitted) error floors \(\delta X\), \(\delta Y\), and \(\delta V\), which are added in quadrature to the measurement uncertainties (cf. Humphreys et al., 2013). At each step, a set of parameters is accepted or rejected according to the Metropolis criterion (Metropolis et al., 1953). The integrated autocorrelation time (IAT) was used to evaluate convergence (Foreman-Mackey et al., 2013).

Figure 6: A close-up of the R4 maser group. Subgroups are denoted by lower-case suffixes and are color-coded as shown in the legend. _Left panel:_ the sky distribution of maser spots. The gray-colored ellipse represents the synthetic beam. _Right panel:_ the declination-velocity diagram of the maser spots. The subgroups form a pattern of nearly north-south, linear features that show velocity gradients ranging from about \(-8\) to \(+20\) km s\({}^{-1}\) mas\({}^{-1}\) along the linear axis.

We used a mixture model to accommodate kinematic outliers. Using the formalism of mixture models, the kinematic model is the foreground, and outliers belong to the background model.
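To make the projection and matching step concrete, the following is a minimal sketch, not the PyDREAM-based implementation used for the fits; the function names and the angle conventions (inclination \(i\), position angle measured east of north, matching in positions only) are our own illustrative choices.

```python
import numpy as np

def project_orbit(r, phi, vr, vphi, i_deg, pa_deg, x0, y0, vsys):
    """Project a discretized disk-frame orbit onto the sky.
    r, phi: polar coordinates in the disk plane (phi measured from the midline);
    vr, vphi: radial and azimuthal velocity components in the disk frame.
    Returns east/north offsets and recessional velocities of the model points."""
    i, pa = np.radians(i_deg), np.radians(pa_deg)
    xd, yd = r * np.cos(phi), r * np.sin(phi)       # disk-frame Cartesian
    # foreshorten the axis perpendicular to the midline, then rotate to the PA
    x_sky = x0 + xd * np.sin(pa) + yd * np.cos(i) * np.cos(pa)
    y_sky = y0 + xd * np.cos(pa) - yd * np.cos(i) * np.sin(pa)
    v_sky = vsys + (vphi * np.cos(phi) + vr * np.sin(phi)) * np.sin(i)
    return x_sky, y_sky, v_sky

def match_residuals(xm, ym, vm, x, y, v):
    """Match each observed spot to the nearest projected model point."""
    j = np.argmin((x[:, None] - xm)**2 + (y[:, None] - ym)**2, axis=1)
    return x - xm[j], y - ym[j], v - vm[j]
```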
The posterior probability includes a sum of probabilities: effectively, the probability that a given maser spot belongs to the foreground and the probability that the maser spot belongs to the background. The background is modeled as a Gaussian distribution in sky coordinates and recessional velocity. Therefore, the background model introduces seven additional parameters: \(P_{fg}\), the probability that a maser spot belongs to the foreground (the kinematic model under consideration); the background centroids \(X_{bg}\), \(Y_{bg}\), and \(V_{bg}\); and the standard deviations of the background (\(\delta X_{bg}\), \(\delta Y_{bg}\), \(\delta V_{bg}\)).

We used the marginal likelihood, sometimes called the _evidence_, for model selection. The marginal likelihood is the probability of observing the data given the model, and so models with a greater (log) marginal likelihood are favored as better representing the data. We used two estimators of the marginal likelihood, the widely applicable Bayesian information criterion (WBIC) (Friel et al., 2017) and the thermodynamic integration estimator (TIE) (Gelman & Meng, 1998; Neal, 2000). The comparison of two models is summarized by the Bayes factor (BF), which is the ratio of the probability of obtaining the data under two different models: \(\log\mathrm{BF}_{1,2}=\log\left[\mathrm{evidence(model\ 1)}\right]-\log\left[\mathrm{evidence(model\ 2)}\right]\) (see Kass & Raftery 1995, for a review). The estimates of the marginal likelihoods and the corresponding Bayes factors for the expanding ring, elliptical orbit, and spiral arm models are provided in Table 3.

Figure 7: Properties of the maser subgroups plotted vs. the major axis offset. The principal groups are color-coded as shown in the legend. The upper plot shows the position angle of each subgroup compared to the synthetic beam position angle, which is indicated by the red dashed line. The lower plot shows the mean velocity gradient along the major axis of each subgroup. The red dashed line traces zero velocity gradient.
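As a concrete illustration of the outlier treatment, the following is a minimal sketch of a per-spot mixture log-likelihood of the kind described above, assuming Gaussian foreground residuals and a Gaussian background; the variable names are ours, and this is not the code used for the published fits.

```python
import numpy as np

def log_normal(x, mu, sigma):
    """Log of a normal probability density."""
    return -0.5 * ((x - mu) / sigma)**2 - np.log(sigma * np.sqrt(2.0 * np.pi))

def mixture_loglike(rx, ry, rv, sx, sy, sv, p_fg,
                    x, y, v, xbg, ybg, vbg, sxbg, sybg, svbg):
    """Total log-likelihood, sum_k log[ P_fg L_fg,k + (1 - P_fg) L_bg,k ].
    rx, ry, rv: residuals of each spot to the projected kinematic model;
    sx, sy, sv: measurement errors with the fitted error floors
                (deltaX, deltaY, deltaV) added in quadrature."""
    log_fg = (log_normal(rx, 0.0, sx) + log_normal(ry, 0.0, sy)
              + log_normal(rv, 0.0, sv))
    log_bg = (log_normal(x, xbg, sxbg) + log_normal(y, ybg, sybg)
              + log_normal(v, vbg, svbg))
    return np.logaddexp(np.log(p_fg) + log_fg, np.log1p(-p_fg) + log_bg).sum()
```

The per-spot foreground probability used for the shading in the model figures can then be recovered as \(P_{fg}L_{fg}/[P_{fg}L_{fg}+(1-P_{fg})L_{bg}]\).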
### Expanding Ring Model

In this model, we assume that the R4-B4 maser spots occupy a narrow annulus of radius \(r_{0}\) and the R1-R3 maser spots fall along the disk midline. The kinematics are defined by constant radial velocity \(v_{r}\) (representing infall or outflow) and constant azimuthal (i.e., rotational) velocity \(v_{0}\). To summarize, for a single circular orbit with uniform radial motion, there are eleven (11) parameters in the foreground model: \(X_{0}\), \(Y_{0}\), \(v_{\rm sys}\), \(r_{0}\), \(v_{r}\), \(v_{0}\), \(i\), \(\Omega\), and error floors \(\delta X\), \(\delta Y\), and \(\delta V\). The results of the expanding ring model are provided in Table 4 and illustrated in Figure 9.

### Elliptical Orbit Model

As in the expanding-ring model, the elliptical orbit model is a single orbit model applied to maser groups R4-B4. In addition to the sky projection angles \(i\) and \(\Omega\), elliptical orbits require an additional angle, \(\omega\), the argument of periapsis, which is the azimuthal angle between the midline and the periapsis (cf. Humphreys et al. 2013). The shape of the orbit is determined by the semimajor axis \(a\) and the eccentricity \(e\). In cylindrical coordinates (\(r\), \(\phi\)), the shape of the orbit is given by \[r=\frac{a(1-e^{2})}{1+e\cos(\phi-\omega)}\,. \tag{1}\] In the disk rest frame, the radial and azimuthal components of the orbital velocity, \(v_{r}\) and \(v_{\phi}\), are \[v_{r} = v_{a}\,\frac{e\sin(\phi-\omega)}{\sqrt{1-e^{2}}}\,,\ \text{and}\] \[v_{\phi} = v_{a}\,\frac{1+e\cos(\phi-\omega)}{\sqrt{1-e^{2}}}\,,\] where \(v_{a}\) is the Keplerian circular speed at radius \(a\). The foreground model requires twelve (12) parameters: \(X_{0}\), \(Y_{0}\), \(v_{\text{sys}}\), \(a\), \(e\), \(v_{a}\), \(\omega\), \(i\), \(\Omega\), \(\delta X\), \(\delta Y\), \(\delta V\). The results of the fit are provided in Table 5 and Fig. 10.

Figure 8: Position-velocity diagram of the nuclear H\({}_{2}\)O masers. Position offsets are taken along PA \(-50^{\circ}\). The data have been colored and annotated based on the maser group label. The dashed line shows a linear fit to the G1 and B1 maser groups, extrapolated to span the observed maser velocities. The fit is not intended as a physical model but rather is intended to highlight the curvature of the distribution of maser spots in the position-velocity plane.

### Spiral Arms

In the spiral arms model, the H\({}_{2}\)O masers trace molecular gas at a smoothly changing distance from the dynamical center, which introduces the curve on the \(p\)-\(v\) diagram between the maser groups R4 and B4. We adopted a logarithmic spiral model for the distribution of maser spots, \[r=r_{0}\exp\left(\phi\tan\theta_{P}\right), \tag{2}\] where \(r_{0}\) is a length scale parameter and \(\theta_{P}\) is the pitch angle of the arms. We further assume that there are two symmetric arms, a main arm intended to fit the R4-G1 masers and a symmetric opposite arm rotated by \(\Delta\phi=180^{\circ}\) in the disk plane. To simplify the model, we assume that Keplerian rotation dominates the kinematics, but we also allow for uniform streaming motion locally parallel to the arms. The inclusion of streaming along the arm introduces an additional parameter, the streaming speed \(v_{S}\). Radial and azimuthal velocities in the disk frame are given by \[v_{r} =v_{S}\,\sin\theta_{P}\,,\ \text{and} \tag{3}\] \[v_{\phi} =v_{0}\,\sqrt{\frac{r_{0}}{r}}+v_{S}\,\cos\theta_{P}\,,\] where \(v_{0}\) is the Keplerian rotation speed at \(r=r_{0}\). In summary, the spiral arm model requires twelve (12) parameters: \(x_{0}\), \(y_{0}\), \(r_{0}\), \(v_{\text{sys}}\), \(v_{0}\), \(v_{S}\), \(\theta_{P}\), \(i\), \(\Omega\), \(\delta X\), \(\delta Y\), \(\delta V\).

Figure 9: A representative fit of the expanding ring model, in sky coordinates, for the maser disk of NGC 1068. In both panels, the maser spot positions and velocities are shown as shaded circles. The circle diameters are scaled to the flux of the maser spots. The shading depends on the foreground probability for individual spots: lighter shades indicate maser spots that are likely part of the model orbit (the foreground model), and darker shades indicate outliers. Left panel: sky plot of the best-fit circular orbit, traced by the red lines; the near side of the midline is plotted as a solid line, and the far side as a dashed line. The kinematic center is shown as a cyan circle. The blue dotted line traces the disk midline. Right panel: the position-velocity diagram. The blue dotted lines trace the Keplerian circular speed curve.
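The following is a minimal sketch of the arm geometry and kinematics of Eqs. 2-3, evaluated at the best-fit values quoted in Table 6; it is an illustration, not the fitting code.

```python
import numpy as np

def spiral_arm(phi, r0=6.70, pitch_deg=4.89, v0=404.0, vs=0.4, dphi=0.0):
    """Logarithmic spiral arm (Eq. 2) with Keplerian rotation plus uniform
    streaming parallel to the arm (Eq. 3). dphi rotates the arm in the disk
    plane; the symmetric arm uses dphi = pi. Lengths in mas, speeds in km/s."""
    thp = np.radians(pitch_deg)
    r = r0 * np.exp((phi - dphi) * np.tan(thp))
    v_r = np.full_like(r, vs * np.sin(thp))
    v_phi = v0 * np.sqrt(r0 / r) + vs * np.cos(thp)
    return r, v_r, v_phi

phi = np.linspace(0.0, 1.5 * np.pi, 200)
r_main, vr_main, vphi_main = spiral_arm(phi)            # main arm (R4-G1)
r_opp, vr_opp, vphi_opp = spiral_arm(phi, dphi=np.pi)   # symmetric opposite arm
```

Feeding these disk-frame points through a sky projection like the one sketched in Section 4.1 reproduces the curved locus between the velocity extremes of the \(p\)-\(v\) diagram.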
The results are summarized in Table 6 and plotted in Figure 11.

### Kinematic Model Selection

Overall, the spiral arms model better fits the data compared to the expanding ring and elliptical orbit models. Formally, the spiral arms model has the highest marginal likelihood by a factor of \(\sim\exp(1000)\) relative to the other models (Table 3). However, this result should be viewed critically because, from inspection of Figs. 9 and 10, the disk midline model for groups R1-R3 is partly responsible for the poorer fits. To that point, the R1 masers are identified as outliers in both the expanding-ring and elliptical orbit models, and the R2 masers are also identified as outliers in the elliptical orbit model.

\begin{table} \begin{tabular}{l l c c c} \hline \hline Parameter & Prior\({}^{a}\) & Value & Units & \(N\)/IAT \\ \hline \multicolumn{5}{c}{Foreground Model} \\ \hline Foreground Prob., \(P_{fg}\) & \(\mathcal{U}(0,1)\) & \(0.984\pm 0.006\) & \(\cdots\) & 4865 \\ Kinematic center, \(X_{0}\) & \(\mathcal{U}(-10,10)\) & \(0.49\pm 0.04\) & mas & 3973 \\ Kinematic center, \(Y_{0}\) & \(\mathcal{U}(-10,10)\) & \(0.74\pm 0.03\) & mas & 4021 \\ Systemic velocity, \(v_{\rm sys}\) & \(\mathcal{N}(1132,5)\) & \(1070\pm 1\) & km s\({}^{-1}\) & 4043 \\ Semimajor axis, \(a\) & \(\mathcal{U}(5,25)\) & \(10.95\pm 0.05\) & mas & 3919 \\ Circular speed at \(a\), \(v_{a}\) & \(\mathcal{U}(300,500)\) & \(330.3\pm 0.9\) & km s\({}^{-1}\) & 4060 \\ Eccentricity, \(e\) & \(\mathcal{U}(0,1)\) & \(0.258\pm 0.004\) & \(\cdots\) & 4235 \\ Argument of periapsis, \(\omega\) & \(\mathcal{U}(0,360)\) & \(0.07\pm 0.08\) & degrees & 2599 \\ Inclination, \(i\) & \(\mathcal{U}(90,110)\) & \(80.7\pm 0.1\) & degrees & 4193 \\ Position Angle, \(\Omega\) & \(\mathcal{U}(270,360)\) & \(309.5\pm 0.1\) & degrees & 3967 \\ Systematic error, \(\delta X\) & \(\mathcal{U}(0,4)\) & \(0.228\pm 0.005\) & mas & 4204 \\ Systematic error, \(\delta Y\) & \(\mathcal{U}(0,4)\) & \(0.58\pm 0.01\) & mas & 3915 \\ Systematic error, \(\delta V\) & \(\mathcal{U}(0,20)\) & \(4.7\pm 0.1\) & km s\({}^{-1}\) & 4449 \\ \hline \multicolumn{5}{c}{Background Model} \\ \hline Center, \(X_{bg}\) & \(\mathcal{U}(-20,20)\) & \(-9.19\pm 0.09\) & mas & 4129 \\ Center, \(Y_{bg}\) & \(\mathcal{U}(-20,20)\) & \(11.5\pm 0.4\) & mas & 4171 \\ Center, \(V_{bg}\) & \(\mathcal{N}(1132,400)\) & \(1388\pm 2\) & km s\({}^{-1}\) & 4487 \\ Width, \(\delta X_{bg}\) & \(\mathcal{H}(0,20)\) & \(0.36\pm 0.07\) & mas & 3733 \\ Width, \(\delta Y_{bg}\) & \(\mathcal{H}(0,20)\) & \(1.5\pm 0.3\) & mas & 3221 \\ Width, \(\delta V_{bg}\) & \(\mathcal{H}(0,100)\) & \(8\pm 2\) & km s\({}^{-1}\) & 3694 \\ \hline \multicolumn{5}{c}{Derived Parameters\({}^{b}\)} \\ \hline Semimajor axis & \(\cdots\) & \(0.744\pm 0.004\) & pc & \(\cdots\) \\ Central Mass & \(\cdots\) & \(18.9\pm 0.2\) & \(10^{6}\,M_{\odot}\) & \(\cdots\) \\ Orbital Period & \(\cdots\) & \(13.83\pm 0.04\) & kyr & \(\cdots\) \\ \hline \end{tabular} \({}^{a}\)\(\mathcal{U}(x_{l},x_{h})\) is a uniform distribution with lower bound \(x_{l}\) and upper bound \(x_{h}\). \(\mathcal{N}(\mu,\sigma)\) is a normal distribution with mean \(\mu\) and standard deviation \(\sigma\). \(\mathcal{H}(\mu,\sigma)\) is a half-normal distribution with lower limit \(\mu\). \({}^{b}\)Assumes \(1^{\prime\prime}=70\) pc. \end{table} Table 5: Elliptical Orbit Model
Even so, both the expanding ring and elliptical orbit models are problematic in other ways. First, the expanding-ring model also fails to fit the B1 and B4 masers, which are identified as outliers.
Second, the best-fit systemic velocity of the elliptical orbit model is \(1070\pm 1\) km s\({}^{-1}\), roughly 60 km s\({}^{-1}\) blueshifted relative to the systemic velocity of the host galaxy. For comparison, the spiral arm model produces a best-fit systemic velocity within \(2\sigma\) of the recessional velocity of the host galaxy and the surrounding molecular disk. Only the B4 maser group is identified as an outlier in the spiral arms model, although it appears that the R1 masers are also poorly fitted. Insofar as the spiral arms model provides, formally, the best goodness-of-fit, the best match to the systemic velocity of the host galaxy, and the best fit to most of the maser spots, we maintain the conclusion that the spiral arms model provides a better description of the data.

In contrast to previous interpretations of the \(p\)-\(v\) diagram, we have assumed a Keplerian rotation curve in our kinematic models. To assess this assumption, we modified the spiral arms model to include a power law rotation curve: \(v_{\phi}=v_{0}(r/r_{0})^{\alpha}\), where \(\alpha\) is a fitted parameter. As earlier interpretations concluded \(\alpha\sim-0.3\) (Lodato & Bertin, 2003), we adopted a generous and uniform prior, \(\alpha\sim\mathcal{U}(-1,0)\). The results are provided in Table 7.
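A short numerical check of what the fitted index implies, using the best-fit scale values of Tables 6-7 (an illustration, not the fitting code):

```python
def v_rot(r_mas, r0=6.7, v0=404.0, alpha=-0.5):
    """Power-law rotation curve of the modified model, v0 * (r/r0)**alpha."""
    return v0 * (r_mas / r0) ** alpha

for alpha in (-0.5, -0.3):     # Keplerian vs. the earlier sub-Keplerian claim
    print(alpha, [round(v_rot(r, alpha=alpha)) for r in (6.7, 10.0, 15.0)])
```

At \(r=15\) mas the two curves differ by roughly 50 km s\({}^{-1}\), well above the fitted velocity error floor, which is why the data can distinguish the indices.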
For this modified spiral arm model, \(\mathrm{WBIC}=-3607\pm 2\) and \(\mathrm{TIE}=-3538\pm 2\), indicating a goodness of fit comparable to that of the unmodified model. The best-fit power-law index is \(\alpha=-0.51\pm 0.01\), consistent with Keplerian rotation. Based on a comparison of Tables 6 and 7, the other parameters do not show significant changes.

\begin{table} \begin{tabular}{l l c c c} \hline \hline Parameter & Prior\({}^{a}\) & Value & Units & \(N\)/IAT \\ \hline \multicolumn{5}{c}{Foreground Model} \\ \hline Foreground Prob., \(P_{fg}\) & \(\mathcal{U}(0,1)\) & \(0.990\pm 0.003\) & \(\cdots\) & 2699 \\ Kinematic center, \(X_{0}\) & \(\mathcal{U}(-10,10)\) & \(1.579\pm 0.007\) & mas & 2472 \\ Kinematic center, \(Y_{0}\) & \(\mathcal{U}(-10,10)\) & \(0.07\pm 0.02\) & mas & 2180 \\ Systemic velocity, \(v_{\rm sys}\) & \(\mathcal{N}(1132,5)\) & \(1125.8\pm 0.2\) & km s\({}^{-1}\) & 2174 \\ Scale radius, \(r_{0}\) & \(\mathcal{U}(5,25)\) & \(6.70\pm 0.02\) & mas & 2711 \\ Circular speed at \(r_{0}\), \(v_{0}\) & \(\mathcal{U}(300,500)\) & \(404\pm 1\) & km s\({}^{-1}\) & 1688 \\ Streaming speed, \(v_{S}\) & \(\mathcal{U}(0,100)\) & \(0.4\pm 0.5\) & km s\({}^{-1}\) & 1439 \\ Pitch angle, \(\theta_{P}\) & \(\mathcal{U}(4,24)\) & \(4.89\pm 0.03\) & degrees & 2426 \\ Inclination, \(i\) & \(\mathcal{U}(90,110)\) & \(75.5\pm 0.1\) & degrees & 1856 \\ Position Angle, \(\Omega\) & \(\mathcal{U}(270,360)\) & \(313.4\pm 0.1\) & degrees & 1975 \\ Systematic error, \(\delta X\) & \(\mathcal{U}(0,4)\) & \(0.153\pm 0.004\) & mas & 2508 \\ Systematic error, \(\delta Y\) & \(\mathcal{U}(0,4)\) & \(0.74\pm 0.02\) & mas & 2873 \\ Systematic error, \(\delta V\) & \(\mathcal{U}(0,20)\) & \(2.60\pm 0.06\) & km s\({}^{-1}\) & 2699 \\ \hline \multicolumn{5}{c}{Background Model} \\ \hline Center, \(X_{bg}\) & \(\mathcal{U}(-20,20)\) & \(10.87\pm 0.02\) & mas & 2014 \\ Center, \(Y_{bg}\) & \(\mathcal{U}(-20,20)\) & \(-4.99\pm 0.07\) & mas & 2404 \\ Center, \(V_{bg}\) & \(\mathcal{N}(1132,400)\) & \(821\pm 1\) & km s\({}^{-1}\) & 2276 \\ Width, \(\delta X_{bg}\) & \(\mathcal{H}(0,20)\) & \(0.04\pm 0.02\) & mas & 1823 \\ Width, \(\delta Y_{bg}\) & \(\mathcal{H}(0,20)\) & \(0.19\pm 0.07\) & mas & 2218 \\ Width, \(\delta V_{bg}\) & \(\mathcal{H}(0,100)\) & \(4\pm 1\) & km s\({}^{-1}\) & 1779 \\ \hline \multicolumn{5}{c}{Derived Parameters\({}^{b}\)} \\ \hline Scale radius & \(\cdots\) & \(0.455\pm 0.002\) & pc & \(\cdots\) \\ Central Mass & \(\cdots\) & \(17.2\pm 0.1\) & \(10^{6}\,M_{\odot}\) & \(\cdots\) \\ Orbital Period at \(r=r_{0}\) & \(\cdots\) & \(6.92\pm 0.04\) & kyr & \(\cdots\) \\ \hline \end{tabular} \({}^{a}\)\(\mathcal{U}(x_{l},x_{h})\) is a uniform distribution with lower bound \(x_{l}\) and upper bound \(x_{h}\). \(\mathcal{N}(\mu,\sigma)\) is a normal distribution with mean \(\mu\) and standard deviation \(\sigma\). \(\mathcal{H}(\mu,\sigma)\) is a half-normal distribution with lower limit \(\mu\). \({}^{b}\)Assumes \(1^{\prime\prime}=70\) pc. \end{table} Table 6: Spiral Arms Model

To assess the fit of the H\({}_{2}\)O maser rotation curve further, we compare the extrapolated model rotation curves with the tangential velocity curve of HCN (\(J=3\to 2\)) emission measured by Impellizzeri et al. (2019). The results are shown in Fig. 12.
Note that the molecular gas traced by the HCN emission counterrotates relative to the H\({}_{2}\)O maser disk, so the extrapolated rotation curves have been inverted. The HCN tangential velocity curve traces the kinematics on scales larger than the H\({}_{2}\)O maser disk, \(\sim\) 20-100 mas (\(\sim\) 1.4-7 pc). Therefore, if the mass of the maser disk is a substantial fraction of the enclosed mass (see, e.g., Lodato & Bertin, 2003), the HCN velocities should exceed the extrapolated rotation curves. On the contrary, the extrapolated rotation curve of the spiral arm model agrees well with the HCN tangential velocity curve.

This result leads to several important conclusions. Most obviously, the spiral arm model better predicts the HCN tangential velocity curve, further supporting it as a better model compared to the expanding ring and elliptical orbit models. Second, the close agreement between the HCN tangential velocity curve and the extrapolated H\({}_{2}\)O maser rotation curve is likely not coincidental. Rather, it appears that counterrotation dominates the observed tangential velocities of the outer molecular disk. Third, the rotation curve of the H\({}_{2}\)O maser disk of NGC 1068 is almost certainly Keplerian. A flatter rotation curve would extrapolate to rotation velocities that exceed those observed in the outer molecular disk. Putting these conclusions together, within a projected distance of \(\sim 100\) mas (7 pc) of the kinematic center, the gravitational potential is dominated by a compact mass of \((17.2\pm 0.1)\times 10^{6}\)\(M_{\odot}\).

\begin{table} \begin{tabular}{l l c c c} \hline \hline Parameter & Prior\({}^{a}\) & Value & Units & \(N\)/IAT \\ \hline \multicolumn{5}{c}{Foreground Model} \\ \hline Foreground Prob., \(P_{fg}\) & \(\mathcal{U}(0,1)\) & \(0.990\pm 0.003\) & \(\cdots\) & 2744 \\ Kinematic center, \(X_{0}\) & \(\mathcal{U}(-10,10)\) & \(1.581\pm 0.007\) & mas & 2165 \\ Kinematic center, \(Y_{0}\) & \(\mathcal{U}(-10,10)\) & \(0.08\pm 0.03\) & mas & 2172 \\ Systemic velocity, \(v_{\rm sys}\) & \(\mathcal{N}(1132,5)\) & \(1125.8\pm 0.3\) & km s\({}^{-1}\) & 1912 \\ Scale radius, \(r_{0}\) & \(\mathcal{U}(5,25)\) & \(6.71\pm 0.03\) & mas & 2325 \\ Circular speed at \(r_{0}\), \(v_{0}\) & \(\mathcal{U}(300,500)\) & \(406\pm 2\) & km s\({}^{-1}\) & 2345 \\ Rotation curve power-law index, \(\alpha\) & \(\mathcal{U}(-1,0)\) & \(-0.51\pm 0.01\) & \(\cdots\) & 2468 \\ Streaming speed, \(v_{S}\) & \(\mathcal{U}(0,100)\) & \(0.5\pm 0.5\) & km s\({}^{-1}\) & 1649 \\ Pitch angle, \(\theta_{P}\) & \(\mathcal{U}(4,24)\) & \(4.88\pm 0.03\) & degrees & 2289 \\ Inclination, \(i\) & \(\mathcal{U}(90,110)\) & \(75.5\pm 0.1\) & degrees & 1839 \\ Position Angle, \(\Omega\) & \(\mathcal{U}(270,360)\) & \(313.4\pm 0.1\) & degrees & 1839 \\ Systematic error, \(\delta X\) & \(\mathcal{U}(0,4)\) & \(0.153\pm 0.004\) & mas & 2420 \\ Systematic error, \(\delta Y\) & \(\mathcal{U}(0,4)\) & \(0.73\pm 0.02\) & mas & 2471 \\ Systematic error, \(\delta V\) & \(\mathcal{U}(0,20)\) & \(2.59\pm 0.06\) & km s\({}^{-1}\) & 2859 \\ \hline \multicolumn{5}{c}{Background Model} \\ \hline Center, \(X_{bg}\) & \(\mathcal{U}(-20,20)\) & \(10.87\pm 0.02\) & mas & 2224 \\ Center, \(Y_{bg}\) & \(\mathcal{U}(-20,20)\) & \(-5.00\pm 0.07\) & mas & 2188 \\ Center, \(V_{bg}\) & \(\mathcal{N}(1132,400)\) & \(821\pm 1\) & km s\({}^{-1}\) & 2513 \\ Width, \(\delta X_{bg}\) & \(\mathcal{H}(0,20)\) & \(0.04\pm 0.02\) & mas & 1738 \\ Width, \(\delta Y_{bg}\) & \(\mathcal{H}(0,20)\) & \(0.19\pm 0.07\) & mas & 2018 \\ Width, \(\delta V_{bg}\) & \(\mathcal{H}(0,100)\) & \(4\pm 1\) & km s\({}^{-1}\) & 1726 \\ \hline \multicolumn{5}{c}{Derived Parameters\({}^{b}\)} \\ \hline Scale radius & \(\cdots\) & \(0.456\pm 0.002\) & pc & \(\cdots\) \\ Central Mass & \(\cdots\) & \(17.4\pm 0.2\) & \(10^{6}\,M_{\odot}\) & \(\cdots\) \\ Orbital Period at \(r=r_{0}\) & \(\cdots\) & \(6.90\pm 0.04\) & kyr & \(\cdots\) \\ \hline \end{tabular} \({}^{a}\)\(\mathcal{U}(x_{l},x_{h})\) is a uniform distribution with lower bound \(x_{l}\) and upper bound \(x_{h}\). \(\mathcal{N}(\mu,\sigma)\) is a normal distribution with mean \(\mu\) and standard deviation \(\sigma\). \(\mathcal{H}(\mu,\sigma)\) is a half-normal distribution with lower limit \(\mu\). \({}^{b}\)Assumes \(1^{\prime\prime}=70\) pc. \end{table} Table 7: Modified Spiral Arms Model

### An Examination of the Keplerian Rotation Curve

An important question follows: why do the kinematic models favor Keplerian rotation while the \(p\)-\(v\) diagram suggests sub-Keplerian rotation? It should be emphasized that the \(p\)-\(v\) diagram analysis uses projections of the redshifted maser spot coordinates, only about 12% of the data, to estimate the rotation curve, but our kinematic models use all the maser spots. Unlike spots confined to a single radius, maser spots within spiral arms sample a range of radii and help constrain the rotation curve. More specifically, the redshifted masers sample \(r\) between approximately 9 and 15 mas (0.6 to 1 pc), and the spiral arms sampled by groups R4, B2, and B3 redundantly sample the range 9 to 12 mas (0.6 to 0.8 pc). Streaming motions associated with spiral arms might affect the \(p\)-\(v\) diagram analysis; however, we find that the streaming motions are negligible compared to the rotation speeds, \(v_{S}=0.4\pm 0.5\) km s\({}^{-1}\) (Table 6).

Rather, the answer lies in the assumption made in the interpretation of the \(p\)-\(v\) diagram, namely, that the maser groups R1-3 and the highest velocity masers of R4 lie along the disk midline. Certainly, that is a reasonable assumption as enhanced maser self-amplification is expected along the midline of an edge-on disk (Watson & Wallin, 1994). However, the plane of the maser disk is not viewed edge-on, but is tilted by nearly 15\({}^{\circ}\) from the line of sight (\(i=105^{\circ}\); Table 6), which reduces the path length of the line of sight through the molecular disk. It seems more likely, then, that the masers also sample local density enhancements within the disk as well as favorable coherent path lengths through the disk.

Fig. 13 shows the spiral arm model in the disk frame. We caution that the G1 masers suffer deprojection artifacts because, along the sight line to the kinematic center, the recessional velocity is insensitive to the rotation curve, and the deprojection of the nearly edge-on disk produces degenerate solutions in disk radius.

Figure 12: A comparison of the extrapolated rotation curves of the H\({}_{2}\)O maser kinematic models and the HCN (\(J=3\to 2\)) tangential velocity curve from Impellizzeri et al. (2019). Note that the gas traced by HCN counterrotates relative to the H\({}_{2}\)O maser disk, and so the rotation curves have been inverted.

Putting aside the deprojection of the G1 maser spots, two results stand out.
First, the model favors displacing the highest-velocity masers (within the R4 masers) closer to the observer than the disk midline, the R3 masers just crossing the midline, and the R2 masers roughly symmetrically distributed across the midline. As a result, the recessional velocities sampled by the R3 and R4 masers tend to fall just below the circular speed rotation curve. The displacement especially of the high-velocity R4 masers away from the midline creates the illusion of a flatter rotation curve on the \(p\)-\(v\) diagram.

The second result is that the spiral arm model places most of the maser spots in opposite quadrants of the disk, with the blueshifted masers behind the disk midline (farther away from the observer) and the near-systemic and redshifted masers before the midline (closer to the observer). This pattern resembles ionization cones seen in the NLRs of Seyfert galaxies, which result from selective obscuration (e.g., Wilson & Tsvetanov, 1994). In the case of the maser disk, it seems more likely that the asymmetry results from a warped disk or outflows that might elevate molecular clouds out of the disk plane and expose them to the central engine. Alternatively, the central engine could produce an intrinsically anisotropic, polar radiation field (cf. Netzer, 2015), and the masers occur in regions that view the central accretion disk more nearly pole-on.

Figure 13: The spiral arm model in the frame of the maser disk. The line of sight is towards positive \(y\). Note that the G1 masers appear to sample arms at a range of distances, but this result is likely an artifact of projection of the near-side maser spots onto the flat disk model.

To look for evidence of a warped disk, we deprojected the disk assuming that minor-axis residuals on the sky result only from the vertical structure of the disk. In this case, the disk frame \(z\)-coordinate of a maser spot is given by \[z=r\left(\frac{\sin\phi}{\tan i}\right)+(X-X_{0})\cos\Omega-(Y-Y_{0})\sin\Omega, \tag{4}\] where \(r\) and \(\phi\) are the polar coordinates of the model maser spot. The resulting vertical structure of the disk is shown in Fig. 14. We find that, by design, most of the maser spots lie close to \(z=0\), but there are notable deviations over the G1-R4 region (disk frame \(x\) between roughly 0 and 10 mas). One possibility is that these deviations result from an unaccounted-for warp in the molecular accretion disk. However, the displacement of the G1-R4 masers is comparable to the length of the maser substructures. To that point, the filamentary substructures align nearly perpendicular to the best-fit disk plane, suggesting outflow along poloidal magnetic field lines (see Sec. 6). The morphology suggests that the asymmetry of the H\({}_{2}\)O maser spot distribution does not result from warps or vertical structure in the molecular disk; rather, it seems more likely that the molecular disk is anisotropically heated.

### An Anisotropic Heating Model for the Molecular Accretion Disk

So far, H\({}_{2}\)O megamaser disks are found mainly in active galactic nuclei (cf. Kuo et al., 2018), and the central engine likely plays a role in powering H\({}_{2}\)O megamaser emission.
Toward understanding the origin of H\({}_{2}\)O masers from circumnuclear disks, Neufeld & Maloney modeled the effects of X-ray heating on molecular gas (Neufeld et al., 1994; Neufeld & Maloney, 1995); they demonstrated that X-ray heating produces a region of enhanced H\({}_{2}\)O abundance with temperatures suited to pump the 22 GHz maser transition through collisions. Turning to observational evidence, Gallimore et al. (2001) demonstrated that the fluxes of the blueshifted and redshifted disk masers of NGC 1068 are correlated over nearly 15 years of monitoring. They also detected a simultaneous flare of blueshifted and redshifted maser features that lasted less than 84 days; however, the projected distance between the maser spots is \(\sim 20\) mas (see Fig. 8), corresponding to a separation \(>4\) light-years. The likeliest explanation is that the masers respond to variations of the (unfortunately hidden) central engine (see also Neufeld, 2000, for a theoretical treatment of H\({}_{2}\)O maser reverberation).

Motivated by the X-ray heating model for disk megamasers, we consider an anisotropic heating model to explain the distribution of maser spots in the deprojected spiral arms model (Fig. 13). Specifically, we consider the radiation pattern produced by a geometrically thin accretion disk with a Thomson-thick atmosphere (Netzer, 1987). A cloud located at distance \(R\) from the accretion disk receives flux \[F(R,\theta)\propto\frac{\cos\theta(1+2\cos\theta)}{3R^{2}}\,, \tag{5}\] where \(\theta\) is the angle between the polar axis of the accretion disk and the radius vector to the cloud. To constrain the model, we assume that the small-scale radio jet along PA \(\sim 11^{\circ}\) marks the projection of the accretion disk axis on the sky. We further assume that the maser spots occupy quadrants of the molecular disk that receive more AGN continuum flux than the adjacent quadrants. We calculated the illumination of the molecular disk over a grid of accretion disk inclinations and searched for illumination patterns that preferentially heat the maser quadrants of the molecular disk. From inspection of the resulting model grids, the best matches result for accretion disk inclinations between \(\approx 56^{\circ}\) and \(66^{\circ}\) (i.e., the northern radio jet axis tilts toward the observer \(\approx 24^{\circ}\) to \(34^{\circ}\) from the plane of the sky). Figure 15 shows the illumination pattern for accretion disk inclination \(60^{\circ}\). Note that, for this illustration, we have not included the effects of radiative transfer through the molecular disk; rather, our goal was to illustrate the disk heating pattern that results when the larger molecular accretion disk does not align with the inner accretion disk.

Figure 14: The edge-on view of the deprojected spiral arm model. The disk-vertical (\(z\)) coordinates were estimated assuming that the minor axis residuals of the model fit result solely from the vertical structure. The disk-frame \(x\) axis coordinate is in the plane of the disk and aligns with the projected major axis on the sky.

Based on the maser data alone, it is unclear which of the following has a greater effect on the distribution of maser spots in the molecular accretion disk: anisotropic heating or the vertical structure of the disk. However, it is unclear why vertical structures would preferentially occur in opposite quadrants of the molecular disk unless the disk is anisotropically heated. We revisit the anisotropic heating model in Sec. 5.
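A minimal sketch of the illumination calculation (Eq. 5) for midplane clouds, under a simplified geometry in which the accretion-disk polar axis is tilted toward disk azimuth \(\phi=0\); the conventions, the tilt value, and the function names are ours, and, as in Fig. 15, no radiative transfer is included.

```python
import numpy as np

def rel_flux(R_mas, cos_theta):
    """Eq. 5: relative flux received at distance R and polar angle theta."""
    return cos_theta * (1.0 + 2.0 * cos_theta) / (3.0 * R_mas**2)

def cos_theta_midplane(phi, tilt_deg):
    """|cos(theta)| for a cloud in the molecular-disk midplane at azimuth phi,
    with the accretion-disk axis tilted tilt_deg from the molecular-disk axis
    (an assumed, simplified geometry for illustration)."""
    return np.abs(np.cos(phi)) * np.sin(np.radians(tilt_deg))

phi = np.radians(np.arange(0.0, 360.0, 45.0))
flux = rel_flux(5.0, cos_theta_midplane(phi, tilt_deg=60.0))
print(np.round(flux / flux.max(), 2))
# flux peaks at phi = 0 and 180 deg: opposite quadrants of the disk are
# heated most strongly, as in the pattern shown in Fig. 15
```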
### Constraints on the Mass of the Molecular Accretion Disk

The presence of spiral arms implies self-gravitation in the disk, which, in turn, can be used to estimate the mass of the molecular disk (see Maoz 1995 for a similar analysis of NGC 4258). The Toomre \(Q\) parameter evaluates the stability of a differentially rotating disk against fragmentation and collapse (Toomre, 1964). For a Keplerian disk in orbit around a black hole, \[Q\approx 2\frac{M_{bh}}{M_{d}}\frac{c}{v_{\phi}}\,, \tag{6}\] where \(M_{bh}\) is the central black hole mass, \(M_{d}\) is the disk mass, and \(c\) is the characteristic wave speed in the gas, whether the sound speed \(c_{s}\approx 0.08\sqrt{T}\,\mathrm{km\,s^{-1}}\) (cf. Kratter & Lodato, 2016), or, in a magnetized disk, the Alfven speed, \(v_{A}\) (Kim & Ostriker, 2001). Spiral structure appears when \(1\lesssim Q\lesssim 2\). Normalizing to values appropriate for the disk masers, we find \[M_{d}\approx 110\times 10^{3}\,M_{\odot}\,\left(\frac{M_{bh}}{17\times 10^{6}\,M_{\odot}}\right)\left(\frac{Q}{1.5}\right)^{-1}\left(\frac{v_{c}}{2.6\,\mathrm{km\,s^{-1}}}\right)\left(\frac{v_{\phi}}{300\,\mathrm{km\,s^{-1}}}\right)^{-1}\,. \tag{7}\] Here, we have scaled the characteristic speed, \(v_{c}\), to the best-fit error floor (Table 6), which is close to the sound speed in warm molecular clouds. We find that the disk mass inferred from the stability arguments is \(\lesssim 1\%\) of the central mass, and the effect on the rotation curve should be small. The exact effect on the rotation curve depends on the radial profile of the surface mass density, \(\Sigma(r)\), which is unknown. Assuming a Mestel disk profile for simplicity, \(\Sigma(r)\propto r^{-1}\) (Mestel, 1963), the characteristic circular speed due only to disk self-gravitation is \(\lesssim 20\) km s\({}^{-1}\). Combined in quadrature with a characteristic rotation speed \(v_{\phi}=300\) km s\({}^{-1}\), the effect on the rotation curve is \(\lesssim 0.2\%\).

Figure 15: The illumination pattern of the central accretion disk on the surrounding molecular accretion disk in the anisotropic heating model. In this model, the central accretion disk has inclination \(60^{\circ}\), and the polar axis projects onto the sky along PA \(11^{\circ}\). The color scale indicates the flux received by a cloud in the molecular accretion disk; fluxes are normalized to the flux received by a cloud located \(r=5\) mas (\(0.35\) pc) above the accretion disk (i.e., along the polar axis of the accretion disk). The central 5 mas has been masked for the purpose of illustration. The deprojected maser spot positions are plotted as in Fig. 13. N.B. this illustration does not include radiative transfer effects within the disk.

Based on this estimate of disk mass (Eq. 7) and the geometry of the disk defined by the H\({}_{2}\)O maser spots, the mean gas density within the molecular disk is \(\bar{\rho}\approx 7\times 10^{-14}\,\mathrm{kg\,m^{-3}}\) (\(\bar{n}(\mathrm{H_{2}})\approx 9\times 10^{7}\,\mathrm{cm^{-3}}\)). For comparison, H\({}_{2}\)O masers trace molecular gas with density \(n(\mathrm{H_{2}})\approx 10^{8}\) - \(10^{11}\,\mathrm{cm^{-3}}\) (Kylafis & Norman, 1987, 1991; Neufeld et al., 1994; Moran et al., 1995; Neufeld, 2000), or \(\rho\approx 3\times 10^{-13}\) to \(3\times 10^{-10}\,\mathrm{kg\,m^{-3}}\). The inferred density contrast between the arm and the disk average is \(\rho_{A}/\rho_{d}\gtrsim 4\), comfortably within the range of contrasts produced in swing amplification studies (Toomre, 1981; Maoz, 1995).
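The scaling of Eq. 7 and the implied mean density are easy to check numerically. The sketch below uses the fiducial values quoted above and a crude uniform-cylinder geometry (radius \(\sim 1\) pc and scale height \(\sim 0.07\) pc, motivated by Fig. 14), so it recovers the quoted \(\bar{\rho}\) only to within a factor of a few.

```python
import numpy as np

M_SUN = 1.989e30   # kg
PC = 3.086e16      # m

def disk_mass_msun(m_bh=17e6, q=1.5, v_c=2.6, v_phi=300.0):
    """Eq. 7: disk mass (Msun) from the Toomre-Q stability condition."""
    return 110e3 * (m_bh / 17e6) * (1.5 / q) * (v_c / 2.6) * (300.0 / v_phi)

m_d = disk_mass_msun()                       # ~1.1e5 Msun, <1% of the central mass
volume = np.pi * (1.0 * PC)**2 * (2 * 0.07 * PC)
rho_bar = m_d * M_SUN / volume               # ~2e-14 kg/m^3 with this crude geometry
print(f"M_d ~ {m_d:.2e} Msun, rho_bar ~ {rho_bar:.1e} kg/m^3")
```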
Based on this simplified stability analysis, the density enhancement produced by the spiral arms may have been necessary to generate H\({}_{2}\)O masers in the molecular accretion disk of NGC 1068. ## 5 Comparison with Infrared Continuum Images The nucleus of NGC 1068 has been observed at roughly mas resolution in the near-infrared and mid-infrared with the Very Large Telescope Interferometer (VLTI): \(2.2\,\mu\)m continuum with the GRAVITY instrument (GRAVITY Collaboration et al., 2020), and \(3.7\)-\(12\,\mu\)m with the MATISSE instrument (Gamez Rosas et al., 2022). The infrared continuum images are provided in Fig. 16. The absolute astrometric calibration for these images is imprecise, so the comparison with radio and mm-wave data has relied on interpretation. The GRAVITY image shows a handful of compact sources that roughly align with the orientation of the disk masers. Brighter sources appear to form a partial ring with radius \(3.5\) mas (\(0.25\) pc). In their preferred interpretation, GRAVITY Collaboration et al. (2020) model near-infrared emission as arising from hot, dusty clouds surrounding the central engine and located near the dust sublimation radius. In that registration, the hot dust and masers are coplanar, and the hot dust traces the inner edge of the molecular accretion disk. Turning to the MATISSE data, Gamez Rosas et al. (2022) derived astrometry based on a cross-correlation between the \(12\,\mu\)m continuum image and the ALMA 256 GHz continuum image of Impellizzeri et al. (2019). The astrometric precision is about 3 mas across all of the MATISSE bands. Putting some confidence in this alignment, Gamez Rosas et al. (2022) found a morphological agreement between the brightest emission on the MATISSE images and resolved northern and northeastern extensions on the 22 GHz continuum image. Given the proximity in wavelength, the compact sources on the \(2.2\mu\)m GRAVITY image likely associate with the brightest emission on the \(3.7\mu\)m MATISSE image; the GRAVITY image shown in Fig. 16 was registered based on this assumption. To place the positions of the maser spots measured by GG97, Gamez Rosas et al. (2022) assumed that the 22 GHz continuum peak marks the kinematic center of the maser disk. In their registration, the masers fall along a dark lane in the \(3.7\,\mu\)m image, and the brighter infrared emission is located mainly north of the maser disk. Their interpretation is that the mid-infrared continuum traces warm dust in the molecular outflow. Because they derive from the same data, our new observations provide a precise registration of the maser spots relative to the 22 GHz continuum (Fig. 5). We find that, counter to the registration of Gamez Rosas et al. (2022), the radio continuum peak is actually located southwest of the kinematic center of the maser disk (Sec. 2.1; Fig. 5). As shown in Fig. 16, the corrected placement of the H\({}_{2}\)O maser spots more closely aligns the H\({}_{2}\)O maser disk with the GRAVITY near-infrared sources as originally proposed by GRAVITY Collaboration et al. (2020) but shifts the infrared sources to a larger distance from the kinematic center. The inferred dust temperatures are consistent with this registration. Based on modeling of the GRAVITY-MATISSE infrared SEDs, the dust temperatures of the brightest infrared sources are \(\sim 700\) K, well below the sublimation temperature for graphite (\(T_{sub}\approx 1800\) K) and silicate (\(T_{sub}\approx 1500\) K) grains (Gamez Rosas et al., 2022). 
Scaling from the sublimation radii estimates of Netzer (2015), the expected grain temperature is \[T_{gr}\approx 1000\,\left(\frac{L_{AGN}}{10^{45}\,\mathrm{ergs\ s^{-1}}}\right)^{(1/5.2)}\left(\frac{f(\theta)\times 0.8\,\mathrm{pc}}{R}\right)^{(1/2.6)}\mathrm{K}, \tag{8}\] where \(L_{AGN}\) is the (unknown) luminosity of the AGN, \(R\) is the distance between the AGN and the dusty cloud, \(\theta\) is the polar angle between the accretion disk axis and the dusty cloud, and \(f(\theta)\) is a correction for the anisotropy of the AGN radiation field. For an isotropic source, \(f(\theta)=1\), and for a thin accretion disk with a Thomson-thick atmosphere, \(f(\theta)=\left[\cos\theta(1+2\cos\theta)/3\right]^{1/2}\) (cf. Eqn. 5). From the anisotropic heating model presented in Sec. 4.7, \(\theta\approx 60^{\circ}\) in this region of the molecular accretion disk, and \(f(\theta)\approx 0.58\). The predicted dust temperature at \(R=0.8\) pc is \[T_{gr}\approx 800\,\left(\frac{L_{AGN}}{10^{45}\,\mathrm{ergs\ s^{-1}}}\right)^{(1/5.2)}\left(\frac{f(\theta)}{0.58}\right)^{(1/2.6)}\left(\frac{0.8\,\mathrm{pc}}{R}\right)^{(1/2.6)}\mathrm{K}. \tag{9}\] Dust temperatures \(T_{gr}=700\) K result for \(L_{AGN}\approx 5\times 10^{44}\,\mathrm{ergs\ s^{-1}}\), comfortably within the wide range of estimates for NGC 1068 (see the discussion in GRAVITY Collaboration et al., 2020). We caution that this estimate carries many assumptions and is not intended to be a precise measure of the (hidden) AGN luminosity. Rather, the result shows that the dust temperatures associated with the brightest VLTI sources are consistent with \(R\approx 0.8\) pc and the astrometric registration shown in Fig. 16. We conclude that the brightest infrared sources on VLTI images trace warm dust in the molecular accretion disk at roughly the orbital radii of the brightest H\({}_{2}\)O maser sources. Fainter infrared continuum sources are found north and south of the molecular disk and are probably associated with nuclear outflow (cf. Gallimore et al., 2016).

Figure 16: Comparison of HSA nuclear continuum and maser positions with VLTI infrared images. In both panels, the maser spots are plotted as filled dots color-coded by recessional velocity as in Fig. 3. The infrared beams are shown as blue-filled ellipses in the lower right corner. _Left panel:_ In reverse grayscale, the MATISSE 3.7 \(\mu\)m image from Gamez Rosas et al. (2022). The 22 GHz continuum is plotted as magenta contours; the contour levels are identical to Fig. 5. _Right panel:_ In inverse grayscale, the GRAVITY 2.2 \(\mu\)m image from GRAVITY Collaboration et al. (2020). The spiral arms model for the maser kinematics is plotted as magenta and cyan curves. The astrometry of the infrared images is based on a cross-correlation between the ALMA 256 GHz continuum image (Impellizzeri et al., 2019) and the MATISSE 12 \(\mu\)m continuum image (Gamez Rosas et al., 2022). Since the masers and 22 GHz continuum are produced from the same data, the relative astrometry is arbitrarily precise.
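A quick numerical check of Eq. 9 (an illustration only; \(L_{AGN}\) is unknown, and the default value below is simply the fiducial luminosity quoted above):

```python
def grain_temp_K(L_agn=5e44, R_pc=0.8, f_theta=0.58):
    """Eq. 9: predicted grain temperature at distance R_pc from the AGN."""
    return (800.0 * (L_agn / 1e45) ** (1.0 / 5.2)
                  * (f_theta / 0.58) ** (1.0 / 2.6)
                  * (0.8 / R_pc) ** (1.0 / 2.6))

print(round(grain_temp_K()))   # ~700 K, matching the SED-derived temperatures
```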
Extinction poses a challenge for the Gamez Rosas et al. (2022) registration. To promote H\({}_{2}\)O maser emission, the mean molecular gas density is \(n(\mathrm{H_{2}})\gtrsim 10^{8}\,\mathrm{cm^{-3}}\) (Kylafis & Norman, 1987, 1991; Neufeld et al., 1994; Moran et al., 1995; Neufeld, 2000). For a disk with 1 mas (0.07 pc) scale-height (Fig. 14) and inclination \(\sim 76^{\circ}\) (Table 6), the characteristic path-length through the disk to the midplane is about 0.3 pc. The inferred column density to the disk midplane is therefore \(N(\mathrm{H_{2}})\gtrsim 8\times 10^{25}\,\mathrm{cm^{-2}}\), corresponding to a K-band extinction of about 9 mag (Cardelli et al., 1989; Guver & Ozel, 2009, and references therein). As a result, we should not be able to see near-infrared continuum directly associated with H\({}_{2}\)O masers. We can see two ways to reconcile this issue. First, the extinction might be patchy, and we see near-infrared continuum leaking through gaps in maser clouds. Alternatively, the proposed alignment needs to be adjusted by a few mas, perhaps placing the infrared continuum north or northeast of the maser disk, in which case the infrared continuum might trace extraplanar dust associated with the molecular outflow.

For the registration of the 22 GHz and infrared data presented here, there is a remarkable asymmetry between regions northwest and southeast of the kinematic center: the infrared continuum is brighter to the northwest, as is the H\({}_{2}\)O maser emission. At least part of the asymmetry might be caused by anisotropic heating, as discussed in Sec. 4.7. To illustrate, Fig. 17 shows the model illumination pattern of Fig. 15 projected onto the sky with the inclination and position angle of the H\({}_{2}\)O maser disk (Table 6). The 3.7 \(\mu\)m continuum peak appears to be associated with the more illuminated region of the molecular disk at the northwest. A fainter extension to the northeast and a faint, isolated source to the southwest are more closely aligned with the molecular disk axis and perhaps trace dust in the molecular outflow. However, the anisotropic heating model predicts an infrared continuum peak to the southeast assuming that molecular gas is distributed at least roughly symmetrically in disk azimuth (i.e., allowing for spiral arms or other disk structures). This result supports the argument that the southeast region of the molecular accretion disk is selectively obscured by colder molecular gas in the torus on the pc scale (Gamez Rosas et al., 2022).

Figure 17: The illumination pattern of the central accretion disk on the surrounding molecular accretion disk in the anisotropic heating model as in Fig. 15 but projected onto the sky. The MATISSE 3.7 \(\mu\)m continuum image is plotted as cyan contours; the contour levels are 0.07, 0.14, 0.26, 0.51, and 0.97 of the infrared continuum peak.

## 6 Constraints on Magnetic Fields

The filamentary substructures observed throughout the nuclear H\({}_{2}\)O maser disk of NGC 1068, particularly the parallel filaments of the R4 masers, suggest the influence of ordered magnetic fields with size scales comparable to the disk radius (compare with star-forming regions, for example, Myers, 2009). We note in passing that the maser filaments qualitatively resemble molecular outflow features in the magnetic accretion disk model proposed by Emmering et al. (1992). This interpretation requires that the molecular gas be partially ionized, as predicted by X-ray heating models for H\({}_{2}\)O megamaser emission (Neufeld et al., 1994; Neufeld & Maloney, 1995). The nearly parallel filaments of the R4 region suggest that there are ordered magnetic fields threading the molecular disk and spanning 90\({}^{\circ}\) in disk azimuth (see Fig. 13).
Gas motions will tend to stretch and disrupt filaments, but the organization of the R4 filaments hints that an equilibrium between gas motion and magnetic tension has been achieved; for the purpose of order of magnitude estimation, \(\rho v_{c}^{2}\approx B^{2}/8\pi\), where \(\rho\) is the mass density of the gas, \(v_{c}\) is a characteristic speed of the gas relative to the magnetic field lines, and \(B\) is the characteristic magnetic field strength. If the magnetic field lines are static as viewed by a distant observer, magnetic tension forces will introduce drag on the rotating disk and cause the filaments to curve in the direction of rotation and amplify the azimuthal component of the magnetic field (see, e.g., Bonanno and Urpin, 2007). Since the filaments show no strong curvature (except perhaps among the R3 masers), it seems more likely that the larger-scale magnetic field rotates with the molecular disk. In this case, turbulence introduces relative motion between the molecular gas and the large-scale magnetic field lines. The best-fit error floor, \(\delta V=2.6\) km s\({}^{-1}\) (Table 6), provides an estimate of turbulent motion in the molecular disk. For equilibrium between turbulent motions and magnetic tension, \[B_{ls}\approx 1.6\ {\rm mG}\,\left(\frac{v_{c}}{2.6\,{\rm km\,s^{-1}}}\right)\left(\frac{x_{e}}{10^{-5}}\right)^{1/2}\left(\frac{\rho}{\rho_{max}}\right)^{1/2} \tag{10}\] where \(B_{ls}\) refers to the characteristic magnetic field strength of the large-scale magnetic field, \(x_{e}\) is the ionization fraction of the molecular gas (cf. Kylafis and Norman, 1987; Neufeld et al., 1994) and maser emission is quenched at densities exceeding \(\rho_{max}=3.2\times 10^{-10}\) kg m\({}^{-3}\) (Kylafis and Norman, 1987, 1991; Neufeld et al., 1994; Moran et al., 1995). For comparison, based on infrared polarimetry, López-Rodríguez et al. (2015) find that the hottest dust grains are embedded in a magnetic field of strength \(B_{ls}\gtrsim 4\) mG. The magnetic field strength drops below \(B_{ls}\sim 1\) mG outside \(r=3\) pc (from ALMA continuum polarimetry; López-Rodríguez et al., 2020). We conclude that the strength of the magnetic fields on sub-pc scales is likely sufficient to stabilize the field lines against turbulent motions of a few km s\({}^{-1}\). Interestingly, it appears that the orientation of the projected magnetic field lines rotates from nearly poloidal to toroidal: PA \(\sim 0^{\circ}\) in the maser disk (Fig. 6) to PA \(\sim 105^{\circ}\) in the outer obscuring "torus" (López-Rodríguez et al., 2015, 2020). Zeeman-induced polarization provides an independent check on this magnetic field estimate, but the expected polarization is weak. For H\({}_{2}\)O masers, the maximum fractional circular polarization produced by Zeeman-induced hyperfine transitions is \[\frac{|V_{max}|}{I}=\frac{\mathcal{A}B_{||}}{\Delta v}, \tag{11}\] where \(|V_{max}|\) is the absolute Stokes-V peak flux density, \(I\) is the Stokes-I peak flux density, \(\mathcal{A}\approx 0.02\) km s\({}^{-1}\) gauss\({}^{-1}\) for this transition, \(\Delta v\) is the Stokes-I linewidth, and \(B_{||}\) is the strength of the magnetic field component parallel to the sight line (Fiebig and Güsten, 1989; Nedoluha and Watson, 1992). For the field strengths and turbulent velocities discussed above, \(|V|/I\gtrsim 0.0012\%\). The brightest single maser spot in our data has \(I=260\) mJy, so the prediction is \(|V|\gtrsim 3~\mu\)Jy.
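To make the numbers above reproducible, here is a small unit check (our script, in SI units). Identifying the relevant inertia with the ionized component, \(\rho_i = x_e\rho\), is our reading of Eq. 10; the ionization fraction \(x_e=10^{-5}\) is the normalization used there.

```python
# Unit check of Eq. 10 (equipartition) and Eq. 11 (Zeeman), in SI units.
import numpy as np

mu0 = 4e-7 * np.pi      # vacuum permeability [T m A^-1]
rho_max = 3.2e-10       # maser quenching density [kg m^-3]
x_e = 1e-5              # assumed ionization fraction (Eq. 10 normalization)
v_c = 2.6e3             # turbulent speed from the error floor [m s^-1]

# B^2/(2 mu0) ~ (1/2) (x_e rho) v_c^2  =>  B = v_c sqrt(mu0 x_e rho)
B = v_c * np.sqrt(mu0 * x_e * rho_max)     # tesla
print(B * 1e7, "mG")                       # ~1.65 mG (1 mG = 1e-7 T), cf. 1.6 mG

# Eq. 11: fractional circular polarization and the predicted Stokes-V
A = 0.02                                   # km/s per gauss for this transition
frac = A * (B * 1e4) / 2.6                 # B converted to gauss; dv = 2.6 km/s
print(frac)                                # ~1.2e-5, i.e. |V|/I ~ 0.0012%
print(260.0 * frac * 1e3, "uJy")           # ~3 uJy for the 260 mJy spot
```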
Unfortunately, the predicted Stokes-V falls below the detection limit on the channel maps, roughly 5 mJy beam\({}^{-1}\) (see Sec. 2.2). Accepting that our predictions are only rough estimates for the characteristic magnetic field strengths in the molecular disk, we produced Stokes-V maps for the redshifted channels, but no significant signal was detected. Formally, the non-detection places an upper limit on the magnetic field strength, \(B_{||}\lesssim 2.5\) gauss. Even if the magnetic fields are much greater than estimated here, however, future searches for Zeeman-induced circular polarization in NGC 1068 will likely be limited by confusion. The maser spots are crowded in the sky and in recessional velocity (Figs. 3 and 8). As a result, the integrated Stokes-V signal may be suppressed by overlapping and oppositely polarized Zeeman features in the spectrum. Unlike the maser disk of NGC 4258 (Herrnstein et al., 1998), NGC 1068 has no strong, isolated H\({}_{2}\)O maser features to search for Zeeman-induced circular polarization. On the other hand, linear polarization might probe at least the orientation of the magnetic fields. In particular, for propagation angles \(\sim 90^{\circ}\), linear polarization fractions approaching 50% or greater are expected (Lankhaar and Vlemmings, 2019). Unfortunately, our current data do not include linear polarization information, but follow-up observations with full polarization may provide the best constraints on the magnetic field properties of the molecular accretion disk of NGC 1068.

## 7 The Kinematics of the Jet Masers

The jet masers at continuum component C are blueshifted relative to the systemic velocity of the host galaxy. The surprising result is that they are displaced about 5 mas south of the 5 GHz continuum peak (Fig. 4). In Fig. 18, we compare the location of the jet masers with ALMA maps of 256 GHz continuum and the HCN (\(J=3\to 2\)) line (ALMA data from Impellizzeri et al., 2019). The HCN emission resolves into a few compact sources with the brightest emission centered \(\sim 30\) mas (2 pc) west of the jet masers. For the purpose of discussion, we refer to the clumps of HCN line emission collectively as the component C molecular cloud complex. The east-west \(p\)-\(v\) diagram of the jet masers and molecular cloud complex is provided in Fig. 19. On this diagram, the jet masers show a U-shaped pattern, although the C3 group includes a highly blueshifted spot that overlaps faint HCN emission. The molecular cloud complex spans a similar velocity range but shows additional emission closer to the systemic velocity. No significant HCN emission appears east of the C4 masers. Assuming that the masers trace the molecular gas on the near side of some substructure within the molecular cloud complex, the \(p\)-\(v\) diagram is consistent with that expected from an expanding ring viewed nearly edge-on. In this scenario, the C1 and C4 masers move perpendicular to the sight line (that is, C1 moves west and C4 east). Consistent with this picture, the C1 and C4 masers have recessional velocities \(\sim 1025\) km s\({}^{-1}\), centered on the mean recessional velocity of the molecular cloud complex. The C2 and C3 masers arise from clumps on the near side of the expanding ring and approach the observer at the speed of expansion, roughly 60 km s\({}^{-1}\) relative to the molecular cloud complex. The C3 masers show the highest relative speeds, with recessional velocities reaching 100 km s\({}^{-1}\) blueshifted relative to the molecular cloud complex.
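For intuition, here is a minimal sketch (ours, not the paper's code) of the position-velocity pattern of an edge-on expanding ring, using the ring radius and expansion speed quoted above; the mean velocity of the cloud complex is set to the \(\sim 1025\) km s\(^{-1}\) of the C1/C4 masers as an assumed placeholder.

```python
# Sketch of the expanding-ring p-v pattern (cf. the cyan ellipse of Fig. 19)
# and the kinematic age quoted in the text; illustrative values only.
import numpy as np

PC_KM = 3.086e13          # kilometers per parsec
R_mas, R_pc = 5.0, 0.35   # ring radius (diameter 10 mas)
v_exp = 60.0              # expansion speed [km/s]
v_cc = 1025.0             # assumed mean velocity of the cloud complex [km/s]

phi = np.linspace(0.0, 2.0 * np.pi, 361)   # azimuth around the ring
x = R_mas * np.sin(phi)                    # east-west offset [mas]
v = v_cc - v_exp * np.cos(phi)             # line of sight; near side blueshifted
# Tangent points (phi = +/- 90 deg) sit at v ~ v_cc, like C1 and C4; clumps on
# the near side (phi ~ 0) appear blueshifted by ~v_exp, like C2 and C3.

age_yr = (R_pc * PC_KM / v_exp) / 3.156e7  # constant-speed kinematic age
print(round(age_yr, -2))                   # ~5700 yr, i.e. the ~6000 yr quoted
```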
The continuum morphology and the jet maser kinematics might be explained by the impact of the radio jet on the molecular cloud complex in component C (cf. Gallimore et al., 1996). In this scenario, synchrotron continuum emission is enhanced at the resulting shock front; i.e., the 5 GHz continuum peak marks the shock front proper. The post-shock plasma expands, driving a second shock front into the surrounding molecular clouds. Assuming a constant expansion velocity, the kinematic age of the maser ring is \(0.35\) pc/\(60\) km s\({}^{-1}\approx 6000\) yr. As the maser ring expanded to its current diameter, the jet shock front has been advancing roughly northward, away from the central engine. Projected onto the sky, the 5 GHz continuum peak is about 5 mas (0.4 pc) north of the C2 masers. Therefore, assuming that the northward displacement is due to the advancement of the jet shock front over the age of the expanding ring, the average advancement speed is also \(\approx 60\) km s\({}^{-1}\). Relevant to this interpretation, May and Steiner (2017) proposed that the jet shock powers a secondary wind observed in infrared [Fe ii], [Si vi], and H\({}_{2}\) line emission throughout the narrow-line region. The characteristic outflow speeds are \(\sim 140\) km s\({}^{-1}\) over 100 pc scales, leading to a kinematic age \(\approx 7\times 10^{5}\) years. If these interpretations are correct, the jet masers trace dynamically young molecular clumps that accelerate away from the jet shock and feed the larger-scale and dynamically older outflow. We speculate that the higher-velocity C3 masers may trace accelerated gas located farther from the jet maser ring (i.e. closer to the observer than the expanding ring). It will be interesting to monitor the component C / jet maser region to look for acceleration and the generation of new maser spots as the source evolves.

## 8 Conclusions

Based on new HSA observations of the H\({}_{2}\)O megamasers of NGC 1068, we have reinterpreted the kinematics of the disk masers and resolved the kinematics of the jet masers. We list the main results and conclusions.

1. Using reverse phase calibration on the phase reference source, we obtain astrometric positions of the maser spots and 22 GHz continuum with 0.3 mas precision. By comparing the positions of the maser spots with astrometric images of the 5 GHz continuum, we confirm the close agreement between the morphology of the S1 continuum and the distribution of the disk maser spots. Since it shares the phase reference solutions of the maser spots, the 22 GHz continuum also confirms the positional agreement.
2. Based on the astrometric alignment proposed by Gamez Rosas et al. (2022), the brightest H\({}_{2}\)O masers are associated with the brightest infrared sources on VLTI images. It seems likely that these infrared sources trace warm dust in the molecular accretion disk or its associated outflow. The dust temperatures are consistent with an AGN luminosity \(L_{AGN}\approx 5\times 10^{44}\,\rm ergs\,s^{-1}\).
3. In contrast to other megamaser disks, the \(p\)-\(v\) diagram of the disk masers shows a peculiar curve between the extrema of the recessional velocities. The masers appear to sample molecular gas in two symmetric spiral arms with pitch angle \(\theta_{p}=5^{\circ}\).
4. Based on a deprojection of the spiral arms model, the disk masers are located in an annulus between roughly 5 and 15 mas (0.35 to 1 pc) from the kinematic center.
5. The disk masers preferentially fall in opposite quadrants of the molecular accretion disk. We speculate that the masers trace anisotropically heated regions of the molecular accretion disk.
6. The rotation curve is consistent with Keplerian rotation. The extrapolated rotation curve agrees well with the observed tangential velocity curve of HCN (\(J=3\to 2\)), indicating that the rotation curve remains Keplerian out to \(r\sim 100\) mas (7 pc).
7. The inferred mass inside \(r=5\) mas (0.35 pc) is \(17\times 10^{6}\,M_{\odot}\).
8. Based on disk stability arguments, the mass of the molecular disk is \(\approx 110\times 10^{3}\,M_{\odot}\).
9. The velocity error floor of the spiral disk model is \(\delta V=2.6\) km s\({}^{-1}\), much lower than the previous estimates of the turbulent velocity. We note that this value is comparable to the sound speed in warm molecular gas.
10. On mas scales, the disk masers arrange into linear, filamentary structures, suggesting the influence of pc-scale magnetic fields with characteristic field strengths \(\gtrsim 1.6\) mG.
11. The jet masers appear to trace an expanding ring with characteristic expansion velocity \(\sim 60\) km s\({}^{-1}\). The kinematic age is about 6000 years.
12. The radio continuum source C is displaced about 5 mas north of the jet masers. We propose that the continuum source traces an advancing shock as the radio jet penetrates the molecular cloud complex near component C. Assuming that the shock front has advanced northward from the jet masers over 6000 years, its advancement speed is also \(\sim 60\) km s\({}^{-1}\).

Figure 18: The location of the jet masers (blue dots) relative to the peak of the VLBA 5 GHz continuum (white dot), the ALMA 256 GHz continuum image (contours), and integrated HCN (\(J=3\to 2\)) emission (colorscale). The ALMA restoring beam is shown as the cyan ellipse on the lower right. The contour levels are 0.019, 0.037, 0.074, 0.15, and 0.30 mJy beam\({}^{-1}\).

Figure 19: The position-velocity diagram of the jet masers (color-filled circles) and HCN (\(J=3\to 2\)) emission (colorscale) at component C. The positional slice is east-west through the middle of the jet masers. The spatial resolution of the HCN image is 20 mas. The dashed line indicates the systemic velocity of the host galaxy. The cyan ellipse illustrates the pattern expected for an expanding ring of diameter 10 mas and expansion speed 60 km s\({}^{-1}\).

The authors thank the participants of the TORUS 2018 workshop (Puerto Varas, Chile) for the helpful comments and conversations that motivated this work. We also thank the anonymous referee for their helpful and constructive review that greatly improved the manuscript. This paper makes use of data obtained from the NSF's VLBA and VLA, operated by the National Radio Astronomy Observatory. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. The GBT is part of the Green Bank Observatory, which is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.

Facilities: VLBA, VLA, GBT, HSA. Software: AIPS (Wells, 1985; Greisen, 1990, 2003), astropy (Astropy Collaboration et al., 2013), emcee (Foreman-Mackey et al., 2013), DIFMAP (Shepherd, 1997), PyDREAM (Shockley et al., 2017).

## Appendix A Subgroup analysis and results

Close-ups of the sky distribution of the individual disk maser groups are provided in Figures 6 and 20–22.
Declination-velocity diagrams are included to highlight north-south velocity gradients within the maser groups. Group R4, in particular, breaks up into remarkable parallel filaments (Figure 6). We used k-means clustering (Forgy, 1965; MacQueen, 1967) to classify individual substructures for analysis. This clustering technique does not determine the optimal number of clusters to assign; instead, we iteratively varied the number of clusters and inspected the cluster assignments by eye. The goal was to find the minimum number of clusters required to separate clear groupings in position and velocity. Using this approach, we identified 34 subgroups and labeled them by their parent group name and a lowercase suffix (e.g., R2a, R2b, etc.). No subgroups were found for groups R1, B3, and B4. We calculated the properties of the subgroups using flux-weighted moments of the sky coordinates, and we performed a linear fit to the major axis offset and recessional velocities to estimate the mean velocity gradient. Table 8 lists the summary properties of the subgroups. Uncertainties were determined using a Monte Carlo method. In each Monte Carlo trial, the spot coordinates and flux densities were drawn from a normal distribution scaled to the measurement uncertainties. The flux-weighted moments and mean velocity gradients were recalculated for each trial. After \(10^{4}\) trials, the uncertainties of the properties of the subgroups were estimated using the standard deviation over all trials.

## Appendix B Markov chain Monte Carlo analysis

We used a forward modeling approach and Markov Chain Monte Carlo (MCMC) analysis to fit kinematic models to the disk maser spot data. For simplicity, the masers are assumed to occupy a flat disk with inclination \(i\) and position angle \(\Omega\). Given a trial set of orbital parameter values, we calculated the orbit in cylindrical coordinates in the disk frame \((r,\phi,z=0)\) using discrete steps in azimuth, \(\delta\phi=0\fdg 1\). The models project onto the sky according to \[\begin{split} X&=X_{0}+r\left(\sin\Omega\cos\phi-\cos\Omega\cos i\sin\phi\right),\\ Y&=Y_{0}+r\left(\cos\Omega\cos\phi+\sin\Omega\cos i\sin\phi\right),\,\text{and}\\ V&=v_{\text{sys}}+v_{r}\,\sin i\sin\phi+v_{\phi}\,\sin i\cos\phi,\end{split}\tag{B1}\] where, following the notation of Humphreys et al. (2013), \(X\) and \(Y\) are the sky coordinates, measured as offset angles; \(V\) is the recessional velocity; \(X_{0}\) and \(Y_{0}\) are the sky coordinates of the kinematic center; \(v_{\text{sys}}\) is the systemic recessional velocity; and \(v_{r}\) and \(v_{\phi}\) are disk frame orbital speeds in the radial and azimuthal directions, respectively. After projecting the model to the sky coordinates, each maser spot was matched to the point on the model orbit that minimizes the Cartesian distance, \(\sqrt{(\Delta X/S_{X})^{2}+(\Delta Y/S_{Y})^{2}+(\Delta V/S_{V})^{2}}\), where \(S_{X}\), \(S_{Y}\), and \(S_{V}\) are measurement uncertainties; see Chen et al. (2020) for a similar approach to modeling maser kinematics.
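A minimal sketch of this projection-and-matching step might look as follows (our illustration, not the authors' pipeline); the position angle, orbital speed, systemic velocity, and demo data values are placeholders, not fitted values.

```python
# Sketch of the Eq. B1 projection and nearest-point matching (illustrative).
import numpy as np

def project_orbit(r, phi, v_r, v_phi, inc, Omega, X0=0.0, Y0=0.0, v_sys=1130.0):
    """Project disk-frame orbit samples (r, phi) onto the sky, following Eq. B1."""
    X = X0 + r * (np.sin(Omega) * np.cos(phi) - np.cos(Omega) * np.cos(inc) * np.sin(phi))
    Y = Y0 + r * (np.cos(Omega) * np.cos(phi) + np.sin(Omega) * np.cos(inc) * np.sin(phi))
    V = v_sys + v_r * np.sin(inc) * np.sin(phi) + v_phi * np.sin(inc) * np.cos(phi)
    return X, Y, V

def match_spots(model, data, sigma):
    """Index of the model point minimizing the scaled distance, per data spot."""
    (Xm, Ym, Vm), (Xd, Yd, Vd), (SX, SY, SV) = model, data, sigma
    d2 = ((Xd[:, None] - Xm) / SX[:, None])**2 + \
         ((Yd[:, None] - Ym) / SY[:, None])**2 + \
         ((Vd[:, None] - Vm) / SV[:, None])**2
    return np.argmin(d2, axis=1)

# Demo: a circular orbit sampled on the delta-phi = 0.1 deg grid of the text;
# inc = 76 deg follows Table 6, the remaining values are placeholders.
phi = np.radians(np.arange(0.0, 360.0, 0.1))
model = project_orbit(np.full_like(phi, 10.0), phi, 0.0, 330.0,
                      inc=np.radians(76.0), Omega=np.radians(-50.0))
data = (np.array([6.0]), np.array([2.0]), np.array([1250.0]))
sigma = (np.array([0.3]), np.array([0.3]), np.array([2.6]))
print(match_spots(model, data, sigma))
```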
The goodness of fit for a given set of model parameters \(\mathbf{\theta}\) is given by the unnormalized posterior probability, \[\log P(\mathbf{\theta}|\mathbf{Q})=-\frac{1}{2}\sum_{j}\left(\frac{(Q_{j}-\tilde{Q}_{j}[\mathbf{\theta}])^{2}}{S_{j}^{2}}+\log\left(2\pi S_{j}^{2}\right)\right)+\log P(\mathbf{\theta}),\tag{B2}\] where \(\mathbf{Q}\) is the vector of measurements (i.e., sky positions and recessional velocities), \(\mathbf{Q}=(Q_{1},Q_{2},\ldots)\); \(\tilde{Q}_{j}\) are the best-matching model coordinates; \(S_{j}\) is the measurement uncertainty of \(Q_{j}\); \(\mathbf{\theta}\) is the vector of model parameter values; and \(P(\mathbf{\theta})\) is the _prior_ probability, i.e., constraints on the model parameters prior to the model fit. In each iteration of the MCMC analysis, the parameters \(\mathbf{\theta}\) are randomly updated, \(\mathbf{\theta}_{k}\rightarrow\mathbf{\theta}_{k+1}\), and the updated parameters are accepted or rejected according to the Metropolis criterion (Metropolis et al., 1953). The azimuthal grid spacing of the discretized model introduces some small systematic uncertainty. For \(\delta\phi=0\fdg 1\), the projected model coordinates are spaced by \(\sim 0.015\) mas in the sky and by \(\sim 0.5\) km s\({}^{-1}\) in recessional velocity. However, this projected gridding in the sky is much finer than the size of individual subgroups (Table 8), and the velocity gridding is only slightly larger than the channel width but smaller than the velocity range spanned by individual subgroups. Therefore, the intrinsic substructure within the maser groups is likely to have a greater systematic impact on the fit. To this end, we estimate systematic uncertainties by introducing the parameters \(\delta X\), \(\delta Y\), and \(\delta V\), which are added in quadrature to the measurement uncertainties. These terms are called "error floors" in the terminology of Humphreys et al. (2013). Note that, with the inclusion of these error floors as parameters, the uncertainties \(S_{j}\) in Eq. B2 depend on the parameter values: \(S_{j}\to S_{j}(\mathbf{\theta})\).

For the expanding ring and elliptical orbit models, it is clear that the R1–R3 masers are not part of the common orbit proposed for the R4–B4 maser groups. Recessional velocities decrease with offset along the major axis (Fig. 8), suggesting that the R1–R3 masers trace the molecular gas outside the orbit(s) traced by the R4–B4 masers. For simplicity and consistency with earlier interpretations (GG97; Kumar, 1999; Huré, 2002; Lodato & Bertin, 2003), we model the R1–R3 masers as tracing the disk midline. As the midline of a flat (unwarped) disk would trace a line on the sky, we anticipate a poor fit to the R1 masers, since they do not line up with the R2 and R3 masers (Fig. 3).

Figure 22: Close-up of the B2–4 maser groups, plotted as in Fig. 6.

Accepting that some maser groups might not trace orbital motion but rather, say, outflow, we used a mixture model to account for potential outliers (see, e.g., Geweke, 2007; Hogg et al., 2010). Using the formalism commonly employed for mixture modeling, the orbit model intended to fit most of the maser spots is the "foreground" (fg), and the model for outliers is the "background" (bg).
As we have no prior knowledge of the kinematic behavior of potential outliers, we model the background as a three-dimensional Gaussian distribution in sky positions and recessional velocity. The use of a non-physical Gaussian background model has proven to be a robust method of identifying and separating outliers from the foreground model in the absence of any prior constraints on the background (Press, 1997; Hogg et al., 2010). By itself, the background model introduces six additional parameters: the means of the background coordinates \((X_{bg},Y_{bg},V_{bg})\) and their standard deviations (\(\delta X_{bg},\,\delta Y_{bg},\,\delta V_{bg}\)). The sums in the log posterior probability, Eq. B2, now include terms for both the foreground and background models, with the foreground terms weighted by \(P_{fg}\), the probability that any given data point belongs to the foreground, and the background terms weighted by \((1-P_{fg})\). Here, \(P_{fg}\) is an additional fitted parameter. Ideally, the model provides a good description of all of the data, so \(P_{fg}\approx 1\); however, we placed no prior constraints other than \(0\leq P_{fg}\leq 1\). In addition, the probability that any single maser spot belongs to the foreground model can be estimated _post hoc_ from the MCMC results (cf. Hogg et al., 2010). This estimate was used to identify the likely outliers for each kinematic model.

We used the MCMC code PyDREAM (Shockley et al., 2017) to perform the model fitting analysis. For each model, we ran PyDREAM with three parallel chains and \(N=2\times 10^{5}\) iterations. To evaluate convergence, we calculated the integrated autocorrelation time (IAT) for each chain of parameter values. The IAT is an estimate of the number of MCMC samples between uncorrelated samples in the chain; put another way, \(N\)/IAT is an estimate of the number of independent samples after accounting for autocorrelation within a chain of parameter values (Hogg & Foreman-Mackey, 2018). We used the function autocorr.integrated_time from the _emcee_ software package to calculate the IAT (Foreman-Mackey et al., 2013). We consider models with \(N/\mathrm{IAT}>100\) for all parameters to have converged. In practice, the models converged in a few thousand iterations and \(N/\mathrm{IAT}>1000\) for all parameters.

In Bayesian analysis, the marginal likelihood \(P(\mathbf{Q})\), sometimes called the _evidence_, is used for model selection. The marginal likelihood is the probability of observing the data \(\mathbf{Q}\) given the model, and the model with the greater marginal likelihood is preferred, as it better represents the data. We estimate the marginal likelihood using both the widely applicable Bayesian information criterion (WBIC), following the convention of Friel et al. (2017), and the thermodynamic integration estimator (TIE; Gelman & Meng 1998; Neal 2000). As shown in Table 3, both estimators consistently and clearly favor the spiral arms model for the disk masers.
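A compact sketch of the mixture likelihood and the IAT convergence test described above might look like this (our schematic; the per-datum foreground weighting follows Hogg et al. 2010, and autocorr.integrated_time is the library call named in the text):

```python
# Schematic mixture likelihood (fg orbit model vs. Gaussian bg) and IAT check.
import numpy as np
import emcee

def log_like_mixture(q, q_fg, s, q_bg, s_bg, p_fg):
    """Sum over data of log[ P_fg L_fg + (1 - P_fg) L_bg ], computed stably."""
    lg_fg = -0.5 * ((q - q_fg)**2 / s**2 + np.log(2.0 * np.pi * s**2))
    lg_bg = -0.5 * ((q - q_bg)**2 / s_bg**2 + np.log(2.0 * np.pi * s_bg**2))
    return np.logaddexp(np.log(p_fg) + lg_fg, np.log1p(-p_fg) + lg_bg).sum()

def converged(chain, n_min=100):
    """N/IAT > n_min for every parameter; chain shape (n_steps, n_walkers, n_params)."""
    iat = emcee.autocorr.integrated_time(chain, quiet=True)
    return bool(np.all(chain.shape[0] / iat > n_min))
```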
2310.18412
Homoclinic leaves, Hausdorff limits and homeomorphisms
We show that except for one exceptional case, a lamination on the boundary of a 3-dimensional handlebody H is a Hausdorff limit of meridians if and only if it is commensurable to a lamination with a 'homoclinic leaf'. This is a precise version of a philosophy called Casson's Criterion, which appeared in unpublished notes of A. Casson. Applications include a characterization of when a non-minimal lamination is a Hausdorff limit of meridians, in terms of properties of its minimal components, and a related characterization of which reducible self-homeomorphisms of the boundary of H have powers that extend to subcompressionbodies of H.
Ian Biringer, Cyril Lecuire
2023-10-27T18:08:27Z
http://arxiv.org/abs/2310.18412v1
# Homoclinic Leaves, Hausdorff Limits, and Homeomorphisms

###### Abstract.

We show that except for one exceptional case, a lamination on the boundary of a \(3\)-dimensional handlebody \(H\) is a Hausdorff limit of meridians if and only if it is commensurable to a lamination with a 'homoclinic leaf'. This is a precise version of a philosophy called Casson's Criterion, which appeared in unpublished notes of A. Casson. Applications include a characterization of when a non-minimal lamination is a Hausdorff limit of meridians, in terms of properties of its minimal components, and a related characterization of which reducible self-homeomorphisms of \(\partial H\) have powers that extend to subcompressionbodies of \(H\).

## 1. Introduction

Let \(H\) be a \(3\)-dimensional handlebody1 with genus \(g\geq 2\) and let \(S:=\partial H\). A simple closed curve \(m\) on \(S\) is called a _meridian_ if it bounds an embedded disk in \(H\) but not in \(S\). Equip \(S\) with an arbitrary hyperbolic metric, and consider the set of geodesic laminations on \(S\) with the Hausdorff topology. We refer the reader to [12] for more information on laminations.

Footnote 1: The body of this paper is written in greater generality, with the pair \((H,S)\) replaced by a compact, orientable, \(3\)-manifold \(M\) with hyperbolizable interior, together with an essential connected subsurface \(S\subset\partial M\) such that the multi-curve \(\partial S\) is incompressible in \(M\). However, everything we do is just as interesting in the handlebody case.

### Homoclinic leaves

In J.P. Otal's thesis [49] the following is stated; it is attributed to an unpublished manuscript of A. Casson.

**Statement 1.0.1** ('Casson's Criterion').: _A geodesic lamination on \(S\) is a Hausdorff limit of meridians if and only if it has a homoclinic leaf._

We call it a 'statement' here instead of a theorem because it is not true as written, as we'll see later on. However, the connection between homoclinic leaves and meridians has been well studied, partly motivated by this statement, see for example the papers [49, 35, 34, 31, 48, 26] and Long's earlier paper [36]. To define homoclinic, let \(\tilde{H}\) be the universal cover of \(H\), which is homeomorphic to a thickened infinite tree. A path \(\ell:\mathbb{R}\longrightarrow\tilde{H}\) is called _homoclinic_ if there are sequences \(s_{i},t_{i}\in\mathbb{R}\) such that \[|s_{i}-t_{i}|\to\infty,\text{ and }\sup_{i}d_{\tilde{H}}(\ell(s_{i}),\ell(t_{i}))<\infty.\] Here, distance in \(\tilde{H}\) is measured using the lift of any Riemannian metric on \(H\). Since \(H\) is compact, the choice of metric does not matter. A path \(\ell:\mathbb{R}\longrightarrow S\) is called _homoclinic_ if it has a lift to \(\partial\tilde{H}\) that is homoclinic as above, and a complete geodesic on \(S\) is called _homoclinic_ if it has a (possibly periodic, if the geodesic is closed) arc length parametrization that is homoclinic. As an example, any geodesic meridian \(m\) on \(S\) is homoclinic, for as \(m\) lifts to a simple closed curve in \(\partial\tilde{H}\), any lift of a periodic parameterization of \(m\) is also periodic, and therefore homoclinic. On the other hand, if \(\gamma\) is an essential simple closed curve that is not a meridian, any periodic parameterization of \(\gamma\) lifts to a properly embedded biinfinite path in \(\partial\tilde{H}\) that is invariant under a nontrivial deck transformation, and is readily seen to be non-homoclinic.
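To spell out the verification in the meridian example (our elaboration, using only the definition above): if the lift \(\tilde\ell\) of a periodic parametrization of \(m\) has period \(T>0\), one may take

```latex
% Worked check that a geodesic meridian is homoclinic: the lift of a periodic
% parametrization is itself periodic with some period T > 0, so setting
\[
  s_i = 0, \qquad t_i = iT, \qquad i = 1, 2, 3, \ldots
\]
% gives |s_i - t_i| = iT -> infinity, while
\[
  \sup_i \, d_{\tilde H}\bigl(\tilde\ell(s_i), \tilde\ell(t_i)\bigr)
  = d_{\tilde H}\bigl(\tilde\ell(0), \tilde\ell(0)\bigr) = 0 < \infty,
\]
% exactly as the definition of a homoclinic path requires.
```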
We should mention that the definition introduced above is not quite what Casson and Otal called 'homoclinic' (in French, 'homoclinique'), but rather what Otal calls 'faiblement homoclinique', or 'weakly homoclinic'. However, the definition above has been adopted in most subsequent papers. While some of the discussion below is incorrect if we use Casson's definition, it can all be modified to apply. In particular, the same counterexamples show that Statement 1.0.1 is still false using Casson's original definition. See §5.1.

In his thesis [49], Otal showed that any Hausdorff limit of meridians has a homoclinic leaf. (This statement was later extended by Lecuire [33] from handlebodies to more general 3-manifolds.) However, the converse is not true. First of all, any Hausdorff limit of meridians is connected, and there are disconnected laminations on \(S\) that have homoclinic leaves, e.g. the union of two disjoint simple closed curves, one of which is a meridian. There are also connected laminations with homoclinic leaves that are not Hausdorff limits of meridians. For example, let \(\lambda=m\cup\ell\) be a lamination with two leaves, where \(m\) is a nonseparating meridian and the two ends of \(\ell\) spiral around \(m\) in the same direction, but from opposite sides, as on the left in Figure 1.

Figure 1. Laminations on the boundary of a genus two handlebody that have a meridian \(m\) as a leaf, but are not Hausdorff limits of meridians.

Then \(\lambda\) has a homoclinic leaf, but it is not a Hausdorff limit of simple closed curves: any simple geodesic that approximates \(\ell\) closely is trapped and forced to spiral forever around \(m\), so cannot be closed. As another example, let \(\lambda\) be the lamination on the right in Figure 1, which has three leaves, a meridian \(m\) and two leaves spiraling onto it. Then \(\lambda\) is a Hausdorff limit of simple closed curves, but it is not a limit of meridians. Indeed, given a simple closed curve \(\mu\) on \(S\), an arc \(\alpha\subset\mu\) is called an _\(m\)-wave disjoint from \(m\)_ if it is homotopic rel endpoints in \(H\) to an arc of \(m\) and \(int(\alpha)\cap m=\emptyset\). Any meridian \(\mu\) that intersects \(m\) has an \(m\)-wave disjoint from \(m\): one looks for an arc of the intersection of a disk bounded by \(\mu\) with a disk bounded by \(m\) that is 'outermost' in the latter disk. However, no simple closed curve that is Hausdorff-close to \(\lambda\) has an \(m\)-wave disjoint from \(m\), so \(\lambda\) is not a limit of meridians. See the discussion after the statement of Theorem 7.1 for more details.

In both of these examples, the problem lies with the spiraling isolated leaves. One way to address this is as follows. We say that two laminations \(\mu_{1},\mu_{2}\) on \(S\) are _commensurable_ if they contain a common sublamination \(\nu\) such that for both \(i\), the difference \(\mu_{i}\setminus\nu\) is the union of finitely many isolated leaves. We say \(\mu_{1},\mu_{2}\) are _strongly commensurable_ if they contain a common \(\nu\) such that for both \(i\), the difference \(\mu_{i}\setminus\nu\) is the union of finitely many isolated leaves, none of which are simple closed curves. So, is Casson's Criterion at least true up to strong commensurability? It turns out the answer is still no: the lamination \(\lambda\) in Figure 3 contains a meridian as a leaf, but it is not strongly commensurable to a limit of meridians. Indeed, suppose that \(\lambda^{\prime}\) is a Hausdorff limit of meridians that contains \(\lambda\).
In each component \(T\subset S\setminus m\), there is a _unique_ homotopy class rel \(m\) of \(m\)-waves in \(T\). So \(\lambda^{\prime}\) contains a leaf \(\ell\) that either intersects \(T\) in an arc in this homotopy class, or is contained in \(T\) and is obtained by spinning an arc in this homotopy class around \(m\). In either case, \(\ell\) intersects transversely the component of \(\lambda\) contained in \(T\), a contradiction. One can also make similar examples of laminations that contain a meridian but are not even _commensurable_ to a Hausdorff limit of meridians, by replacing the two curves on either side of \(m\) in Figure 3 with minimal laminations that fill the two components of \(S\setminus m\).

Figure 2. Two meridians on a genus two handlebody. The two arcs of \(\mu\) that lie on the right side of \(m\) are \(m\)-waves.

It turns out, though, that this is basically the only counterexample. Let's say that a lamination \(\lambda\) on \(S\) is _exceptional_ if \(S\) has genus \(2\), there is a separating meridian \(m\) on \(S\) that is either disjoint from \(\lambda\) or is a leaf of \(\lambda\), and there are minimal sublaminations of \(\lambda\) that fill the two components of \(S\setminus m\).

**Theorem 1.1** (A weak Casson Criterion, see Theorem 7.1).: _If \(\lambda\) is a geodesic lamination on \(S\) that is not exceptional, then \(\lambda\) is strongly commensurable to a Hausdorff limit of meridians if and only if it is strongly commensurable to a lamination with a homoclinic leaf._

See Theorem 7.1 for a more general statement and for the proof. In our view, this is the strongest version of Casson's criterion that is likely to be true for arbitrary geodesic laminations. It may be, though, that the original Casson Criterion is true for _minimal_ laminations. Our methods, however, only work up to strong commensurability. For instance, if \(\lambda\) is minimal filling on \(S\) and contains a homoclinic leaf, to prove that \(\lambda\) is a limit of meridians one would have to ensure that the meridians produced in Lemma 5.8 do not run across any diagonals of the ideal polygons that are components of \(S\setminus\lambda\). The main tool in the proof of Theorem 1.1 is a complete characterization of the minimal laminations onto which the two ends of a homoclinic simple geodesic on \(S\) can accumulate.

**Theorem 1.2** (Limits of homoclinic geodesics, see Corollary 6.3).: _Suppose that \(h\) is a homoclinic simple biinfinite geodesic on \(S\) and that the two ends of \(h\) limit onto minimal laminations \(\lambda_{-},\lambda_{+}\subset S\). Then either_

1. _the two ends of_ \(h\) _are asymptotic on_ \(S\)_,_
2. _one of_ \(\lambda_{-},\lambda_{+}\) _is an intrinsic limit of meridians, or_
3. \(\lambda_{-},\lambda_{+}\) _are contained in incompressible subsurfaces_ \(S_{-},S_{+}\subset S\) _that bound an essential interval bundle_ \(B\subset H\) _through which_ \(\lambda_{-}\) _and_ \(\lambda_{+}\) _are homotopic._

Here, a minimal lamination \(\lambda\subset S\) is _an intrinsic limit of meridians_ if it is strongly commensurable to the Hausdorff limit of a sequence of meridians that are contained in the smallest essential subsurface \(S(\lambda)\subset S\) containing \(\lambda\), see Proposition 5.11 for a number of equivalent definitions.

Figure 3. A lamination that contains a meridian as a leaf, but is not strongly commensurable to a Hausdorff limit of meridians.

We refer the reader to
Theorem 6.1 and Corollary 6.3 for more precise and more general versions of the above that apply both to homoclinic biinfinite geodesics, and also to pairs of 'mutually homoclinic' geodesic rays on \(S\). Examples of (3) are shown in Figure 4. On the left, \(\lambda_{-},\lambda_{+}\) are simple closed curves that bound an embedded annulus \(A\) in \(H\) and \(B\) is a regular neighborhood of \(A\). The geodesic \(h\) is homoclinic since the annulus \(A\) lifts to an embedded infinite strip \(\mathbb{R}\times[-1,1]\subset\tilde{H}\) and the two ends of a lift of \(h\) are asymptotic to \(\mathbb{R}_{+}\times\{-1\}\) and \(\mathbb{R}_{+}\times\{1\}\), respectively. On the right, we write \(H=Y\times[-1,1]\) where \(Y\) is a genus two surface with one boundary component. The laminations \(\lambda_{\pm}\) are minimal (in the picture they are drawn as 'train tracks') and fill \(Z\times\{\pm 1\}\), where \(Z\subset Y\) is a torus with two boundary components. Here, \(B=Z\times[-1,1]\). Interval bundles are essential to the study of meridians on handlebodies, and it is no surprise that they appear in Theorem 1.2. For example, subsurfaces bounding such interval bundles are the 'incompressible holes' studied by Masur-Schleimer [40], and interval bundles appear frequently in Hamenstädt's work on the disk set, see e.g. [20, 21, 22]. We note that the interval bundles \(B\) appearing in Theorem 1.2 may be twisted interval bundles over non-orientable surfaces, in which case \(\lambda_{-}=\lambda_{+}\) and \(S_{-}=S_{+}\). See §4.5 for background on interval bundles.

Figure 4. Examples of homoclinic geodesics (in grey) satisfying (3) in Theorem 1.2.

### Hausdorff limits via their minimal sublaminations

The previous two theorems suggest that when a lamination \(\lambda\) is a Hausdorff limit of meridians, one might expect to see minimal sublaminations of \(\lambda\) that are intrinsic limits of meridians, or pairs of components that are homotopic through essential interval bundles in \(H\). In fact, we show the following.
The point of Theorem 1.3, though, is that it reduces the characterization of Hausdorff limits of meridians to the minimal filling case. We note that it should be possible to replace the part of the proof of Theorem 1.3 that references homoclinic geodesics with arguments similar to those used in Masur-Schleimer's paper [40]. ### Extension of reducible maps to compression bodies As another application of our techniques, we consider extension properties of homeomorphisms \(f:S\longrightarrow S\). To motivate this, recall that a _subcompression body_ of \(H\) is a \(3\)-dimensional submanifold \(C\subset H\) with \(S\subset\partial C\) that is obtained by choosing a finite collection \(\Gamma\) of disjoint meridians on \(S\), taking a regular neighborhood of \(S\) and a collection of discs in \(H\) with boundary \(\Gamma\), and adding in any complementary components that are topological \(3\)-balls. We say \(C\) is obtained by _compressing_\(\Gamma\). We usually consider subcompression bodies only up to isotopy, and we allow the case that \(\Gamma=\emptyset\), in which case we recover the _trivial subcompression body_, which is just a regular neighborhood of \(S\). Two examples where \(\Gamma\) contains a single meridian are drawn in Figure 5. On the left, we compress a separating meridian \(m_{1}\) and obtain a subcompression body of \(H\) that has two 'interior' boundary components contained in \(int(H)\); these are the tori drawn in gray. On the right, we compress a nonseparating meridian \(m_{2}\) and obtain a subcompressionbody with a single torus interior boundary component. Note that compressing \(\Gamma=\{m_{1},m_{2}\}\) gives the same subcompression body as compressing \(m_{2}\), because we fill in complementary components that are balls. See SS2.4 for details. Biringer-Johnson-Minsky [2] showed2 that the attracting lamination \(\lambda_{+}\) of a pseudo-Anosov map \(f:S\longrightarrow S\) is strongly commensurable to a Hausdorff limit of meridians if and only if \(f\) has a nonzero power that extends to a homeomorphism of some subcompression body of \(H\). Both statements are 'genericity' conditions on \(f\) with respect to the structure of \(H\) and both were previously studied in the literature, see e.g. [47] and [32]. Footnote 2: Their condition was really that the measured lamination \(\lambda_{+}\) lies in the ‘limit set’ of the handlebody, i.e. that it is a limit of meridians in \(\mathcal{PML}(S)\), but that is equivalent to being strongly commensurable to a Hausdorff limit of meridians. See §2.8 for more information about the limit set. Here, we show that extension of powers of a homeomorphism \(f:S\longrightarrow S\) to subcompression bodies can be detected by looking at extension of powers of its components in the Nielsen-Thurston decomposition. More precisely, recall that \(f\) is _pure_ if there are disjoint essential subsurfaces \(S_{i}\subset S\), such that \(f=id\) on \(S_{id}:=S\setminus\cup_{i}S_{i}\), and where for each \(i\), if we set \(f_{i}:=f|_{S_{i}}\), then either 1. \(S_{i}\) is an annulus and \(f_{i}\) is a power of a Dehn twist, or 2. \(f_{i}\) is a pseudo-Anosov map on \(S_{i}\). It follows from the Nielsen-Thurston classification [16], that every homeomorphism of \(S\) has a power that is isotopic to a pure homeomorphism. **Theorem 1.4** (Partial extension of reducible maps, see Theorem 9.2).: _Let \(f:S\longrightarrow S\) be a pure homeomorphism. Then \(f\) has a power that extends to a nontrivial subcompressionbody of \(H\) if and only if either:_ 1. 
1. _there is a meridian in_ \(S_{id}\)_,_
2. _for some_ \(i\)_, the map_ \(f_{i}:S_{i}\longrightarrow S_{i}\) _has a power that extends to a nontrivial subcompression body of_ \(H\) _that is obtained by compressing a set of meridians in_ \(S_{i}\)_, or_
3. _there are (possibly equal) indices_ \(i,j\) _such that_ \(S_{i},S_{j}\) _bound an essential interval bundle_ \(B\) _in_ \(H\)_, such that some power of_ \(f|_{S_{i}\cup S_{j}}\) _extends to_ \(B\)_, and there is a compression arc_ \(\alpha\) _for_ \(B\) _whose interior lies in_ \(S_{id}\)_._

In (3), note that if (for simplicity) \(B\) is a trivial interval bundle, then \(f|_{S_{i}\cup S_{j}}\) extends to \(B\) exactly when \(f_{i},f_{j}\) become isotopic maps when \(S_{i},S_{j}\) are identified through \(B\). More generally, a power of \(f|_{S_{i}\cup S_{j}}\) extends to \(B\) when \(f_{j}\) is obtained from \(f_{i}\) by multiplying by a periodic map that commutes with \(f_{i}\).

Figure 5. Compression bodies inside a genus 2 handlebody.

For the proof of Theorem 1.4, it is necessary to extend the theorem of Biringer-Johnson-Minsky [2] referenced above to the case of pseudo-Anosovs on essential subsurfaces of \(S\). More precisely, we show that (2) above holds exactly when the attracting lamination of \(f_{i}\) is an intrinsic limit of meridians. This is done in Theorem 8.1; the proof is basically the same as theirs, although we reorganize it into separate topological and geometric arguments in a way that makes it clearer than the original version.

In contrast to the pseudo-Anosov case, one cannot determine when a pure homeomorphism \(f:S\longrightarrow S\) has a power that extends by looking at its attracting lamination. Here, the 'attracting lamination' of \(f\) is the union of all its twisting curves and all the attracting laminations of its pseudo-Anosov components. Indeed, set \(H=Y\times[-1,1]\), where \(Y\) is a surface with boundary, and let \(f:Y\longrightarrow Y\) be a pseudo-Anosov map such that \(f=id\) on \(\partial Y\). Then \[S=Y\times\{-1\}\cup\partial Y\times[-1,1]\cup Y\times\{+1\}.\] Let \(F:S\longrightarrow S\) be \(f\times id\) on \(Y\times\{\pm 1\}\) and the identity map on the rest of \(S\), and let \(G:S\longrightarrow S\) be \(f\times id\) on \(Y\times\{1\}\), \(f^{2}\times id\) on \(Y\times\{-1\}\), and the identity map on the rest of \(S\). Then \(F,G\) have the same attracting lamination, but \(F\) extends to a homeomorphism of \(H\), while \(G\) does not. In fact, it follows from Theorem 1.4 that no power of \(G\) extends to a nontrivial subcompression body of \(H\).

### Other results of interest

There are two other theorems in this paper that we should mention in the introduction. In §3 we study the _disk set_ \(\mathcal{D}(S,M)\) of all isotopy classes of meridians in an essential subsurface \(S\subset\partial M\) with \(\partial S\) incompressible, where \(M\) is a compact, irreducible \(3\)-manifold with boundary. We show in Proposition 3.1 that either \(\mathcal{D}(S,M)\) is _small_, meaning that it is either empty, has a single element, or has a single non-separating element and infinitely many separating elements that one can explicitly describe, or \(\mathcal{D}(S,M)\) is _large_, meaning that it has infinite diameter in the curve complex \(\mathcal{C}(S)\). This result will probably not surprise any experts, but we have never seen it in the literature.
In §4 we show how essential interval bundles in a compact \(3\)-manifold with boundary \(M\) can be seen in the limit sets in \(\partial\mathbb{H}^{3}\) associated to hyperbolic metrics on \(int(M)\). This picture was originally known to Thurston [52], and was studied previously under more restrictive assumptions by Walsh [53] and Lecuire [33]. We need a more general theorem in the proof of Theorem 6.1: in particular, we need a version that allows accidental parabolics. This is Theorem 4.1. Our proof is also more direct and more elementary than those of [53, 33]. See §4 for more context and details.

### Outline of the paper

Section 2 contains all the necessary background for the rest of the paper. We discuss the curve complex, the disc set, compression bodies, interval bundles, the Jaco-Shalen and Johannson characteristic submanifold theory, compression arcs, and geodesic laminations. §3 and §4 are described in the previous subsection. §5 contains a discussion of homoclinic geodesics, intrinsic limits of meridians, and some of their basic properties. The main point of §6 is Theorem 6.1, which is the more precise and general version of Theorem 1.2 above. §7 is devoted to Theorem 7.1, which characterizes Hausdorff limits of meridians and combines Theorems 1.1 and 1.3 above. §8 contains our extension of Biringer-Johnson-Minsky [2] to partial pseudo-Anosovs, and §9 contains the proof of Theorem 9.2, which generalizes Theorem 1.4 above.

### Acknowledgements

The authors would like to thank Jeff Brock, Juan Souto and Sebastian Hensel for useful conversations. The first author was partially supported by NSF grant DMS-1308678.

## 2. Preliminaries

### Subsurfaces with geodesic boundary

Suppose \(S\) is a finite type hyperbolic surface with geodesic boundary. A _connected subsurface with geodesic boundary_ in \(S\) is by definition either

1. a simple closed geodesic \(X\) on \(S\), which is the degenerate case, or
2. an immersed surface \(X\longrightarrow S\) such that the restriction to \(int(X)\) and to each component of \(\partial X\) is an embedding, and where each component of \(\partial X\) maps to a simple closed geodesic on \(S\).

In (2), the point is that our surface is basically an embedding, except that we allow two boundary components of \(X\) to map to the same geodesic in \(S\). We will usually suppress the immersion and write \(X\subset S\), abusing notation. We consider \(X,Y\subset S\) to be _equal_ if they are either the same simple closed geodesic, or if they are both immersions as in (2) and the interiors of their domains have the same images. We say \(X,Y\) are _essentially disjoint_ if either:

* \(X,Y\) are disjoint simple closed geodesics,
* \(X\) is a simple closed geodesic, \(Y\) is not, and \(X\) is disjoint from \(int(Y)\), or vice versa with \(X,Y\) exchanged, or
* \(X,Y\) have nonempty disjoint interiors.

More generally, we define a (possibly disconnected) _subsurface with geodesic boundary_ in \(S\) to be a finite union of essentially disjoint connected subsurfaces with geodesic boundary. Any connected essential subsurface \(T\subset S\) that is not an annulus homotopic into a cusp of \(S\) determines a unique connected subsurface with geodesic boundary \(X\) such that the images of \(\pi_{1}T\) and \(\pi_{1}X\) in \(\pi_{1}S\) are conjugate. Here, we say that \(X\) is obtained by _tightening_ \(T\). More generally, we can tighten a disconnected \(T\) to a disconnected \(X\) by tightening all its components. Tightening is performed as follows.
If \(T\) is an annulus, then we let \(X\) be the unique simple closed geodesic homotopic to the core curve of \(T\). Otherwise, we obtain \(X\) by homotoping \(T\) so that every component of \(\partial T\) is either geodesic or bounds a cusp in \(S\setminus T\), and then adding in any components of \(S\setminus T\) that are cusp neighborhoods. Alternatively, let \(\tilde{T}\) be a component of the pre-image of \(T\) in the universal cover \(\tilde{S}\), which is isometric to a convex subset of \(\mathbb{H}^{2}\), let \(\Lambda_{T}\subset\partial_{\infty}\mathbb{H}^{2}\) be the set of limit points of \(\tilde{T}\), and let \(\tilde{X}\) be the convex hull of \(\Lambda_{T}\) within \(\tilde{S}\). Then \(\tilde{X}\) projects to an \(X\) as desired. Conversely, suppose \(X\) is a subsurface with geodesic boundary in \(S\). Then there is a compact essential subsurface \(T\hookrightarrow S\), unique up to isotopy and called a _resolution_ of \(X\), that tightens to \(X\). When \(X\) is a simple closed geodesic, we take \(T\) to be a regular neighborhood of \(X\). Otherwise, construct \(T\) by deleting half-open collar neighborhoods of all boundary components of \(X\), and deleting open neighborhoods of all cusps of \(X\). Note that subsurfaces with geodesic boundary \(X,Y\) are essentially disjoint if and only if they admit disjoint resolutions.

### The curve complex

Let \(S\) be a compact orientable surface, possibly with boundary, and assume that \(S\) is not an annulus.

**Definition 2.1**.: The _curve complex_ of \(S\), written \(\mathcal{C}(S)\), is the graph whose vertices are homotopy classes of nonperipheral, essential simple closed curves on \(S\) and whose edges connect homotopy classes that intersect minimally.

When \(S\) is a \(4\)-holed sphere, minimally intersecting simple closed curves intersect twice, while on a punctured torus they intersect once. Otherwise, edges in \(\mathcal{C}(S)\) connect homotopy classes that admit disjoint representatives. Masur-Minsky [41] have shown that the curve complex is Gromov hyperbolic, when considered with the path metric in which all edges have unit length. Klarreich [30] (see also [19]) showed that the Gromov boundary \(\partial_{\infty}\mathcal{C}(S)\) is homeomorphic to the space of _ending laminations_ of \(S\): i.e. filling, measurable geodesic laminations on \(S\) with the topology of Hausdorff super-convergence.

### The disc set

Suppose that \(S\subset\partial M\) is an essential subsurface of the boundary of a compact, irreducible \(3\)-manifold \(M\), and that \(\partial S\) is incompressible in \(M\). An essential simple closed curve \(\gamma\) on \(\partial M\) is called a _meridian_ if it bounds an embedded disc in \(M\). By the loop theorem, \(\gamma\) is a meridian if and only if it is homotopically trivial in \(M\).

**Definition 2.2**.: The _disc set_ of \(S\) in \(M\), written \(\mathcal{D}(S,M)\), is the (full) subgraph of \(\mathcal{C}(S)\) whose vertices are the meridians of \(S\) in \(M\).

When convenient, we will sometimes regard \(\mathcal{D}(S,M)\) as a subset of the space of projective measured laminations \(\mathcal{PML}(S)\), instead of as a graph. The following is an extension of a theorem of Masur-Minsky [42, Theorem 1.1], which they prove in the case that \(S\) is an entire component of \(\partial M\).
**Theorem 2.3** (Masur-Minsky).: _The subset \(\mathcal{D}(S,M)\) of \(\mathcal{C}(S)\) is quasiconvex._

To prove Theorem 2.3 as stated above, one follows the outline of [42]: given \(a,b\in\mathcal{D}(S,M)\), the goal is to construct a _well-nested curve replacement sequence_ from \(a=a_{1},\dots,a_{n}=b\) consisting of meridians, which must be a quasi-geodesic by their Theorem 1.2. The sequence \((a_{i})\) is created by successive surgeries along innermost discs, and the only difference here is that one needs to ensure that none of the surgeries create peripheral curves. However, the surgeries create meridians, and since \(S\) has incompressible boundary, no meridian in \(S\) is peripheral.

### Compression bodies

We refer the reader to §2 of [3] for a more detailed discussion of compression bodies, and state here only a few definitions that will be used later on. A _compression body_ is a compact, orientable, irreducible 3-manifold \(C\) with a \(\pi_{1}\)-surjective boundary component \(\partial_{+}C\), called the _exterior boundary_ of \(C\). The complement \(\partial C\smallsetminus\partial_{+}C\) is called the _interior boundary_, and is written \(\partial_{-}C\). Note that the interior boundary is incompressible. For if an essential simple closed curve on \(\partial_{-}C\) bounds a disk \(D\subset C\), then \(C\smallsetminus D\) has either one or two components, and in both cases, Van Kampen's Theorem implies that \(\partial_{+}C\), which is disjoint from \(D\), cannot \(\pi_{1}\)-surject.

Suppose \(M\) is a compact irreducible 3-manifold with boundary, let \(\Sigma\) be a component of \(\partial M\) and let \(S\subset\Sigma\) be an essential subsurface. A _subcompression body of \((M,S)\)_ is a compression body \(C\subset M\) with exterior boundary \(\Sigma\) that can be constructed as follows. Choose a set \(\Gamma\) of disjoint, pairwise nonhomotopic simple closed curves on \(S\) that are all meridians in \(M\). Let \(C^{\prime}\subset M\) be the union of \(\Sigma\) with a set of disjoint disks in \(M\) whose boundaries are the components of \(\Gamma\), and define \(C\subset M\) to be the union of a regular neighborhood of \(C^{\prime}\subset M\) together with any components of the complement of this neighborhood that are topological 3-balls. Here, we say that \(C\subset M\) is obtained _by compressing_ \(\Gamma\). Note that the irreducibility of \(M\) implies that no component of \(\partial C\) is a 2-sphere, and hence that \(C\) is irreducible, and therefore a compression body. See [3, §2] for details about constructing compression bodies via compressions.

When the set \(\Gamma\) above is empty, we obtain the _trivial subcompression body_ of \((M,S)\), which is just a regular neighborhood of \(\Sigma\subset\partial M\). At the other extreme, we can compress a maximal \(\Gamma\), which gives the 'characteristic compression body' of \((M,S)\), defined via the following fact.

**Fact 2.4** (The characteristic compression body).: _Suppose \(M\) is an irreducible compact \(3\)-manifold, that \(\Sigma\) is a component of \(\partial M\) and that \(S\subset\Sigma\) is an essential subsurface such that the multicurve \(\partial S\) is incompressible in \(M\).
Then there is a unique (up to isotopy) subcompression body_ \[C:=C(S,M)\subset M\] _of \((M,S)\), called the characteristic compression body of \((M,S)\), such that a curve \(\gamma\) in \(S\) is a meridian in \(C\) if and only if it is a meridian in \(M\)._

_Moreover, \(C\) can be constructed by compressing any maximal set of disjoint, pairwise nonhomotopic meridians in \(S\)._

This is a version of a construction of Bonahon [5], except that he only defines the characteristic compression body when \(S\) is an entire boundary component of \(M\). In that case, the interior boundary components of \(C\) are incompressible in \(M\), so Bonahon's construction can be used to reduce problems about \(3\)-manifolds to problems about compression bodies and about \(3\)-manifolds with incompressible boundary. The reader can also compare the fact to Lemma 2.1 in [3], which is the special case of the fact where \(M\) is a compression body and \(S\) is its exterior boundary, so that \(C=M\) is obtained by compressing any maximal set of disjoint, nonhomotopic meridians in \(M\).

Proof.: Let \(\Gamma\) be a maximal set of disjoint, pairwise nonhomotopic \(M\)-meridians on \(S\), and define \(C\) by compressing \(\Gamma\). We have to check that any curve in \(S\) that is an \(M\)-meridian is also a \(C\)-meridian. Suppose not, and take an \(M\)-meridian \(m\subset S\) that is not a \(C\)-meridian, and that intersects \(\Gamma\) minimally. Since \(\Gamma\) is maximal, \(m\) intersects some component \(\gamma\subset\Gamma\). Then there is an arc \(\alpha\subset\gamma\) with endpoints on \(m\) and interior disjoint from \(m\), that is homotopic rel endpoints in \(M\) to the arcs \(\beta^{\prime},\beta^{\prime\prime}\subset m\) with the same endpoints. (Here \(\alpha\) is an 'outermost' arc of intersection on a disk bounded by \(\gamma\), where the intersection is with the disk bounded by \(m\), see e.g. Lemma 2.8 in [3].) Since \(m\) is in minimal position with respect to \(\Gamma\), the curves \(m^{\prime}=\alpha\cup\beta^{\prime}\) and \(m^{\prime\prime}=\alpha\cup\beta^{\prime\prime}\) are both essential, and are \(M\)-meridians in \(S\) that intersect \(\Gamma\) fewer times than \(m\). So by minimality of \(m\), both \(m^{\prime},m^{\prime\prime}\) are \(C\)-meridians, implying that \(\alpha\) is homotopic rel endpoints to \(\beta^{\prime}\) and \(\beta^{\prime\prime}\) in \(C\). This implies \(m\) is a \(C\)-meridian, contrary to assumption.

For uniqueness, suppose we have two subcompression bodies \(C_{1},C_{2}\) of \((M,S)\) in which all curves in \(S\) that are meridians in \(M\) are also meridians in \(C_{1},C_{2}\). Since \(C_{1},C_{2}\) are subcompression bodies of \((M,S)\), the kernels of the maps \[\pi_{1}\Sigma\longrightarrow\pi_{1}C_{i}\] induced by inclusion are both normally generated by the set of all elements of \(\pi_{1}\Sigma\) that represent simple closed curves in \(S\) that are meridians in \(M\). Hence, the disk sets \(\mathcal{D}(\Sigma,C_{i})\) are the same for \(i=1,2\). It follows that \(C_{1},C_{2}\) are isotopic in \(M\), say by Corollary 2.2 of [3].

### Interval bundles

In this paper, an _interval bundle_ always means a fiber bundle \(B\longrightarrow Y\), where \(Y\) is a compact surface with boundary, and where all fibers are closed intervals \(I\). Regarding the fibers as 'vertical', we call the associated \(\partial I\)-bundle over \(Y\) the _horizontal boundary_ of \(B\), written \(\partial_{H}B\).
An interval bundle that is isomorphic to \(Y\times[-1,1]\) is called _trivial_, and we often call nontrivial interval bundles _twisted_. All \(3\)-manifolds in this paper are assumed to be orientable, but even when the total space \(B\) of an interval bundle is orientable, the base surface \(Y\) may not be. Indeed, let \(Y\) be a compact non-orientable surface and let \(\pi:\hat{Y}\longrightarrow Y\) be its orientation cover. Then the mapping cylinder

\[B:=\hat{Y}\times[0,1]/\sim,\ \ (x,1)\sim(x^{\prime},1)\iff\pi(x)=\pi(x^{\prime})\]

is orientable, and is a twisted interval bundle over \(Y\), where the fiber over \(y\in Y\) is obtained by gluing together the two intervals \(\{x\}\times[0,1]\) and \(\{x^{\prime}\}\times[0,1]\) along \((x,1)\) and \((x^{\prime},1)\), where \(\pi^{-1}(y)=\{x,x^{\prime}\}\). The horizontal boundary \(\partial_{H}B\) here is \(\hat{Y}\times\{0\}\), which is homeomorphic to the orientable surface \(\hat{Y}\). Note that \(B\) is double covered by the trivial interval bundle \(\hat{Y}\times[-1,1]\).

**Fact 2.5**.: _Suppose that \(B\longrightarrow Y\) is an interval bundle and \(B\) is orientable. If \(Y\) is orientable, then \(B\) is a trivial interval bundle. If \(Y\) is nonorientable, then \(B\) is isomorphic to the mapping cylinder of the orientation cover of \(Y\)._

Proof.: If \(Y\) and \(B\) are orientable, then so is the associated line bundle, so the bundle is trivial. If \(Y\) is nonorientable, the horizontal boundary \(\partial_{H}B\subset\partial B\) is an orientable surface that double covers \(Y\), and from there it's easy to construct the desired isomorphism to the mapping cylinder of the projection \(\partial_{H}B\longrightarrow Y\).

An interval bundle \(B\longrightarrow Y\) comes with a _canonical involution_ \(\sigma\), which is well defined up to isotopy, and which is defined as follows. If \(B\cong Y\times[-1,1]\) is a trivial interval bundle, we define

\[\sigma:Y\times[-1,1]\longrightarrow Y\times[-1,1],\ \ \sigma(y,t)=(y,-t).\]

And if \(B\) is the twisted interval bundle \(B\cong\hat{Y}\times[0,1]/\sim\) above, we define

\[\sigma:\hat{Y}\times[0,1]/\sim\longrightarrow\hat{Y}\times[0,1]/\sim,\ \ \sigma(\hat{y},t)=(\iota(\hat{y}),t)\]

where \(\iota\) is the nontrivial deck transformation of the orientation cover. Note that \(\sigma\) is always an orientation reversing involution of \(B\), so in particular, when we give the surface \(\partial_{H}B\) its boundary orientation, the restriction \(\sigma|_{\partial_{H}B}\) is also orientation reversing. We also recall the following well-known fact.

**Fact 2.6**.: _If \(S\) is a compact, orientable surface with nonempty boundary, the trivial interval bundle \(S\times[-1,1]\) is homeomorphic to a handlebody._

It's a nice topology exercise to visualize the homeomorphism. Regard \(S\) as the union of a polygon and a collection of bands (long, skinny rectangles), each of which is glued along its short sides to two sides of the polygon. Thickening, the picture becomes a ball with \(1\)-handles attached. Note that if \(S=S_{g,b}\) has genus \(g\) and \(b\) boundary components, then the handlebody \(S\times[-1,1]\) has genus \(2g+b-1\), since that is the rank of the free group \(\pi_{1}(S\times[-1,1])\cong\pi_{1}S\), a free group of rank \(1-\chi(S_{g,b})=2g+b-1\).

Finally, suppose \(\pi:B\longrightarrow Y\) is an interval bundle and \(f:\partial_{H}B\longrightarrow\partial_{H}B\) is a homeomorphism. We say that \(f\) _extends to \(B\)_ if there is a homeomorphism \(F:B\longrightarrow B\) such that \(F|_{\partial_{H}B}=f\). We leave the following to the reader.
**Fact 2.7**.: _The following are equivalent:_

1. \(f\) _extends to_ \(B\)_,_
2. \(f\circ\sigma\) _is isotopic to_ \(f\) _on_ \(\partial_{H}B\)_,_
3. _after isotoping_ \(f\)_, there is a homeomorphism_ \(\bar{f}:Y\longrightarrow Y\) _such that_ \(\pi\circ f=\bar{f}\circ\pi\)_,_
4. _there is a homeomorphism from_ \(B\) _to either_ \[Y\times[-1,1]\ \ \text{or}\ \ \hat{Y}\times[0,1]/\sim,\] _taking horizontal boundary to horizontal boundary, such that_ \(f=F|_{\partial_{H}B}\)_, and where either_ \[F:Y\times[-1,1]\longrightarrow Y\times[-1,1],\ \ F(y,t)=(\bar{f}(y),t),\] _for some homeomorphism_ \(\bar{f}:Y\longrightarrow Y\)_, or_ \[F:\hat{Y}\times[0,1]/\sim\longrightarrow\hat{Y}\times[0,1]/\sim,\ \ F(y,t)=(\bar{f}(y),t),\] _for some homeomorphism_ \(\bar{f}:\hat{Y}\longrightarrow\hat{Y}\) _commuting with the deck group of_ \(\hat{Y}\longrightarrow Y\)_, and hence covering a homeomorphism of_ \(Y\)_._

### The characteristic submanifold of a pair

Suppose that \(M\) is a compact, orientable \(3\)-manifold and that \(S\subset\partial M\) is an incompressible subsurface. In the late 1970s, Jaco-Shalen [25] and Johannson [27] described a 'characteristic' submanifold of \((M,S)\) that contains the images of all nondegenerate maps from interval bundles and Seifert fibered spaces.

**Theorem 2.8** (see pg 138 of Jaco-Shalen [24]).: _There is a perfectly embedded Seifert pair \((X,\Sigma)\subset(M,S)\), unique up to isotopy and called the characteristic submanifold of \((M,S)\), such that any nondegenerate map \((B,F)\longrightarrow(M,S)\) from a Seifert pair \((B,F)\) is homotopic as a map of pairs into \((X,\Sigma)\)._

A _Seifert pair_ is a \(3\)-manifold pair that is a finite disjoint union of interval bundle pairs \((B,\partial_{H}B)\) and \(S^{1}\)-bundle pairs. Here, an \(S^{1}\)_-bundle pair_ \((B,F)\) is a \(3\)-manifold \(B\) fibered by circles, where \(F\subset\partial B\) is a compact subsurface saturated by fibers. A Seifert pair \((X,\Sigma)\subset(M,S)\) is _well embedded_ if \(X\cap\partial M=\Sigma\subset S\) and the frontier of \(X\) in \(M\) is a \(\pi_{1}\)-injective surface, and is _perfectly embedded_ if it is well embedded, no component of the frontier of \(X\) in \(M\) is homotopic into \(S\), and no component of \(X\) is homotopic into another component.

When \((B,F)\) is a connected Seifert pair, a map \(f:(B,F)\longrightarrow(M,S)\) is _essential_ if it is not homotopic as a map of pairs into \(S\). Notice that this only depends on the image of \(f\) and not on \(f\) itself. One says \(f\) is _nondegenerate_ if it is essential, its \(\pi_{1}\)-image is nontrivial, its \(\pi_{1}\)-image is non-cyclic when \(F=\emptyset\), and no fiber of \(B\) is nullhomotopic in \((M,S)\). For disconnected \((B,F)\), one says \(f\) is nondegenerate if its restriction to every component is nondegenerate. The following is very well known.

**Fact 2.9**.: _If \(int(M)\) is hyperbolizable and \((B,F)\) is an \(S^{1}\)-bundle pair that is perfectly embedded in \((M,S)\), then either_

1. \((B,F)\) _is a 'fibered solid torus', i.e._ \(B\) _is an_ \(S^{1}\)_-bundle over a disk, and_ \(F\subset\partial B\cong T^{2}\) _is a collection of fibered parallel annuli, or_
2. \((B,F)\) _is a 'thickened torus', i.e._ \(B\) _is an_ \(S^{1}\)_-bundle over an annulus, so is homeomorphic to_ \(T^{2}\times[0,1]\)_, and each component of_ \(F\) _is either a torus or a fibered annulus._

So in particular, the components of the characteristic submanifold of \((M,S)\) are either interval bundles, solid tori, or thickened tori.

Proof.: Suppose that \((B,F)\) is a perfectly embedded \(S^{1}\)-bundle pair in \(M\). Then \(B\longrightarrow Y\) is an \(S^{1}\)-bundle, where \(Y\) is a compact \(2\)-orbifold, and the cyclic subgroup \(Z\subset\pi_{1}B\) corresponding to a regular fiber is normal in \(\pi_{1}B\). In a hyperbolic \(3\)-manifold, any subgroup of \(\pi_{1}\) that has a cyclic normal subgroup is elementary, say by a fixed point analysis on \(\partial_{\infty}\mathbb{H}^{3}\). So, \(\pi_{1}B\) is either cyclic or isomorphic to \(\mathbb{Z}^{2}\). It follows that either \(Y\) is a disc, in which case \(B\) is a fibered solid torus, or \(Y\) is an annulus, in which case \(B\) is a thickened torus.

In this paper we will mostly be interested in interval bundles. For brevity, we'll use the following terminology, which differs slightly from the terminology above used by Jaco-Shalen.

**Definition 2.10**.: An _essential interval bundle_ in \((M,S)\) is an essential, well-embedded interval bundle pair \((B,\partial_{H}B)\hookrightarrow(M,S)\).

Note that the horizontal boundary of any essential interval bundle is an incompressible subsurface of \(S\). The definition above differs from a well embedded interval bundle pair in that we are excluding boundary-parallel interval bundles over annuli, and differs from a perfectly embedded interval bundle pair in that we are allowing components of the frontier of an interval bundle over a surface that is not an annulus to be boundary parallel. For instance, if \(Y\) is a surface with boundary and \(Y^{\prime}\subset Y\) is obtained by deleting collar neighborhoods of the boundary components, and we set \(M=Y\times[-1,1]\), which is a handlebody, then \((Y^{\prime}\times[-1,1],Y^{\prime}\times\{-1,1\})\) is an essential interval bundle in \((M,\partial M)\), but is not perfectly embedded. However, note that any essential interval bundle \((B,\partial_{H}B)\hookrightarrow(M,S)\) is perfectly embedded in \((M,\partial_{H}B)\).

### Compression arcs

Suppose \((B,\partial_{H}B)\subset(M,S)\) is an essential interval bundle. An arc \(\alpha\subset S\) with endpoints on \(\partial(\partial_{H}B)\) and interior disjoint from \(\partial_{H}B\) is called a _compression arc_ if it is homotopic in \(M\) to a fiber of \(B\), while keeping its endpoints on \(\partial(\partial_{H}B)\). See Figure 6. To link this definition with more classical ones, it is easy to see that there is a compression arc for \(B\) if and only if \(\overline{\mathrm{Fr}(B)}\) is boundary compressible, see [24, pp. 36-37] for a definition.

Write our interval bundle as \(\pi:B\longrightarrow Y\). Let \(\alpha\) be a compression arc for \(B\). After isotoping the bundle map \(\pi\), we can assume that \(\alpha\) is homotopic rel endpoints to a fiber \(\pi^{-1}(y)\), where \(y\in Y\). Suppose \(c\) is an oriented, two-sided, essential, simple closed loop in \(Y\) based at \(y\), and suppose that either \(c\) is nonperipheral in \(Y\), or that \(Y\) is an annulus or Möbius band. Write \(\pi^{-1}(c)=c_{-}\cup c_{+}\), where \(c_{\pm}\) are disjoint simple closed oriented loops in \(\partial_{H}B\) based at \(y_{\pm}\), and where the orientations of \(c_{\pm}\) project to that of \(c\).
**Claim 2.11**.: _The concatenation \(m(c):=c_{-}\cdot\alpha\cdot c_{+}^{-1}\cdot\alpha^{-1}\) is homotopic to a meridian on \(S\)._

So, a compression arc \(\alpha\) allows one to make compressible curves on \(S\) from essential curves on \(Y\). See Figure 6.

Proof.: Since \(\alpha\) is homotopic rel endpoints to the fiber \(\pi^{-1}(y)\), the curve \(m(c)\) is homotopic in \(M\) to a curve in \(B\) that projects under \(\pi\) to \(c\cdot c^{-1}\), and hence \(m(c)\) is nullhomotopic in \(M\). Checking orientations, one can see that \(m(c)\) is homotopic to a simple closed curve on \(S\). So, we only have to prove that \(m(c)\) is homotopically essential on \(S\).

Suppose that \(c_{-},c_{+}\) are freely homotopic on \(\partial_{H}B\) as oriented curves. (This happens exactly when the curve \(c\subset Y\) bounds a Möbius band in \(Y\).) Then \(m(c)\) is homotopic to the commutator of two essential simple closed curves on \(S\) that intersect once, and hence is essential since \(S\) is not a torus.

We can now assume that \(c_{\pm}\) are not freely homotopic in \(\partial_{H}B\) as oriented curves. If \(m(c)\) is inessential, then \(c_{\pm}\) _are_ freely homotopic on \(S\), so \(c_{\pm}\) are homotopic in \(\partial_{H}B\) to boundary components \(c^{\prime}_{\pm}\subset\partial_{H}B\) that bound an annulus in \(S\setminus\partial_{H}B\). In this case \(c_{\pm}\) are peripheral, so we may assume that \(Y\) is either an annulus or a Möbius band. If \(Y\) is a Möbius band, we are in the situation of the previous paragraph and are done. So, \(Y\) is an annulus, and \(\partial_{H}B\) is a pair of disjoint annuli on \(S\), where \(c^{\prime}_{\pm}\) lie in different components of \(\partial_{H}B\). Since \(c^{\prime}_{\pm}\) bound an annulus in \(S\setminus\partial_{H}B\), the interval bundle \(B\) is inessential, contrary to our assumption.

In fact, more is true.

**Fact 2.12** (Arcs that produce meridians).: _Suppose \((B,\partial_{H}B)\subset(M,S)\) is an essential interval bundle and let \(\alpha\subset S\) be an arc with endpoints on \(\partial_{H}B\) and interior disjoint from \(\partial_{H}B\). Let \(X\subset S\) be a regular neighborhood of \(\alpha\cup\partial_{H}B\) within \(S\). Then there is a meridian in \(X\) if and only if we have either:_

1. _the endpoints of_ \(\alpha\) _lie on the same component_ \(c\) _of_ \(\partial(\partial_{H}B)\)_, and there is an arc_ \(\beta\subset c\) _such that_ \(\alpha\cup\beta\) _is a meridian, or_
2. \(\alpha\) _is a compression arc._

Note that in the second case the endpoints of \(\alpha\) lie on distinct components of \(\partial(\partial_{H}B)\), so in particular the two cases are mutually exclusive. The reason we say \(X\) 'contains a meridian' instead of 'is compressible' is that \(X\) may not be an essential subsurface of \(S\), and we want to emphasize that the essential curve in \(X\) that is compressible in \(M\) is actually essential in \(S\). For example, let \(Y\) be a compact surface with boundary, \(Y^{\prime}\subset Y\) be obtained by deleting a collar neighborhood of \(\partial Y\), set \(B=Y^{\prime}\times[-1,1]\) and \(M=Y\times[-1,1]\), and let \(\alpha\) be a spanning arc of \(B\) in \(\partial M\).

Proof.: The 'if' direction is immediate: in case (1) we are essentially given a meridian in \(X\), and in case (2) we can appeal to Claim 2.11.

Figure 6. \(Z\subset Y\) is a compact surface, the interval bundle \(B=Z\times[-1,1]\) embeds in \(M=Y\times[-1,1]\), and \(\alpha\) above is a compression arc. Also pictured in light gray is a meridian as described in Claim 2.11.
We now work on the 'only if' direction. Write our regular neighborhood of \(\partial_{H}B\cup\alpha\) as \(X=\partial_{H}B\cup R\), where \(R\) is a rectangle with two opposite 'short' sides on the boundary of \(\partial_{H}B\). Let \(D\subset M\) be an essential disc whose boundary is contained in \(X\), and where \(D\) intersects the frontier \(\operatorname{Fr}(B)\subset M\) in a minimal number of components. Let \(a\subset D\cap\operatorname{Fr}(B)\) be an arc that is 'outermost' in \(D\), i.e. there is some arc \(a^{\prime}\subset\partial D\) with the same endpoints as \(a\) such that \(a,a^{\prime}\) bound an open disk in \(D\) that does not intersect \(\operatorname{Fr}(B)\).

We claim that \(a^{\prime}\subset R\). If not, then \(a^{\prime}\subset\partial_{H}B\), and bounds a disk in \(B\) with the arc \(a\subset\partial B\). Writing the interval bundle as \(\pi:B\longrightarrow Y\), the projection \(\pi(a\cup a^{\prime})\) in \(Y\) is then also nullhomotopic, so \(\pi(a^{\prime})\) is homotopic rel endpoints into \(\pi(a)\subset\partial Y\); that is, \(\pi(a^{\prime})\) is an inessential arc in \(Y\). Lifting this homotopy through the covering map \(\partial_{H}B\longrightarrow Y\), we get that \(a^{\prime}\) is inessential in \(\partial_{H}B\), i.e. is homotopic in \(\partial_{H}B\) rel endpoints into \(\partial(\partial_{H}B)\). We can then decrease the number of components of \(D\cap\operatorname{Fr}(B)\), contradicting that this number is minimal.

So, \(a^{\prime}\subset R\). Again by minimality of the intersection, the endpoints of \(a^{\prime}\) lie on opposite short sides of \(R\), so \(\alpha\) is homotopic to \(a^{\prime}\) through arcs in \(R\) with endpoints on \(\operatorname{Fr}(B)\). Since \(a^{\prime}\) is homotopic rel endpoints to \(a\subset\operatorname{Fr}(B)\), it follows that \(\alpha\) is homotopic rel endpoints into \(\operatorname{Fr}(B)\). If the two endpoints of \(\alpha\) lie on the same component of \(\partial(\partial_{H}B)\), we are in case (1), and otherwise we are in case (2).

### Laminations

We assume the reader is familiar with geodesic and measured laminations on finite type hyperbolic surfaces. See e.g. [12, 28]. Suppose \(\lambda\) is a connected geodesic lamination on a finite type hyperbolic surface \(S\) with geodesic boundary. We say that \(\lambda\) _fills_ an essential subsurface \(T\subset S\) if \(\lambda\subset T\) and \(\lambda\) intersects every essential, non-peripheral simple closed curve in \(T\).

**Fact 2.13**.: _For every connected \(\lambda\), there is a unique subsurface with geodesic boundary (as in §2.1) that is filled by \(\lambda\), which we denote by \(S(\lambda)\). It is the minimal subsurface with geodesic boundary in \(S\) that contains \(\lambda\)._

Here, \(S(\lambda)\) can be constructed by taking a component \(\tilde{\lambda}\subset\tilde{S}\subset\mathbb{H}^{2}\) of the preimage of \(\lambda\), letting \(C\subset\mathbb{H}^{2}\) be the convex hull of the set of endpoints of leaves of \(\tilde{\lambda}\) in \(\partial\mathbb{H}^{2}\), and projecting \(C\) into \(S\).

Suppose that \(M\) is a compact, orientable irreducible \(3\)-manifold and let \(S\subset\partial M\) be an essential subsurface.
The _limit set_ of \((S,M)\) is the closure

\[\Lambda(S,M)=\overline{\{\text{meridians }\gamma\subset S\}}\subset\mathcal{PML}(S),\]

where \(\mathcal{PML}(S)\) is the space of projective measured laminations on \(S\). The limit set was first studied by Masur [39] in the case that \(M\) is a handlebody, with \(S\) its entire boundary. In this case, Kerckhoff [29] later proved that the limit set has measure zero in \(\mathcal{PML}(S)\), although a mistake in his argument was later found and fixed by Gadre [18].

In some ways, \(\Lambda(S,M)\) acts as a dynamical limit set. For instance, let \(\operatorname{Map}(S)\) be the mapping class group of \(S\), and let \(\operatorname{Map}(S,M)\subset\operatorname{Map}(S)\) be the subgroup consisting of mapping classes represented by restrictions of homeomorphisms of \(M\). Then we have:

**Fact 2.14**.:

1. _If_ \(\Lambda(S,M)\) _is nonempty, it is the smallest nonempty closed subset of_ \(\mathcal{PML}(S)\) _that is invariant under_ \(\operatorname{Map}(S,M)\)_._
2. _If_ \(\operatorname{Map}(S,M)\) _contains a pseudo-Anosov map on_ \(S\)_, then_ \(\Lambda(S,M)\) _is the closure of the set of the attracting and repelling laminations of pseudo-Anosov elements of_ \(\operatorname{Map}(S,M)\)_._

Note that \(\operatorname{Map}(S,M)\) contains a pseudo-Anosov map on \(S\) if and only if the disk set \(\mathcal{D}(S,M)\) has infinite diameter in the curve complex \(\mathcal{C}(S)\), where the latter condition is discussed in Proposition 3.1 below. See also [2, 34].

Proof.: For the first part, just note that Dehn twists \(T_{m}\) around meridians \(m\subset S\) are in \(\operatorname{Map}(S,M)\), so if \(A\subset\mathcal{PML}(S)\) is nonempty and invariant, \(\lambda\in A\) and \(m\) is a meridian, then \(m=\lim_{i}T_{m}^{i}(\lambda)\) is also in \(A\), implying \(\Lambda(S,M)\subset A\). For the second part, take a pseudo-Anosov \(f\in\operatorname{Map}(S,M)\) with attracting lamination \(\lambda_{+}\), say. If \(m\) is a meridian in \(S\), then \(T_{m}^{i}\circ f\circ T_{m}^{-i}\) are pseudo-Anosov maps on \(S\) and their attracting laminations converge to \(m\), and then the argument finishes as before.

### Laminations on interval bundles

Suppose that \(Y\) is a compact hyperbolizable surface with boundary, and that \(B\longrightarrow Y\) is an interval bundle over \(Y\). Endow \(Y\) and the horizontal boundary \(\partial_{H}B\) with arbitrary hyperbolic metrics such that the boundary components are all geodesic. Suppose we have two geodesic laminations \(\lambda_{\pm}\) on \(\partial_{H}B\).

**Definition 2.15**.: We say that \(\lambda_{\pm}\) are _essentially homotopic through \(B\)_ if there is a lamination \(\lambda\) and a homotopy \(h_{t}:\lambda\longrightarrow B,\ t\in[-1,1]\) such that \(h_{\pm 1}\) is a homeomorphism onto \(\lambda_{\pm}\), and where \((h_{t})\) is not homotopic into \(\partial_{H}B\).

When \(B\) is a trivial interval bundle, \(\lambda_{\pm}\) are essentially homotopic through \(B\) if and only if we can write \(B\cong Y\times[-1,1]\) in such a way that \(\lambda_{\pm}=\lambda\times\{\pm 1\}\) for some geodesic lamination \(\lambda\) on \(Y\). This is an easy consequence of the fact that on a surface, homotopic laminations are isotopic. In general:

**Fact 2.16**.: _Suppose that \(\lambda_{\pm}\) are disjoint or equal geodesic laminations on \(\partial_{H}B\). Then the following are equivalent._

1. \(\lambda_{\pm}\) _are essentially homotopic through_ \(B\)_._
2. \(\lambda_{\pm}\) _is isotopic on_ \(\partial_{H}B\) _to_ \(\sigma(\lambda_{\mp})\)_, where_ \(\sigma\) _is the canonical involution of_ \(B\) _discussed in §2.5._

_Moreover, (1) and (2) imply_

3. _There is a geodesic lamination_ \(\bar{\lambda}\) _on_ \(Y\) _such that_ \(\lambda_{-}\cup\lambda_{+}\) _is isotopic on_ \(\partial_{H}B\) _to the preimage_ \((\pi|_{\partial_{H}B})^{-1}(\bar{\lambda})\)_._

Here, (3) does not always imply (1) and (2), since it could be that \(\bar{\lambda}\) has two components, \((\pi|_{\partial_{H}B})^{-1}(\bar{\lambda})\) has four, and these components are incorrectly partitioned into the two laminations \(\lambda_{\pm}\). However, that's the only problem, so for instance if \(\lambda_{\pm}\) are minimal then (1)-(3) are equivalent.

While we have phrased things more generally in this section, we can always assume in proofs that our hyperbolic metrics have been chosen so that the covering map \(\pi|_{\partial_{H}B}:\partial_{H}B\longrightarrow Y\) is locally isometric. Here, we're using the fact that given two hyperbolic metrics with geodesic boundary on a compact surface, a geodesic lamination with respect to one metric is isotopic to a unique geodesic lamination with respect to the other hyperbolic metric. In this case, we can remove the word 'isotopic' from (2) and (3).

Proof.: The fact is trivial when \(B\) is a trivial interval bundle. When \(B\) is nontrivial, lift the homotopy to the trivial interval bundle \(B^{\prime}\longrightarrow B\) that double covers \(B\), giving homotopic laminations \(\lambda^{\prime}_{\pm}\subset\partial_{H}B^{\prime}\). (1) \(\iff\) (2) follows since the canonical involution on \(B^{\prime}\) covers that of \(B\). For (2) \(\implies\) (3), note that since \(\lambda_{\pm}\) are disjoint or equal and differ by \(\sigma\), their projections \(\pi(\lambda_{\pm})\subset Y\) are the same, and are a geodesic lamination \(\bar{\lambda}\) on \(Y\).

## 3. Large and small disk sets and compression bodies

Suppose that \(S\subset\partial M\) is an essential subsurface of the boundary of a compact, irreducible 3-manifold \(M\), and that \(\partial S\) is incompressible in \(M\). The following is probably known to some experts, but we don't think it appears anywhere in the literature, so we give a complete proof.

**Proposition 3.1** (Diameters of disk sets).: _With \(M,S\) as above, either_

1. \(\mathcal{D}(S,M)\) _has infinite diameter in_ \(\mathcal{C}(S)\)_,_
2. \(S\) _has one nonseparating meridian_ \(\delta\)_, and every other meridian is a band sum of_ \(\delta\)_,_
3. \(S\) _has a single meridian, which is separating, or_
4. \(\mathcal{D}(S,M)=\emptyset\)_._

In case (1), we will say that \(\mathcal{D}(S,M)\) is _large_, and in cases (2)-(4), we will say that \(\mathcal{D}(S,M)\) is _small_. Similarly, if \(C(S,M)\) is the characteristic compression body defined in Fact 2.4, then \(C(S,M)\) is said to be large or small depending on whether \(\mathcal{D}(S,M)\) is large or small. See also the discussion of small compression bodies in §3 of [3].

Here, recall that a _band sum_ of a meridian \(\delta\) is the boundary of a regular neighborhood of \(\delta\cup\beta\), where \(\beta\) is a simple closed curve on \(S\) that intersects \(\delta\) once. Any such band sum must be a meridian: for instance, as an element of \(\pi_{1}M\) it is a commutator in which one entry, the class of \(\delta\), is trivial. Also, (3) includes the case when \(M\) is a solid torus and \(S=\partial M\), in which case there is only one (nonseparating) meridian.
When \(M\) is not a solid torus, though, every nonseparating meridian has infinitely many band sums. Before beginning the proof, we first establish the following:

**Claim 3.2**.: _Suppose \(S\) is not a torus, \(\gamma\subset S\) is a meridian on \(S\) and \(\delta\) is a meridian that lies in a component \(T\subset S\setminus\gamma\). If \(\gamma\) is not a band sum of \(\delta\), there is a pseudo-Anosov \(f:T\longrightarrow T\) that extends to a homeomorphism of \(M\)._

The condition that \(\gamma\) is not a band sum is necessary. For if \(M\) is a handlebody with \(S=\partial M\), and \(\gamma\) is a separating meridian that bounds a compressible punctured torus \(T\subset S\), then \(T\) has only a single meridian \(\delta\). This \(\delta\) is nonseparating and \(\gamma\) is a band sum of \(\delta\). Any map \(T\longrightarrow T\) that extends to a homeomorphism of \(M\) must then fix \(\delta\), so cannot be pseudo-Anosov. Similarly, if \(S\) is a torus and \(\gamma\) is a meridian, the complement of \(\gamma\) is an annulus, which does not admit any pseudo-Anosov map.

Proof of Claim 3.2.: Suppose first that \(\gamma\) is not separating. Any simple closed curve that intersects \(\gamma\) once can be used to create a band sum. Now \(S\) is not a torus, and cannot be a punctured torus either, since then its boundary would be compressible. So, there is a pair \(\alpha,\beta\) of band sums of \(\gamma\) that fill \(S\smallsetminus\gamma\). By a theorem of Thurston [17, III.3 in 13], the composition of twists \(T_{\alpha}\circ T_{\beta}^{-1}\) is pseudo-Anosov. Each twist extends to \(M\), because twists about meridians can be extended to twists along the disks they bound.

Now suppose \(\gamma\) separates \(S\), and suppose that \(R\) is the component of \(T\setminus(\gamma\cup\delta)\) adjacent to \(\gamma\) and \(\delta\). Any curve in \(R\) that bounds a pair of pants with \(\gamma\) and \(\delta\) is also a meridian. Such curves are constructed as the boundary of a neighborhood of the union of \(\gamma,\delta\) and any arc in \(R\) joining the two. Therefore, there is a pair \(\alpha,\beta\) of such curves that fills \(R\). As before, \(f=T_{\alpha}\circ T_{\beta}^{-1}\) is a pseudo-Anosov on \(R\) that extends to \(M\).

However, there was nothing special about \(\delta\) in the above construction. So if there is some (non-peripheral) meridian \(\delta^{\prime}\subset T\) with \(\delta\neq\delta^{\prime}\), there is also a pseudo-Anosov \(f^{\prime}\) on the corresponding surface \(R^{\prime}\), such that \(f^{\prime}\) extends to \(M\). Since \(R\) and \(R^{\prime}\) fill \(T\), [14, Theorem 6.1] says that for large \(i\) the composition \(f^{i}(f^{\prime})^{i}\) is a pseudo-Anosov on \(T\). See Figure 7.

The only case left to consider is when \(\delta\) is the only (non-peripheral) meridian in \(T\). Since new meridians can usually be created by joining \(\delta\) and \(\gamma\) with an arc and taking a regular neighborhood, the only possibility here is that \(T\) is a punctured torus, in which case this construction always just produces \(\gamma\) again. But then \(\gamma\) is a band sum of \(\delta\).

Proof of Proposition 3.1.: When \(S\) is a torus, distinct essential simple closed curves have nonzero algebraic intersection number, so either there are no meridians or there is a single meridian. So, we assume \(S\neq T^{2}\) below. We first claim that if there are two meridians in \(S\), neither of which is a band sum of the other, then \(\mathcal{D}(S,M)\) has infinite diameter in the curve complex.
To see this, suppose \(\gamma_{1},\gamma_{2}\) are such meridians. Claim 3.2 gives two pseudo-Anosov maps \(f_{1},f_{2}\), each defined on the component of \(S\setminus\gamma_{i}\) that contains \(\gamma_{j}\), where \(i\neq j\). Since the component of \(S\setminus\gamma_{1}\) containing \(\gamma_{2}\) and the component of \(S\setminus\gamma_{2}\) containing \(\gamma_{1}\) together fill \(S\), for large \(k\) the composition \(f_{1}^{k}f_{2}^{k}\) is a pseudo-Anosov map on the entire surface \(S\), by [14, Theorem 6.1]. Any such composition extends to \(M\), so maps meridians to meridians. As pseudo-Anosovs act with unbounded orbits on the curve complex [41], this implies that the set of meridians has infinite diameter in \(\mathcal{C}(S)\).

Starting now with the proof of the proposition, suppose there are _no non-separating meridians_ in \(S\). If \(\gamma,\delta\) are distinct (separating) meridians, then an innermost disk surgery produces another separating meridian \(\gamma^{\prime}\) disjoint from \(\gamma\), see [3, Lemma 2.8]. Since no simple closed curve intersects a separating curve exactly once, neither of \(\gamma,\gamma^{\prime}\) is a band sum of the other, so by the previous paragraph, \(\mathcal{D}(S,M)\) has infinite diameter in the curve complex. So, the only other options are if \(\mathcal{D}(S,M)=\emptyset\), or if the only meridian is a single separating curve.

Suppose now that there is a non-separating meridian \(\gamma\) in \(S\). By Claim 3.2, unless the disc set has infinite diameter in the curve complex, any meridian disjoint from \(\gamma\) must be a band sum of \(\gamma\). So, either we are in case (2) of the proposition, or there is some meridian \(\delta\) that intersects \(\gamma\). Any innermost disk surgery of \(\delta\) along \(\gamma\) must produce a band sum \(\beta\) of \(\gamma\). However, this \(\beta\) must then bound a punctured torus \(T\) containing \(\gamma\), and \(\delta\) is then forced to lie inside \(T\), which gives a contradiction, see Figure 8.

## 4. Windows from limit sets

Let \(N=\Gamma\backslash\mathbb{H}^{3}\) be an orientable, geometrically finite hyperbolic \(3\)-manifold, let \(\Lambda\subset\partial\mathbb{H}^{3}\) be the limit set of \(\Gamma\), and let

\[CC(N):=\Gamma\backslash CH(\Lambda)\subset N\]

be the convex core of \(N\). Equip \(\partial CC(N)\) with its intrinsic length metric, which is hyperbolic, see for instance [51, Prop 8.5.1]. Let \(S_{\pm}\) be (possibly degenerate) incompressible subsurfaces with geodesic boundary in \(\partial CC(N)\) that are either equal or are essentially disjoint, as in §2.1. Let

\[\tilde{S}_{\pm}\subset\partial CH(\Lambda)\subset\mathbb{H}^{3}\]

be lifts of \(S_{\pm}\), where if \(S_{-}=S_{+}\), we require that \(\tilde{S}_{-}\neq\tilde{S}_{+}\). Let \(\Gamma_{\pm}\subset\Gamma\) be the stabilizers of \(\tilde{S}_{\pm}\), let \(\Lambda_{\pm}\subset\partial\mathbb{H}^{3}\) be their limit sets and \(\Delta=\Gamma_{+}\cap\Gamma_{-}\). The lift \(\tilde{S}_{\pm}\) is isometric to a convex subset of \(\mathbb{H}^{2}\). Let \(\partial_{\infty}\tilde{S}_{\pm}\subset\partial\mathbb{H}^{2}\) be the boundary of \(\tilde{S}_{\pm}\). By [45, Theorem 5.6], say, the inclusion \(\tilde{S}_{\pm}\hookrightarrow\mathbb{H}^{3}\) extends continuously to a \(\Gamma_{\pm}\)-equivariant quotient map

\[\iota_{\pm}:\partial_{\infty}\tilde{S}_{\pm}\longrightarrow\Lambda_{\pm}\subset\partial\mathbb{H}^{3}.\]

**Theorem 4.1** (Windows from limit sets).: \(\Lambda_{-}\cap\Lambda_{+}=\Lambda_{\Delta}\)_.
Next, suppose \(\Delta\) is nontrivial and is not a cyclic group acting parabolically on either \(\tilde{S}_{-}\) or \(\tilde{S}_{+}\), and let \(\tilde{C}_{\pm}\subset\tilde{S}_{\pm}\) be the convex hulls of the subsets \(\iota_{\pm}^{-1}(\Lambda_{\Delta})\subset\partial_{\infty}\tilde{S}_{\pm}.\) Then \(\tilde{C}_{\pm}\) are \(\Delta\)-invariant, the quotients \(C_{\pm}:=\Delta\backslash\tilde{C}_{\pm}\) are (possibly degenerate) subsurfaces with geodesic boundary in \(S_{\pm}\), and there is an essential homotopy from \(C_{-}\) to \(C_{+}\) in \(CC(N)\) that is the projection of a homotopy from \(\tilde{C}_{-}\) to \(\tilde{C}_{+}\)._

Figure 8. A surgery of a curve \(\delta\) along a non-separating \(\gamma\) cannot produce a curve \(\beta\) that is a band sum of \(\gamma\).

Above, \(C_{\pm}\) are (possibly degenerate) subsurfaces with geodesic boundary in \(S_{\pm}\), as defined in §2.1, but it follows from the above and Theorem 2.8 that there are 'resolutions' (see §2.1) \(C^{\prime}_{\pm}\subset S_{\pm}\) such that \(C^{\prime}_{\pm}\) bound an interval bundle in \(CC(N)\). So informally, the theorem says that the intersection \(\Lambda_{-}\cap\Lambda_{+}\) is exactly the limit set of the fundamental group of some essential interval bundle in \((CC(N),S^{\prime}_{-}\cup S^{\prime}_{+})\).

The term 'window' comes from Thurston [52] and refers to interval bundles; for example, one can 'see through' a trivial interval bundle from one horizontal boundary component to the other. The assumption that \(\Delta\) is not cyclic and acting parabolically on either \(\tilde{S}_{\pm}\) is just for convenience in the statement of the theorem. (Just to be clear, note that an element \(\gamma\in\Delta\) can act parabolically as an isometry of \(\mathbb{H}^{3}\), but hyperbolically on the convex subsets \(\tilde{S}_{\pm}\subset\mathbb{H}^{2}\).) If \(\Delta\) is cyclic and acts parabolically on \(\tilde{S}_{+}\), the subset \(\tilde{C}_{+}\) in the statement of the theorem will be empty. However, using the same proof one can construct a homotopy from a simple closed curve on \(S_{+}\) bounding a cusp of \(S_{+}\) to some simple closed curve on \(S_{-}\).

As mentioned in the introduction, a version of Theorem 4.1 was known to Thurston, see his discussion of the Only Windows Break Theorem in [52]. Precise statements for geometrically finite \(N\) without accidental parabolics were worked out in Lecuire's thesis [33] and by Walsh [53]; note that Walsh uses the conformal boundary instead of the convex core boundary, but the two points of view are equivalent. However, for our applications in this paper, we need to allow accidental parabolics in \(\tilde{S}_{\pm}\), which are not allowed in those theorems. Also, our proof is more direct and natural³ than those in [53, 33], despite the extra complication coming from parabolics.

Footnote 3: In both [53, 33], the authors focus on proving that the boundary components of \(\tilde{C}_{\pm}\) project to simple closed curves in \(S_{\pm}\), but that isn't sufficient to say that \(\tilde{C}_{\pm}\) projects to a subsurface with geodesic boundary in \(S_{\pm}\), which is what they then claim. E.g. in [53] it is stated that under a covering map, the boundary of a subset goes to the boundary of the image, but this isn't true.

Finally, the assumption that \(N\) is geometrically finite is not really essential for the theorem statement. With a bit more work dealing with degenerate ends, one can prove the theorem for all finitely generated \(\Gamma\).
Essentially, the point is to use Canary's covering theorem [9] to show that degenerate NP-ends in the covers \(N_{\pm}:=\Gamma_{\pm}\backslash\mathbb{H}^{3}\) have neighborhoods that embed in \(N\), and then to use this to prove that geodesic rays in \(\mathbb{H}^{3}\) that converge to points in \(\Lambda_{-}\cap\Lambda_{+}\) cannot exit degenerate ends in \(N_{\pm}\). After showing this, the proof of Claim 4.3 extends to the general case. However, we don't have an application for that theorem in mind, so we'll spare the reader the details.

### Proof of Theorem 4.1

We first focus on proving that \(\Lambda_{-}\cap\Lambda_{+}=\Lambda_{\Delta}\). For each \(\xi\in\partial\mathbb{H}^{3}\), let \(\Gamma_{\pm}(\xi)\subset\Gamma_{\pm}\) be the stabilizer of \(\xi\).

**Claim 4.2**.: _Let \(\xi\in\partial\mathbb{H}^{3}\) and suppose that \(\Gamma_{-}(\xi)\) and \(\Gamma_{+}(\xi)\) are both nontrivial. Then they are equal._

Proof.: By the Tameness Theorem [1, 8], we can identify \(CC(N)\) topologically with a subset of a compact 3-manifold with boundary \(M\), where

\[CC(N)\supset int(M),\ \ CC(N)\cap\partial M=\partial CC(N), \tag{1}\]

and where \(\partial CC(N)\) is a collection of essential subsurfaces of \(\partial M\). Let \(\partial_{\chi=0}M\) be the union of all torus boundary components of \(M\), and let \((X,\Sigma)\) be the characteristic submanifold of the pair \((M,S_{-}\cup S_{+}\cup\partial_{\chi=0}M)\), as in §2.6.

Since \(\Gamma_{\pm}\) are both contained in a discrete group \(\Gamma\), both \(\Gamma_{\pm}(\xi)\) are contained in the stabilizer \(\Gamma(\xi)\), which is either infinite cyclic or rank 2 parabolic. Suppose first that \(\Gamma(\xi)\) is rank 2 parabolic. The groups \(\Gamma_{\pm}(\xi)\) are both cyclic, since \(S_{\pm}\) are incompressible hyperbolic surfaces, so their fundamental groups do not contain \(\mathbb{Z}^{2}\) subgroups. So, we can write \(\Gamma_{\pm}(\xi)=\langle\gamma_{\pm}\rangle\) for closed curves \(\gamma_{\pm}\) on \(S_{\pm}\). Both \(\gamma_{\pm}\) are homotopic into some fixed component \(T\subset\partial_{\chi=0}M\), the component whose fundamental group can be conjugated to stabilize \(\xi\). So, there is a component \((X_{0},\Sigma_{0})\subset(X,\Sigma)\) of the characteristic submanifold such that \(\Sigma_{0}\) intersects \(T\) and both \(\gamma_{\pm}\) are homotopic on \(S_{\pm}\) into \(\Sigma_{0}\). Since \(M\not\cong T^{2}\times[0,1]\), the component \((X_{0},\Sigma_{0})\) is either an interval bundle over an annulus (so, a fibered solid torus), or an \(S^{1}\)-bundle pair, so by Fact 2.9, \(X_{0}\) is either a fibered solid torus or a thickened torus. In either case, \(\Sigma_{0}\) intersects each of \(S_{\pm}\) in a fibered annulus, and these annuli are disjoint, so they are parallel on a torus boundary component of \(X_{0}\), implying that \(\gamma_{\pm}\) are homotopic in \(M\), and hence \(\Gamma_{\pm}(\xi)\) are conjugate in \(\Gamma\). But since \(\Gamma_{\pm}(\xi)\) have the same fixed point at infinity, the conjugating element must fix \(\xi\), and therefore commute with the two groups, implying \(\Gamma_{-}(\xi)=\Gamma_{+}(\xi)\).

Now assume \(\Gamma(\xi)\) is cyclic. Pick a basepoint \(p\in S_{-}\), say, and let \(\gamma_{-}\subset S_{-}\) be a loop based at \(p\) representing a generator of \(\Gamma_{-}(\xi)\). Represent a generator of \(\Gamma_{+}(\xi)\) as \(\alpha\cdot\gamma_{+}\cdot\alpha^{-1}\), where \(\alpha\) is an arc from \(p\in S_{-}\) to a point in \(S_{+}\), and \(\gamma_{+}\) is a loop in \(S_{+}\).
Since \(\Gamma_{\pm}\) stabilize distinct components \(\tilde{S}_{\pm}\), the arc \(\alpha\) is not homotopic into \(S_{-}\cup S_{+}\). So, \(\alpha\) is a spanning arc of an essential map from an annulus, where the boundary components of the annulus map to powers of \(\gamma_{\pm}\). It follows that the loops \(\gamma_{\pm}\) are homotopic on \(S_{\pm}\) into \(\Sigma_{0}\) for some component \((X_{0},\Sigma_{0})\subset(X,\Sigma)\). If \(X_{0}\) is an \(I\)-bundle with horizontal boundary \(\Sigma_{0}\), then as \(\gamma_{\pm}\) are not proper powers in \(\pi_{1}S_{\pm}\), they are both primitive in \(\pi_{1}X_{0}\), and hence \(\gamma_{\pm}\) (rather than their powers) are homotopic in \(X_{0}\subset M\). Similarly, if \((X_{0},\Sigma_{0})\) is a fibered solid torus, \(\Sigma_{0}\) is a collection of parallel annuli on \(\partial X_{0}\), so since \(\gamma_{\pm}\) are primitive in \(\pi_{1}S_{\pm}\), they are homotopic on \(S_{\pm}\) to _simple_ closed curves in these annuli, and hence are homotopic to each other in \(X_{0}\). It follows that there are generators for \(\Gamma_{\pm}(\xi)\) that are conjugate in \(\Gamma\), but since these generators both fix \(\xi\), they are equal.

**Claim 4.3**.: _For all \(\xi\in\Lambda_{-}\cap\Lambda_{+}\), we have \(\Gamma_{-}(\xi)=\Gamma_{+}(\xi)\). Moreover,_

\[\Lambda_{\Delta}=\Lambda_{-}\cap\Lambda_{+}.\]

Proof.: Let \(N_{\pm}\subset\mathbb{H}^{3}\) be the \(1\)-neighborhood of the convex hull of \(\Lambda_{\pm}\), and for small \(\epsilon>0\), let \(T_{\pm}(\epsilon)\subset\mathbb{H}^{3}\) be the set of all points that are translated less than \(\epsilon\) by some parabolic element of \(\Gamma_{\pm}\). If \(\epsilon\) is at most the Margulis constant \(\epsilon_{0}\), then \(T_{\pm}(\epsilon)\) is a disjoint union of horoballs in \(\mathbb{H}^{3}\). The sets \(N_{\pm}\) and \(T_{\pm}(\epsilon)\) are \(\Gamma_{\pm}\) invariant.

Since \(\Gamma_{\pm}\) is a finitely generated subgroup of \(\Gamma\), which is geometrically finite, \(\Gamma_{\pm}\) is geometrically finite as well by [9]. So, the action of \(\Gamma_{\pm}\) on \(N_{\pm}\setminus T_{\pm}(\epsilon)\) is cocompact, see e.g. Theorem 3.7 in [43], implying that either the function

\[D_{+}:\mathbb{H}^{3}\longrightarrow\mathbb{R}_{>0},\ \ D_{+}(x)=\min\{d(x,\gamma(x))\ |\ \gamma\in\Gamma_{+}\ \text{loxodromic}\}\]

is bounded above on \(N_{+}\setminus T_{+}(\epsilon)\) by some \(B(\epsilon)>0\), or \(\Gamma_{+}\) is elementary parabolic. A similar statement holds for \(-\) instead of \(+\). With \(\epsilon_{0}\) the Margulis constant, the Margulis Lemma then implies that _if \(\epsilon>0\) is sufficiently small with respect to \(B(\epsilon_{0})\), and \(\Gamma_{+}\) is not elementary parabolic, then_

\[T_{-}(\epsilon)\cap N_{+}\subset T_{+}(\epsilon_{0}), \tag{2}\]

and similarly with \(-,+\) exchanged. Indeed, if not then we have (say) a point \(p\in\mathbb{H}^{3}\) that is translated by less than \(\epsilon\) by some parabolic \(\gamma_{-}\in\Gamma_{-}\) and by at most \(B\) by some loxodromic \(\gamma_{+}\in\Gamma_{+}\). If \(\epsilon\) is small with respect to \(B\), then both \(\gamma_{-}\) and \([\gamma_{+},\gamma_{-}]\) translate \(p\) by at most \(\epsilon_{0}\), so they generate an elementary discrete group by the Margulis lemma applied to \(\Gamma\), implying that \(\gamma_{+}\) fixes the fixed point of \(\gamma_{-}\), which contradicts the discreteness of \(\Gamma\).

Fix \(\xi\in\Lambda_{+}\cap\Lambda_{-}\). We claim that \(\Gamma_{-}(\xi)=\Gamma_{+}(\xi)\).
By Claim 4.2 it suffices to show that whenever \(\Gamma_{-}(\xi)\) is nontrivial, say, so is \(\Gamma_{+}(\xi)\).

First, assume that \(\Gamma_{-}(\xi)\) is elementary parabolic. We claim that \(\Gamma_{+}(\xi)\) is elementary parabolic as well. Assume not, and let \(\alpha\) be a geodesic ray in \(\mathbb{H}^{3}\) converging to \(\xi\). Then \(\alpha(t)\) lies in \(T_{-}(\epsilon)\cap N_{+}\) for large \(t\), and therefore in \(T_{+}(\epsilon_{0})\) for large \(t\) by (2), which implies \(\xi\) is a parabolic fixed point of \(\Gamma_{+}\) as well, a contradiction.

Next, suppose that \(\Gamma_{-}(\xi)\) is elementary loxodromic. If \(\xi\) is a parabolic fixed point of \(\Gamma_{+}\), we are done, so let's assume this isn't the case. Let \(\alpha\) be the axis of \(\Gamma_{-}(\xi)\), parametrized so \(\alpha(t)\to\xi\) as \(t\to\infty\). Since \(\xi\) is not a \(\Gamma_{+}\) parabolic fixed point, there are \(t_{i}\to\infty\) such that \(\alpha(t_{i})\not\in T_{+}(\epsilon)\) for all \(i\). Since the action of \(\Gamma_{+}\) on \(N_{+}\setminus T_{+}(\epsilon)\) is cocompact, if \(p\in\mathbb{H}^{3}\) is a fixed basepoint, there are elements \(\gamma_{i}^{+}\in\Gamma_{+}\) such that \(\sup_{i}d(\gamma_{i}^{+}(p),\alpha(t_{i}))<\infty\). Since the action of \(\Gamma_{-}(\xi)\) on \(\alpha\) is cocompact, there are then elements \(\gamma_{i}^{-}\in\Gamma_{-}(\xi)\) with

\[\sup_{i}d(\gamma_{i}^{+}(p),\gamma_{i}^{-}(p))<\infty.\]

By discreteness of \(\Gamma\), after passing to a subsequence we can assume \(\gamma_{i}^{+}=\gamma_{i}^{-}\circ g\) for some fixed \(g\in\Gamma\). Hence, for all \(i\) we have

\[\gamma_{i}^{+}\circ(\gamma_{1}^{+})^{-1}=\gamma_{i}^{-}\circ(\gamma_{1}^{-})^{-1}\in\Gamma_{+}\cap(\Gamma_{-}(\xi))\subset\Gamma_{+}(\xi),\]

so we are done.

Finally, we want to show that \(\Lambda_{-}\cap\Lambda_{+}=\Lambda_{\Delta}\). The inclusion \(\Lambda_{\Delta}\subset\Lambda_{-}\cap\Lambda_{+}\) is clear. So, take \(\xi\in\Lambda_{-}\cap\Lambda_{+}\). We can assume that \(\Gamma_{\pm}(\xi)=1\), since otherwise we're in the cases handled above. Let \(\alpha\) be a geodesic ray in \(\mathbb{H}^{3}\) converging to \(\xi\). As in the previous case, since \(\xi\) is not a parabolic fixed point of \(\Gamma_{+}\), there are \(t_{i}\to\infty\) such that \(\alpha(t_{i})\not\in T_{+}(\epsilon_{0})\) for all \(i\). Discarding finitely many \(i\), we have \(\alpha(t_{i})\in N_{+}\), so it follows from (2) that \(\alpha(t_{i})\not\in T_{-}(\epsilon)\). Fixing a base point \(p\in\mathbb{H}^{3}\), as \(\Gamma_{-}\) acts cocompactly on \(N_{-}\setminus T_{-}(\epsilon)\) and \(\Gamma_{+}\) acts cocompactly on \(N_{+}\setminus T_{+}(\epsilon_{0})\), there are elements \(\gamma_{i}^{\pm}\in\Gamma_{\pm}\) such that

\[\sup_{i}d(\gamma_{i}^{\pm}(p),\alpha(t_{i}))<\infty.\]

So passing to a subsequence, \(\gamma_{i}^{+}=\gamma_{i}^{-}\circ g\) for some fixed \(g\in\Gamma\), and then

\[\gamma_{i}^{+}\circ(\gamma_{1}^{+})^{-1}=\gamma_{i}^{-}\circ(\gamma_{1}^{-})^{-1}\in\Gamma_{+}\cap\Gamma_{-}=\Delta\]

for all \(i\). But applying this sequence to \(p\) and letting \(i\to\infty\) gives a sequence of points in the orbit \(\Delta(p)\) that converge to \(\xi\), so \(\xi\in\Lambda_{\Delta}\).

Now assume that \(\Delta\neq 1\). We want to construct the subsurfaces \(C_{\pm}\) and the essential homotopy described in the statement of the theorem. After an isotopy on \(\partial CC(N)\), let's assume that \(S_{\pm}\) is a subsurface of \(\partial CC(N)\) with geodesic boundary.
Here, we allow degenerate subsurfaces, where \(S_{\pm}\) is a simple closed geodesic, as well as subsurfaces where only the interior is embedded and two boundary components can coincide. As \(\partial CC(N)\) may have cusps, we also must allow \(S_{\pm}\) to be noncompact with finite volume, rather than compact.

Recall that \(\tilde{S}_{\pm}\) is isometric to a convex subset of \(\mathbb{H}^{2}\), and that if \(\partial_{\infty}\tilde{S}_{\pm}\subset\partial\mathbb{H}^{2}\) is the boundary of \(\tilde{S}_{\pm}\), the inclusion \(\tilde{S}_{\pm}\hookrightarrow\mathbb{H}^{3}\) extends continuously to a \(\Gamma_{\pm}\)-equivariant quotient map

\[\iota_{\pm}:\partial_{\infty}\tilde{S}_{\pm}\longrightarrow\Lambda_{\pm}\subset\partial\mathbb{H}^{3}.\]

Moreover, if \(\xi,\xi^{\prime}\in\partial_{\infty}\tilde{S}_{+}\), say, we have \(\iota_{+}(\xi)=\iota_{+}(\xi^{\prime})\) if and only if there is an element \(\gamma\in\Gamma_{+}\) that acts hyperbolically on \(\tilde{S}_{+}\cup\partial_{\infty}\tilde{S}_{+}\) with fixed points \(\xi,\xi^{\prime}\in\partial_{\infty}\tilde{S}_{+}\), but acts parabolically on \(\mathbb{H}^{3}\). By discreteness of the action \(\Gamma_{+}\curvearrowright\tilde{S}_{+}\), each \(\xi\in\partial_{\infty}\tilde{S}_{+}\) has the same image under \(\iota_{+}\) as _at most one_ other \(\xi^{\prime}\). A similar statement holds with \(-\) instead of \(+\). All this is a consequence (for instance) of Bowditch's theory of the boundary of a relatively hyperbolic group [6]: since the action \(\Gamma_{\pm}\curvearrowright\mathbb{H}^{3}\) is geometrically finite, \(\Lambda_{\pm}\) is a model for the Bowditch boundary of the group \(\Gamma_{\pm}\) relative to its maximal parabolic subgroups, so the statement above follows from Theorem 1.3 of [37], say⁴.

Footnote 4: See also Theorem 5.6 of [45], which says that there is a continuous equivariant extension \(\iota_{\pm}\) of the inclusion \(\tilde{S}_{\pm}\hookrightarrow\mathbb{H}^{3}\) as above. This theorem is stated in a much more general setting, though, and our statement is a trivial case.

Consider the preimage \(\iota_{\pm}^{-1}(\Lambda_{\Delta})\subset\partial_{\infty}\tilde{S}_{\pm}\). Since \(\Delta\neq 1\) and is not cyclic parabolic, \(\iota_{\pm}^{-1}(\Lambda_{\Delta})\) has at least two points, so it has a well-defined convex hull \(\tilde{C}_{\pm}\subset\tilde{S}_{\pm}\).

**Claim 4.4** (Convex hulls).: _One of the following holds._

1. \(\Delta\) _is cyclic and acts hyperbolically on_ \(\tilde{S}_{+}\)_. The convex hull_ \(\tilde{C}_{+}\) _is its geodesic axis, which is precisely invariant under_ \(\Delta\subset\Gamma\)_, so that the quotient_ \(C_{+}:=\Delta\backslash\tilde{C}_{+}\) _embeds as a simple closed geodesic in_ \(S_{+}\)_._
2. \(\tilde{C}_{+}\) _is a subsurface of_ \(\tilde{S}_{+}\) _with geodesic boundary, the interior_ \(int(\tilde{C}_{+})\) _is precisely invariant under_ \(\Delta\subset\Gamma\)_, and the quotient_ \(C_{+}:=\Delta\backslash\tilde{C}_{+}\) _is a generalized subsurface of_ \(S_{+}\) _with compact geodesic boundary._

_A similar statement holds with \(-\) instead of \(+\)._

Proof.: Let's work with \(+\), for concreteness. If \(g\in\Delta\), then \(g(\Lambda_{\Delta})=\Lambda_{\Delta}\), so \(g\) leaves \(\iota_{+}^{-1}(\Lambda_{\Delta})\) invariant by equivariance of \(\iota_{+}\). Hence \(g\) leaves \(\tilde{C}_{+}\) invariant.

Let's suppose first that \(\tilde{C}_{+}\) has nonempty interior, since that is the more interesting case.
We'll address the case that \(\tilde{C}_{+}\) is a biinfinite geodesic at the end of the proof. Let \(g\in\Gamma_{+}\setminus\Delta\). We want to show

\[g(int(\tilde{C}_{+}))\cap int(\tilde{C}_{+})=\emptyset.\]

Assume this is not the case. By Claim 4.3, the fixed points of \(g\) in \(\partial_{\infty}\tilde{S}_{+}\) lie outside \(\iota_{+}^{-1}(\Lambda_{\Delta})\). So, we cannot have \(g(\tilde{C}_{+})\subset\tilde{C}_{+}\), as then we'd have \(g^{n}(\tilde{C}_{+})\subset\tilde{C}_{+}\) for all \(n\), contradicting that points of \(\tilde{S}_{+}\) converge to the fixed points of \(g\) under iteration. Considering backwards iterates, we also cannot have \(\tilde{C}_{+}\subset g(\tilde{C}_{+})\). Therefore, \(\partial\tilde{C}_{+}\) and \(\partial g(\tilde{C}_{+})\) intersect transversely.

Since \(\tilde{C}_{+}\) has nonempty interior, \(\Delta\) is nonelementary, and therefore the fixed points of loxodromic isometries of \(\Delta\) are dense in \(\Lambda_{\Delta}\). Loxodromic fixed points of \(\Delta\) are in particular _not_ parabolic fixed points in \(\Gamma_{\pm}\), so any biinfinite geodesic in \(\tilde{C}_{+}\) is a limit of biinfinite geodesics in \(\tilde{C}_{+}\) whose endpoints are _not_ fixed points of parabolic isometries of \(\Gamma_{\pm}\). By the previous paragraph, there are then biinfinite geodesics \(\alpha_{+},\beta_{+}\) in \(\tilde{C}_{+}\) such that \(g(\alpha_{+})\) and \(\beta_{+}\) intersect transversely, and where the endpoints of \(\alpha_{+},\beta_{+}\) project under \(\iota_{+}\) to points \(\xi_{\alpha},\xi^{\prime}_{\alpha},\xi_{\beta},\xi^{\prime}_{\beta}\in\Lambda_{\Delta}\) that are not parabolic fixed points in \(\Gamma_{\pm}\).

Let \(\alpha_{-}\) be the geodesic in \(\tilde{S}_{-}\) whose endpoints in \(\partial_{\infty}\tilde{S}_{-}\) map to the points \(\xi_{\alpha},\xi^{\prime}_{\alpha}\) under \(\iota_{-}\). Define \(\beta_{-}\) similarly. Then

\[\alpha:=\alpha_{+}\cup\{\xi_{\alpha},\xi^{\prime}_{\alpha}\}\cup\alpha_{-},\ \ \beta:=\beta_{+}\cup\{\xi_{\beta},\xi^{\prime}_{\beta}\}\cup\beta_{-}\]

are two _simple_ closed curves on the closure \(cl(\partial CH(\Lambda_{\Gamma}))\subset\mathbb{H}^{3}\cup\partial\mathbb{H}^{3}\), which is homeomorphic to a sphere. For instance, the arcs \(\alpha_{\pm}\) are disjoint and \(\xi_{\alpha}\neq\xi_{\alpha}^{\prime}\), since the endpoints of \(\alpha_{+}\) are not parabolic fixed points.

Now consider how the two simple closed curves \(g(\alpha),\beta\) intersect. The arcs \(\beta_{-}\) and \(g(\alpha_{+})\) are disjoint since \(\tilde{S}_{-}\neq\tilde{S}_{+}\). The arcs \(g(\alpha_{-}),\beta_{-}\) are disjoint since \(g\not\in\Gamma_{-}\) and hence \(g(\alpha_{-})\) lies on a different translate of \(\tilde{S}_{-}\) than \(\beta_{-}\subset\tilde{S}_{-}\). Moreover, since \(g(\alpha_{+}),\beta_{+}\) intersect transversely in \(\tilde{S}_{+}\), the endpoints of \(g(\alpha_{+})\) and \(\beta_{+}\) are distinct in \(\partial_{\infty}\tilde{S}_{+}\), and since none of them are parabolic fixed points, the points \(g(\xi_{\alpha}),g(\xi_{\alpha}^{\prime}),\xi_{\beta},\xi_{\beta}^{\prime}\) are all distinct. But by assumption, \(g(\alpha_{+})\) intersects \(\beta_{+}\) transversely in a single point (distinct geodesics in \(\tilde{S}_{+}\) intersect at most once)! This shows that \(g(\alpha)\) and \(\beta\) intersect exactly once, transversely, which is a contradiction.

By precise invariance of the action on the interior, the quotient \(int(C_{+})=\Delta\backslash int(\tilde{C}_{+})\) embeds in the finite volume surface \(S_{+}\), so \(C_{+}\) has finite volume itself.
So if \(\partial C_{+}\) is noncompact, \(C_{+}\) must have two noncompact boundary components that are asymptotic. Lifting, we get two boundary components \(\beta_{1},\beta_{2}\) of \(\tilde{C}_{+}\) that are asymptotic. Since \(\tilde{C}_{+}\) is convex, it is contained in the subset of \(\mathbb{H}^{2}\) bounded by \(\beta_{1},\beta_{2}\), and hence the common endpoint of \(\beta_{1},\beta_{2}\) is an isolated point of \(\Lambda_{\Delta}\), which is a contradiction since \(\Delta\) is not elementary.

The case when \(\tilde{C}_{+}\) is a biinfinite geodesic is similar. Here, \(\Delta\) must be cyclic, acting on \(\tilde{S}_{+}\) with axis \(\tilde{C}_{+}\), and acting either parabolically or loxodromically on \(\mathbb{H}^{3}\). In the parabolic case, \(\tilde{C}_{+}\) compactifies to a simple closed curve on the sphere \(cl(\partial CH(\Lambda_{\Gamma}))\subset\mathbb{H}^{3}\cup\partial\mathbb{H}^{3}\), so no translate \(g(\tilde{C}_{+}),g\in\Gamma_{+}\), can intersect \(\tilde{C}_{+}\) transversely, since if it did we'd get two simple closed curves on the sphere that intersect once. In the loxodromic case, we get a similar contradiction by looking at the simple closed curve \(cl(\tilde{C}_{+}\cup\tilde{C}_{-})\subset cl(\partial\tilde{M})\) and its \(g\)-image. So, \(\tilde{C}_{+}\) is precisely invariant under \(\Delta\subset\Gamma\). The quotient \(C_{+}:=\Delta\backslash\tilde{C}_{+}\) is obviously compact, and is therefore a simple closed geodesic in \(S_{+}\).

We claim that \(C_{-}\) and \(C_{+}\) are homeomorphic. If \(C_{\pm}\) are isotopic in \(\partial CC(N)\) this is clear, and otherwise we argue as follows. The subgroups \(\pi_{1}C_{\pm}\) are both represented by \(\Delta\), so are conjugate in \(\Gamma\). The fact that every curve in \(C_{-}\) is homotopic to a curve in \(C_{+}\) (and vice versa) implies that \(C_{\pm}\) are isotopic to subsurfaces \(C_{\pm}^{\prime}\subset\Sigma\) in the boundary \(\Sigma\) of a component \((X,\Sigma)\) of the characteristic submanifold of \((CC(N),S_{-}\cup S_{+})\), see §2.6, and that even within \(X\) every closed curve in \(C_{-}^{\prime}\) is homotopic to a closed curve in \(C_{+}^{\prime}\), and vice versa. When \(X\) is a solid torus or thickened torus, \(C_{\pm}^{\prime}\) are annuli, while if \(X\) is an interval bundle, \(C_{\pm}^{\prime}\) bound a vertical interval bundle in \(X\), and are homeomorphic.

So, let \(f:C_{-}\longrightarrow C_{+}\) be a homeomorphism, lift \(f\) to a \(\Delta\)-equivariant homeomorphism \(\tilde{f}:\tilde{C}_{-}\longrightarrow\tilde{C}_{+}\) and let

\[F:\tilde{C}_{-}\times[0,1]\longrightarrow CH(\Lambda)\]

where \(F(x,\cdot)\) parametrizes the geodesic from \(x\) to \(\tilde{f}(x)\). Then \(F\) is \(\Delta\)-equivariant, and projects to an essential homotopy from \(C_{-}\) to \(C_{+}\), as desired.

### An annulus theorem for laminations

Suppose \(M\) is a compact, orientable, hyperbolizable \(3\)-manifold with nonempty boundary and let \(S=\partial_{\chi<0}M\) be the union of all non-torus boundary components of \(M\). When \(\alpha,\beta\subset S\) are disjoint simple closed curves that are essential and homotopic in \(M\), but not homotopic in \(S\), the Annulus Theorem says that there is an essential embedded annulus \(A\subset M\) with \(\partial A=\alpha\cup\beta\), see Scott [50]. More generally, equip \(S\) with an arbitrary hyperbolic metric.
An _essential homotopy_ between two geodesic laminations \(\lambda_{\pm}\) on \(S\) is a map

\[H:(\lambda\times[-1,1],\lambda\times\{-1,1\})\longrightarrow(M,S)\]

where \(\lambda\) is a lamination, such that \(H\) maps \(\lambda\times\{\pm 1\}\) homeomorphically onto \(\lambda_{\pm}\), and where \(H\) is not homotopic rel \(\lambda\times\{-1,1\}\) into \(\partial M\). Here is an 'Annulus Theorem' for minimal laminations.

**Proposition 4.5** (An annulus theorem for laminations).: _Let \(\lambda_{-},\lambda_{+}\) be two minimal geodesic laminations on \(S\) that are either disjoint or equal, and assume that \(S(\lambda_{\pm})\) are incompressible in \(M\). If \(\lambda_{\pm}\) are essentially homotopic in \((M,S)\), there is an essential interval bundle \((B,\partial_{H}B)\subset(M,S)\) such that \(\lambda_{\pm}\) fill \(\partial_{H}B\), and where \(\lambda_{\pm}\) are essentially homotopic through \(B\), as in §2.9._

Here, \(S(\lambda_{\pm})\) are the subsurfaces with geodesic boundary filled by \(\lambda_{\pm}\), as in §2.8. The assumption that they are incompressible generalizes the assumption that \(\alpha,\beta\) are homotopically essential in \(M\) in the Annulus Theorem.

Proof.: Identify \(M\setminus\partial_{\chi=0}M\) with the convex core of a geometrically finite hyperbolic \(3\)-manifold. Set \(S_{\pm}:=S(\lambda_{\pm})\). Lift the essential homotopy from \(\lambda_{-}\) to \(\lambda_{+}\) to a homotopy from lifts \(\tilde{\lambda}_{-}\subset\tilde{S}_{-}\) to \(\tilde{\lambda}_{+}\subset\tilde{S}_{+}\) in \(\mathbb{H}^{3}\). Under the homotopy, which has bounded tracks, corresponding leaves of \(\tilde{\lambda}_{\pm}\) have the same endpoints in \(\partial\mathbb{H}^{3}\). The endpoints of \(\tilde{\lambda}_{\pm}\) are dense in \(\partial_{\infty}\tilde{S}_{\pm}\), so this means that the subsurfaces \(C_{\pm}\subset S_{\pm}\) constructed in Theorem 4.1 are just \(C_{\pm}=S_{\pm}\). Passing to disjoint or equal resolutions \(S^{\prime}_{\pm}\) of \(S_{\pm}\) and applying Theorem 2.8 gives an interval bundle \(B\) where \(\lambda_{\pm}\) fill \(\partial_{H}B=S^{\prime}_{-}\cup S^{\prime}_{+}\).

We claim that \(\lambda_{\pm}\) are essentially homotopic through \(B\). By Fact 2.16, it suffices to show that if \(\sigma\) is the canonical involution of \(B\), as described in §2.5, then \(\sigma(\lambda_{\pm})\) is isotopic to \(\lambda_{\mp}\) on \(S^{\prime}_{\mp}\). Using the notation of Theorem 4.1, \(\sigma\) lifts to a \(\Delta\)-equivariant involution \(\tilde{\sigma}\) of \(\tilde{B}\) that exchanges \(\tilde{S}^{\prime}_{-}\) and \(\tilde{S}^{\prime}_{+}\), where here \(\Delta=\Gamma_{-}\cap\Gamma_{+}\). By equivariance, \(\tilde{\sigma}\) extends continuously to the identity on \(\Lambda_{\Delta}\), so \(\tilde{\sigma}(\tilde{\lambda}_{-})\) is a lamination on \(\tilde{S}_{+}\) with all the same endpoints at infinity as \(\tilde{\lambda}_{+}\), and hence equals \(\tilde{\lambda}_{+}\). The claim follows.

## 5. Laminations on the boundary

Suppose that \(M\) is a compact, orientable \(3\)-manifold with hyperbolizable interior and nonempty boundary \(\partial M\). Equip \(M\) with an arbitrary Riemannian metric and lift it to a Riemannian metric on the universal cover \(\tilde{M}\).
As in the introduction, a biinfinite path or ray \(h\) on \(\partial\tilde{M}\) is called _homoclinic_ if there are points \(s^{i},t^{i}\) with \(|s^{i}-t^{i}|\to\infty\) such that \[\sup_{i}d_{\tilde{M}}(h(s^{i}),h(t^{i}))<\infty.\] Two rays \(h_{+},h_{-}\) on \(\partial\tilde{M}\) are called _mutually homoclinic_ if there are parameters \(s^{i}_{\pm}\to\infty\) such that \[\sup_{i}d_{\tilde{M}}(h_{+}(s^{i}_{+}),h_{-}(s^{i}_{-}))<\infty.\] Here, a _ray_ is a continuous map from an interval \([a,\infty)\), and a _biinfinite path_ is a continuous map from \(\mathbb{R}\). We will also call rays and paths on \(\partial M\) (mutually) homoclinic if they have lifts that are (mutually) homoclinic paths on \(\partial\tilde{M}\). We refer the reader to §5.1 for some comments on alternate definitions of homoclinic that exist in the literature.

Note that if we divide a biinfinite homoclinic path into two rays, then either one of the two rays is itself homoclinic, or the two rays are mutually homoclinic. Also, these definitions are metric independent: since \(M\) is compact, any two Riemannian metrics on \(M\) lift to quasi-isometric metrics on \(\tilde{M}\), and a path is homoclinic or mutually homoclinic with respect to one metric if and only if it is with respect to the other metric. Here are some examples.

1. Suppose that \(D\) is a properly embedded disc in \(M\), and \(h:\mathbb{R}\longrightarrow\partial M\) is a path that covers \(\partial D\subset\partial M\). Then \(h\) is homoclinic: indeed, \(D\) lifts homeomorphically to \(\tilde{M}\), so \(h\) lifts to a path in \(\tilde{M}\) with compact image.
2. Suppose that \(\phi:(S^{1}\times[0,1],S^{1}\times\{0,1\})\longrightarrow(M,\partial M)\) is an essential embedded annulus. Then rays covering the two boundary components of the annulus are mutually homoclinic: indeed, \(\phi\) lifts to \[\tilde{\phi}:\mathbb{R}\times[0,1]\longrightarrow\tilde{M},\] and we have \(\sup_{t\in\mathbb{R}}d(\tilde{\phi}(t,0),\tilde{\phi}(t,1))<\infty\), so restricting to \(t\in[0,\infty)\) we get two mutually homoclinic rays in \(\tilde{M}\).

It will be convenient below to work with a particular choice of metric on \(M\).

**Example 5.1** (An explicit metric on \(M\)).: Let \(\partial_{\chi<0}M\) be the union of all components of \(\partial M\) that have negative Euler characteristic, i.e. are not tori. Thurston's Haken hyperbolization theorem, see [28], implies that there is a hyperbolic 3-manifold \(N=\mathbb{H}^{3}/\Gamma\) homeomorphic to the interior of \(M\), where every component of \(\partial_{\chi<0}M\) corresponds to a convex cocompact end of \(N\). A torus \(T\subset\partial M\), on the other hand, determines a cusp of \(N\). So, in other words, \(N\) is 'minimally parabolic': the only parabolics come from torus boundary components of \(M\). For each \(T\), pick an open neighborhood \(N_{T}\subset N\) of the associated cusp that is the quotient of a horoball in \(\mathbb{H}^{3}\) by a \(\mathbb{Z}^{2}\)-action. Then \[M\cong CC(N)\ \setminus\bigcup_{\text{tori }T\subset\partial M}N_{T}, \tag{3}\] and we will identify \(M\) with the right-hand side everywhere below. Then

* \(\tilde{M}\subset\mathbb{H}^{3}\) is obtained from the convex hull \(CH(\Gamma)\subset\mathbb{H}^{3}\) of the limit set of \(\Gamma\) by deleting an equivariant collection of horoballs, and
* the path metric induced on \(\partial_{\chi<0}M\) is hyperbolic [51, Proposition 8.5.1], and the path metric induced on every torus \(T\subset\partial M\) is Euclidean.
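To spell out the metric independence observed before Example 5.1 (a routine check; the constants \(K\geq 1\), \(C\geq 0\) below are generic quasi-isometry constants, not notation used elsewhere in the paper): if \(d_{1},d_{2}\) are the lifts of two Riemannian metrics on \(M\), and the identity map \((\tilde{M},d_{1})\longrightarrow(\tilde{M},d_{2})\) is a \((K,C)\)-quasi-isometry, then for any path \(h\) and parameters \(s^{i},t^{i}\) as in the definition of homoclinic,
\[\sup_{i}d_{2}\big(h(s^{i}),h(t^{i})\big)\ \leq\ K\,\sup_{i}d_{1}\big(h(s^{i}),h(t^{i})\big)+C\ <\ \infty,\]
so \(h\) is homoclinic with respect to \(d_{2}\) whenever it is with respect to \(d_{1}\), and symmetrically; the same estimate applies verbatim to mutually homoclinic rays.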
We now specialize to the case of paths that are _geodesics_ on \(\partial M\). Recall from Example (1) above that one can make homoclinic paths by running around the boundaries of disks in \(\partial M\). The following shows that discs are essential in such constructions.

**Fact 5.2**.: _Suppose that \(S\subset\partial M\) is an essential subsurface. Then the inclusion of any lift \(\tilde{S}\subset\partial\tilde{M}\) is a quasi-isometric embedding into \(\tilde{M}\). Moreover, if \(S\) is incompressible then any pair of mutually homoclinic infinite rays on \(S\) are asymptotic, and no biinfinite geodesic \(\gamma\) in \(S\) is homoclinic._

Proof.: Think of \(M\) as embedded in a complete hyperbolic 3-manifold \(N\) as in (3), write \(N=\Gamma\backslash\mathbb{H}^{3}\), and let \(\tilde{M}\subset\mathbb{H}^{3}\) be the preimage of \(M\), so that \(\tilde{M}\) is obtained from the convex hull \(CH(\Gamma)\) by deleting an equivariant collection of horoballs. Fix a subgroup \(\Delta<\Gamma\) that represents the conjugacy class associated to the image of the fundamental group of \(S\subset M\). To show that \[\tilde{S}\hookrightarrow\partial\tilde{M}\] is a quasi-isometric embedding, it suffices to show that \(\Delta\) is undistorted in \(\Gamma\). But since \(M\) is geometrically finite and \(\Delta\) is finitely generated, it follows from a result of Thurston (see Proposition 7.1 in [46]) that the group \(\Delta\) is geometrically finite, and geometrically finite subgroups of (say, geometrically finite) hyperbolic 3-manifold groups are undistorted, c.f. Corollary 1.6 in [23].

For the 'moreover' statement, assume \(S\) is incompressible, so that \(\tilde{S}\) is simply connected, and consider a pair of infinite rays \[h^{\pm}:\mathbb{R}^{+}\to\tilde{S}\] that are geodesic for the induced hyperbolic metric and \(t_{n}^{\pm}\to+\infty\) such that \(d_{\tilde{M}}(h^{+}(t_{n}^{+}),h^{-}(t_{n}^{-}))\) is bounded. Since \(\tilde{S}\subset\partial\tilde{M}\) is a quasi-isometric embedding, \(d_{\tilde{S}}(h^{+}(t_{n}^{+}),h^{-}(t_{n}^{-}))\) is also bounded. Since \(\tilde{S}\) is simply connected and hyperbolic, this is possible only if \(h^{+}\) and \(h^{-}\) are asymptotic on \(\tilde{S}\), and hence their projections are asymptotic on \(S\). Taking \(h^{+}=h^{-}\), we get that a geodesic ray on \(\tilde{S}\) cannot be homoclinic. Taking \(h^{+}\neq h^{-}\), we get that any pair of mutually homoclinic infinite rays on \(S\) are asymptotic. In particular, the two rays obtained by splitting a biinfinite homoclinic geodesic would have to be asymptotic, and this is impossible for a geodesic in a simply connected hyperbolic surface.

Example (2) above shows how embedded annuli in \(M\) can be used to create mutually homoclinic rays. In analogy to Fact 5.2, one can show that annuli are essential in such a construction. For instance, suppose \(M\) is acylindrical. Then work of Thurston, see [28] and more generally [35], says that we can choose the hyperbolic manifold \(N\) so that \(\partial CC(N)\cong\partial_{\chi<0}M\) is totally geodesic. Hence, the preimage of \(\partial_{\chi<0}M\) in \(\tilde{M}\subset\mathbb{H}^{3}\) is a collection of hyperbolic planes. Any geodesic ray on \(\partial_{\chi<0}M\) then lifts to a geodesic in \(\mathbb{H}^{3}\), and two geodesic rays on \(\partial_{\chi<0}M\) are mutually homoclinic if and only if their geodesic lifts are asymptotic in \(\mathbb{H}^{3}\), which implies that they are asymptotic on \(\partial_{\chi<0}M\).
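To make the last equivalence concrete (the display below is a standard hyperbolic geometry estimate, not a computation taken from the text): if \(r_{\pm}\) are geodesic rays on \(\partial_{\chi<0}M\) with mutually homoclinic geodesic lifts \(\tilde{r}_{\pm}\subset\mathbb{H}^{3}\) and parameters \(s^{i}_{\pm}\to\infty\), then since path distances in \(\tilde{M}\) dominate distances in \(\mathbb{H}^{3}\),
\[\sup_{i}d_{\mathbb{H}^{3}}\big(\tilde{r}_{+}(s^{i}_{+}),\tilde{r}_{-}(s^{i}_{-})\big)\ \leq\ \sup_{i}d_{\tilde{M}}\big(\tilde{r}_{+}(s^{i}_{+}),\tilde{r}_{-}(s^{i}_{-})\big)\ <\ \infty,\]
and two geodesic rays in \(\mathbb{H}^{3}\) containing sequences of points that go to infinity while staying a bounded distance apart must share their endpoint in \(\partial\mathbb{H}^{3}\), i.e. be asymptotic.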
### Alternate definitions of homoclinic

Above, we defined a path \[h:I\longrightarrow\partial\tilde{M}\] to be homoclinic if there are \(s^{i},t^{i}\in I\) with \(|s^{i}-t^{i}|\to\infty\) such that \[\sup_{i}d_{\tilde{M}}(h(s^{i}),h(t^{i}))<\infty.\] Some other papers use slight variants of this definition. For example, the definition of _(faiblement) homoclinique_ in Otal's thesis [49] is almost the same as what is written above, except that distances are computed _in the intrinsic metric on \(\partial\tilde{M}\)_ instead of in \(\tilde{M}\). This is equivalent to our definition, though: the nonobvious direction follows from Fact 5.2, which says that boundary components of \(\tilde{M}\) quasi-isometrically embed in \(\tilde{M}\). And in the definition of _homoclinique_ in Lecuire's earlier work [35], distances are computed not in \(\tilde{M}\), but within \(\mathbb{H}^{3}\), with respect to a given identification of \(\tilde{M}\) with the convex core of some minimally parabolic hyperbolic \(3\)-manifold, as discussed in Example 5.1. When \(M\) has tori in its boundary, the inclusion \(\tilde{M}\hookrightarrow\mathbb{H}^{3}\) is not a quasi-isometric embedding, but the following lemma says that \(d_{\mathbb{H}^{3}}\) is bounded if and only if \(d_{\tilde{M}}\) is bounded, so Lecuire's earlier definition is equivalent to ours.

**Lemma 5.3**.: _Whenever \(x,y\in\tilde{M}\), we have_ \[d_{\mathbb{H}^{3}}(x,y)\leq d_{\tilde{M}}(x,y)\leq e^{d_{\mathbb{H}^{3}}(x,y)/2}d_{\mathbb{H}^{3}}(x,y).\]

Proof.: Set \(N:=\Gamma\backslash\mathbb{H}^{3}\), so that \(\tilde{M}\) is obtained from the convex hull \(CH:=CH(\Lambda(\Gamma))\) of the limit set of \(\Gamma\) by deleting horoball neighborhoods around all rank two cusps. Take an \(\mathbb{H}^{3}\)-geodesic \(\gamma\) from \(x\) to \(y\). Then \(\gamma\) lies inside \(CH\), and it can only penetrate the deleted horoball neighborhoods to a depth of \(d(x,y)/2\). Now, whenever \(B\supset B^{\prime}\) are horoballs in \(\mathbb{H}^{3}\) such that \(d_{\mathbb{H}^{3}}(\partial B,B^{\prime})\leq d(x,y)/2\), the closest point projection \[\pi:B\setminus B^{\prime}\longrightarrow\partial B\] is well defined and \(e^{d(x,y)/2}\)-lipschitz. (Indeed, it suffices to take \(B\) as the height \(1\) horoball in the upper half space model and \(B^{\prime}\) as the height \(e^{d(x,y)/2}\) horoball, and then the claim is obvious.) So, the parts of \(\gamma\) above that penetrate the deleted horoballs can be projected back into \(\partial\tilde{M}\), and if we do this the resulting path has length at most \(e^{d(x,y)/2}d(x,y)\).

We should mention the version of homoclinic defined in Casson's original unpublished notes. There, \(M\) is a handlebody, and if we regard \(\partial\tilde{M}\hookrightarrow\mathbb{H}^{3}\) as above, then a simple geodesic \(h:I\longrightarrow\partial\tilde{M}\) is called _homoclinic_ if, when we subdivide \(h\) into two rays \(h_{\pm}\), these rays limit onto subsets \(A_{\pm}\subset\mathbb{H}^{3}\cup\partial\mathbb{H}^{3}\) such that \(A_{+}\cap A_{-}\neq\emptyset\). This definition is stronger than all the ones mentioned above: if \(A_{+}\cap A_{-}\) contains a point on \(\partial\tilde{M}\), rather than at infinity, then the definition of homoclinic above is obviously satisfied.
Otherwise, \(h_{\pm}\) have to have a common accumulation point in \(\partial\mathbb{H}^{3}\), which corresponds to an end \(\xi\) of \(\tilde{M}\), and one can use the treelike structure of the universal cover \(\tilde{M}\) of the handlebody \(M\) to say that \(h_{\pm}\) have to both intersect a sequence of meridians \((m_{i})\) on \(\tilde{M}\) that cut off smaller and smaller neighborhoods of \(\xi\). The times \(t^{i}_{\pm}\) when \(h_{\pm}\) intersects \(m_{i}\) then work in the definition of homoclinic above. In fact, Casson's definition is strictly stronger. For instance, if \(h_{\pm}\) both spiral around disjoint simple closed curves \(\gamma_{\pm}\subset\partial\tilde{M}\), then \(h\) is homoclinic by our definition but not by Casson's. However, Statement 1.0.1 still fails using Casson's original definition, due to the examples in Figure 1.

### Waves, Tight position, and Intrinsic limits

As in the previous section, let \(M\) be a compact, orientable hyperbolizable \(3\)-manifold with nonempty boundary \(\partial M\), which we think of as the convex core of a hyperbolic \(3\)-manifold with horoball neighborhoods of its rank \(2\) cusps deleted.

**Definition 5.4** (Waves and tight position).: Suppose that \(m\) is a meridian multicurve on \(\partial M\), and let \(\gamma\subset\partial M\) be a simple geodesic ray or a simple biinfinite geodesic. An \(m\)_-wave_ is a segment \(\beta\subset\gamma\) that has endpoints on \(m\), and is homotopic rel endpoints in \(M\) to an arc of \(m\). If \(\gamma\) has no \(m\)-waves, and every infinite length segment of \(\gamma\) intersects \(m\), then we say that \(\gamma\) is in _tight position_ with respect to \(m\).

Waves and tight position were discussed previously in [31, 35], for instance. Note that in our definition, an \(m\)-wave \(\beta\) can intersect \(m\) in its interior. More generally, an \(m\)-wave of a lamination is an \(m\)-wave of one of its leaves, and a lamination is in _tight position_ with respect to \(m\) if all of its leaves are. Note that from this perspective, if a geodesic \(\gamma\) is in tight position with respect to some multicurve \(m\) (regarded as a lamination), then it is in tight position with respect to some component of \(m\). As an example, a meridian \(\gamma\) can never be in tight position with respect to another meridian \(m\): taking discs with boundaries \(\gamma\) and \(m\) that are transverse and intersect minimally, any arc of intersection of these discs terminates in a pair of intersection points of \(\gamma\) and \(m\) that bound an \(m\)-wave of \(\gamma\). More generally, we have the following fact.

**Fact 5.5** (Tight position \(\implies\mathbb{H}^{3}\) quasi-geodesic).: _Let \(\gamma\) be a simple geodesic ray or biinfinite geodesic on \(\partial M\). If \(\gamma\) is in tight position with respect to some meridian \(m\), then any lift \(\tilde{\gamma}\subset\partial\tilde{M}\) of \(\gamma\) is a quasigeodesic in \(\mathbb{H}^{3}\). In particular, \(\tilde{\gamma}\) is an \(\tilde{M}\)-quasigeodesic, and \(\gamma\) is not homoclinic._

Proof.: Intersecting with \(m\) breaks \(\gamma\) into a union of finite arcs. By simplicity of \(\gamma\), these arcs fall into only finitely many homotopy classes rel \(m\), and there is a universal upper bound \(L=L(\gamma,m)\) on their lengths. Let \(D\) be a disc with boundary \(m\) and let \(\tilde{D}\) be the entire preimage of \(D\) in \(\tilde{M}\).
Tightness means that the path \(\tilde{\gamma}\) intersects infinitely many components of \(\tilde{D}\), and intersects no single component more than once. In the notation of Example 5.1, we have that \(\tilde{M}\subset\mathbb{H}^{3}\) is obtained from \(CH(\Gamma)\) by deleting an equivariant collection of horoballs. Each component of \(\tilde{D}\) separates \(CH(\Gamma)\), so if \(\gamma^{\prime}\) is a segment of \(\tilde{\gamma}\), any geodesic in \(\mathbb{H}^{3}\) joining the endpoints of \(\gamma^{\prime}\) must intersect each of the discs that \(\gamma^{\prime}\) intersects. Hence, if \(\epsilon>0\) is the minimum distance between any two components of \(\tilde{D}\), then \(\tilde{\gamma}\) is an \((L/\epsilon,L)\)-quasigeodesic.

We now describe how to create systems of meridians with respect to which a given lamination is in tight position.

Figure 9. Surgering \(A\) along \(\alpha\) gives a meridian \(c\).

**Definition 5.6** (Surgery).: Suppose that \(\lambda\) is a geodesic lamination on \(\partial M\) and \(m=\sqcup_{i=1}^{n}m_{i}\) is a geodesic meridian multi-curve on \(\partial M\), and \(\beta\) is an \(m\)-wave in \(\lambda\) whose interior is disjoint from \(m\). Then the pair of points \(\partial\beta\) separates some component \(m_{i}\) of \(m\) into two arcs \(m_{i}^{1},m_{i}^{2}\), both of which are homotopic to \(\beta\) rel endpoints in \(M\). We perform a _\(\lambda\)-surgery_ on \(m\) by replacing \(m_{i}^{1}\) (say) with \(\beta\), thus constructing a new multicurve \(m^{\prime}:=(\beta\cup m_{i}^{2})\sqcup\bigsqcup_{j\neq i}m_{j}\).

This notion of surgery appears in many other references, e.g. [3, 11, 31, 35]. We summarize its elementary properties here:

**Fact 5.7**.: _Suppose that \(\lambda\) is a geodesic lamination._

1. _If_ \(m\) _is a meridian and_ \(\lambda\) _has an_ \(m\)_-wave, it also has an_ \(m\)_-wave whose interior is disjoint from_ \(m\)_, so a_ \(\lambda\)_-surgery can be performed._
2. _Any curve_ \(m^{\prime}\) _obtained by_ \(\lambda\)_-surgery on a meridian_ \(m\) _as above is a meridian._
3. _If_ \(m\) _is a_ cut system _for_ \(M\)_, i.e. a multi-curve of meridians bounding discs that cut_ \(M\) _into balls and_ \(3\)_-manifolds with incompressible boundary, then some_ \(\lambda\)_-surgery on a component of_ \(m\) _is another cut system._

Proof.: For (1), suppose that \(\lambda\) has an \(m\)-wave \(\beta\). Let \(\tilde{m}\) be the entire preimage of \(m\) in the cover \(\partial\tilde{M}\), and lift \(\beta\) to an arc \(\tilde{\beta}\) starting and ending on some fixed component \(m_{0}\subset\tilde{m}\). Since each component of \(\tilde{m}\) separates \(\partial\tilde{M}\), there is some "outermost" subarc \(\tilde{\beta}^{\prime}\) that has endpoints on the same component of \(\tilde{m}\), and that has interior disjoint from \(\tilde{m}\). This \(\tilde{\beta}^{\prime}\) projects to an \(m\)-wave of \(\lambda\) whose interior is disjoint from \(m\).

For (2), note that if \(m^{\prime}:=\beta\cup m_{i}^{2}\) is obtained by \(\lambda\)-surgery on \(m\), as above, then \(m\) and \(m^{\prime}\) are homotopic in \(M\), and hence \(m^{\prime}\) is nullhomotopic. Also, if \(m^{\prime}\) is inessential in \(\partial M\), then \(\beta\) is homotopic on \(\partial M\) to \(m_{i}^{2}\), implying that \(\lambda\) and \(m\) were not in minimal position, a contradiction since they are both geodesic. Hence \(m^{\prime}\) is a meridian.

For (3), consider an \(m\)-wave \(\beta\) in \(\lambda\) whose interior is disjoint from \(m\), and say that \(\partial\beta\subset m_{1}\).
Then \(\partial\beta\) separates \(m_{1}\) into two arcs \(m_{1}^{1},m_{1}^{2}\). It is not difficult to see that either \((\beta\cup m_{1}^{1})\sqcup\bigsqcup_{j\neq 1}m_{j}\) or \((\beta\cup m_{1}^{2})\sqcup\bigsqcup_{j\neq 1}m_{j}\) is a cut system.

The following lemma is a modification of a result of Kleineidam-Souto [31, Lemmas 7 and 8] that is essential for everything below.

**Lemma 5.8** (No waves, or a sequence of meridians).: _Suppose \(\lambda\) is a geodesic lamination on \(S=\partial M\) and \(m\) is a meridian multi-curve. Then either_

1. _there exists a finite sequence of_ \(\lambda\)_-surgeries on_ \(m\) _that terminates in some meridian multi-curve_ \(m^{\prime}\) _where_ \(\lambda\) _has no_ \(m^{\prime}\)_-waves, or_
2. \(S(\lambda)\) _contains a sequence of meridians_ \((\gamma_{i})\) _such that_ \(i(\lambda,\gamma_{i})\to 0\)_, with respect to every transverse measure on_ \(\lambda\)_._

Here, (2) makes sense even when \(\lambda\) admits no transverse measure of full support. Note that if \(\lambda\) is a minimal lamination and \(\partial S(\lambda)\) is incompressible, then (2) implies that \(\lambda\) is an intrinsic limit of meridians.

Proof of Lemma 5.8.: The two cases depend on whether \(\lambda\) contains infinitely many homotopy classes of \(m\)-waves, or not. Here, our homotopies are through arcs on \(S\), keeping their endpoints on \(m\). If there are only finitely many classes of \(m\)-waves in \(\lambda\), then a finite sequence of \(\lambda\)-surgeries converts \(m\) into a multi-curve \(m^{\prime}\) such that \(\lambda\) has no \(m^{\prime}\)-waves, as each surgery decreases the number of homotopy classes of waves by at least one.

If there are infinitely many homotopy classes of \(m\)-waves in \(\lambda\), then we can choose a sequence of parameterized \(m\)-waves \(\alpha_{i}:[0,1]\longrightarrow S\) such that

1. the two sequences of endpoints \((\alpha_{i}(0))\) and \((\alpha_{i}(1))\) both converge, and if either sequence converges into a simple closed curve \(\gamma\subset\lambda\), then it approaches \(\gamma\) from only one side,
2. no \(\alpha_{i}\) and \(\alpha_{j}\) are homotopic keeping their endpoints on \(m\), for \(i\neq j\).

To construct the desired sequence of meridians, let \(\beta_{i}^{0}\) be the shortest geodesic on \(S\) from \(\alpha_{i}(0)\) to \(\alpha_{i+1}(0)\), and define \(\beta_{i}^{1}\) similarly. For large \(i\), the union \(\beta_{i}^{0}\cup\alpha_{i}\cup\beta_{i}^{1}\cup\alpha_{i+1}\) is an essential closed curve in \(S(\lambda)\) that is nullhomotopic in \(M\). It may not be simple, since \(\beta_{i}^{0}\) and \(\beta_{i}^{1}\) may overlap, but it has at most one self intersection. So by the Loop Theorem [24], one of the three simple closed curves obtained by surgery on it is a meridian \(\gamma_{i}\). Now, the fact that the endpoints can approach a simple closed curve in \(\lambda\) only from one side implies that for large \(i\), the curves \(\gamma_{i}\) do not intersect any simple closed curve contained in \(\lambda\). Since \(\gamma_{i}\) only intersects \(\lambda\) along the arcs \(\beta_{i}^{0}\) and \(\beta_{i}^{1}\), whose hyperbolic lengths converge to zero, it follows that \(i(\gamma_{i},\lambda)\to 0\) for any transverse measure on \(\lambda\).

Here is an important application of Lemma 5.8.

**Lemma 5.9** (Quasigeodesic or a sequence of meridians).: _Suppose \(\lambda\subset\partial_{\chi<0}M\) is a minimal geodesic lamination and that \(\partial S(\lambda)\) is incompressible in \(M\)._
_Let \(h\subset S(\lambda)\) be a simple geodesic ray or biinfinite geodesic that is disjoint from \(\lambda\) or contained in \(\lambda\). Then either_

1. _any lift_ \(\tilde{h}\subset\partial\tilde{M}\) _of_ \(h\) _is a quasi-geodesic in_ \(\tilde{M}\)_, or_
2. \(S(\lambda)\) _contains a sequence of meridians_ \((\gamma_{i})\) _such that_ \(i(\lambda,\gamma_{i})\to 0\)_, with respect to every transverse measure on_ \(\lambda\)_._

_In particular, if \(h\) is homoclinic, then \(\lambda\) satisfies (2)._

Proof.: Assume that (2) does not hold. Given a cut system \(m\) for \(M\), Lemma 5.8 and Fact 5.7 (3) say that we can perform \(\lambda\)-surgeries until we obtain a new cut system \(m^{\prime}\) such that \(\lambda\cup h\) has no \(m^{\prime}\)-waves. If \(m^{\prime}\) intersects \(\lambda\), then \(\lambda\cup h\) is in tight position with respect to \(m^{\prime}\), so (1) follows from Fact 5.5. Therefore, we can assume \(m^{\prime}\) does not intersect \(\lambda\). Up to isotopy, we can also assume that \(S(\lambda)\) does not intersect \(m^{\prime}\). Since \(\partial S(\lambda)\) is assumed to be a collection of incompressible curves, it follows that \(S(\lambda)\) is itself incompressible, so (1) follows from Fact 5.2.

We now come to the central definition of the section.

**Definition 5.10**.: A minimal geodesic lamination \(\lambda\subset\partial_{\chi<0}M\) is an _intrinsic limit of meridians_ if there is a transverse measure\({}^{6}\) on \(\lambda\) and a sequence of meridians \((\gamma_{i})\) contained in \(S(\lambda)\) such that \(\gamma_{i}\to\lambda\) in \(\mathcal{PML}(S(\lambda))\).

Footnote 6: It is currently unknown whether the particular transverse measure matters: we might suspect that a measured lamination is a projective limit of meridians if and only if the same is true for any other measured lamination with the same support, but there could also very well be a counterexample.

Using Lemma 5.9, we can prove the following proposition, which gives several equivalent characterizations of intrinsic limits.

**Proposition 5.11** (Intrinsic limits).: _Suppose \(\lambda\subset S=\partial M\) is a minimal geodesic lamination and \(\partial S(\lambda)\) is incompressible. The following are equivalent:_

1. \(\lambda\) _is an intrinsic limit of meridians,_
2. _given (some/any) transverse measure on_ \(\lambda\)_, there is a sequence of meridians_ \((\gamma_{i})\) _in_ \(S(\lambda)\) _such that_ \(i(\gamma_{i},\lambda)\to 0\)_,_
3. _there is a biinfinite homoclinic geodesic in_ \(S(\lambda)\) _that is either a leaf of_ \(\lambda\)_, or is disjoint from_ \(\lambda\)_,_
4. _given any transverse measure on_ \(\lambda\)_, there is a sequence of essential (possibly non-simple) closed curves_ \((\gamma_{i})\) _in_ \(S(\lambda)\) _such that each_ \(\gamma_{i}\) _is nullhomotopic in_ \(M\)_, and_ \(i(\gamma_{i},\lambda)\to 0\)_._

Note that when we say \(\partial S(\lambda)\) is incompressible, we mean that no closed curve that is a boundary component of \(S(\lambda)\) is nullhomotopic in \(M\). This condition is mainly here to make statements and proofs easier. For instance, without this assumption our proof of (4) \(\implies\) (2) may produce peripheral meridians, but peripheral meridians can't be used in (2) \(\implies\) (1).

Proof.: (2) \(\implies\) (1). Fix some transverse measure on \(\lambda\). By (2), \[i(\gamma_{i},\lambda)/\operatorname{length}(\gamma_{i})\to 0,\] so after passing to a subsequence we can assume that \((\gamma_{i})\) converges to a measured lamination \(\mu\) in \(S(\lambda)\) that does not intersect \(\lambda\) transversely.
As \(\lambda\) fills \(S(\lambda)\), \(\mu\) is supported on \(\lambda\).

(1) \(\implies\) (3). After passing to a subsequence, we can assume that \((\gamma_{i})\) converges in the Hausdorff topology to some lamination, which must then be an extension of \(\lambda\) by finitely many leaves. (3) follows from an unpublished criterion of Casson (see Lecuire [35, Théorème B.1] for a proof), which states that any Hausdorff limit of meridians has a homoclinic leaf.

\((3)\implies(2)\). This is an immediate corollary of Lemma 5.9.

\((4)\iff(2)\). The direction \(\Leftarrow\) is immediate, so suppose \((\gamma_{i})\) is a sequence of essential closed curves in \(S(\lambda)\) that are nullhomotopic in \(M\) and \(i(\lambda,\gamma_{i})\to 0\). By Stallings' version of the Loop Theorem, for each \(i\) there is a meridian \(\gamma_{i}^{\prime}\) that is obtained from \(\gamma_{i}\) by surgery at the self intersection points. Such surgeries can only decrease the intersection number with \(\lambda\), so \((2)\) follows.

We will also need the following criterion in the next section.

**Lemma 5.12** (Intrinsic limits of annuli).: _Suppose \(\lambda\subset\partial_{\chi<0}M\) is a minimal lamination such that \(S(\lambda)\) is compressible but \(\partial S(\lambda)\) is incompressible, and that there is a sequence \((A_{i})\) of essential embedded annuli in \((M,S(\lambda))\) with \(i(\partial A_{i},\lambda)\to 0\). Then \(\lambda\) is an intrinsic limit of meridians._

Proof.: Pick a meridian \(m\subset S(\lambda)\). For each \(i\), let \(T_{i}:M\longrightarrow M\) be the Dehn twist along the annulus \(A_{i}\). Then for any sequence \(n_{i}\in\mathbb{Z}\), the curves \(T_{i}^{n_{i}}(m)\) are meridians, and if \(n_{i}\) grows sufficiently fast, then \[i(T_{i}^{n_{i}}(m),\lambda)/\operatorname{length}(T_{i}^{n_{i}}(m))\to 0.\] Hence, after passing to a subsequence \(T_{i}^{n_{i}}(m)\) converges to a lamination \(\lambda^{\prime}\) supported in \(S(\lambda)\) with zero intersection number with \(\lambda\), implying \(\lambda^{\prime}\) and \(\lambda\) have the same support, so \(\lambda\) is an intrinsic limit of meridians.

## 6. Limits of homoclinic rays

In this section we characterize the laminations onto which pairs of disjoint mutually homoclinic rays can accumulate.

**Theorem 6.1** (Mutually homoclinic rays).: _Let \(M\) be a compact orientable hyperbolizable \(3\)-manifold and equip \(\partial_{\chi<0}M\) with an arbitrary hyperbolic metric. Let \(h_{\pm}\) be two disjoint, mutually homoclinic simple geodesic rays on \(\partial_{\chi<0}M\) that accumulate onto (possibly equal) minimal laminations \(\lambda_{\pm}\), and suppose that the multicurve \(\partial S(\lambda_{\pm})\) is incompressible in \(M\). Then one of the following holds:_

1. _one of_ \(\lambda_{+}\) _or_ \(\lambda_{-}\) _is an intrinsic limit of meridians,_
2. \(h_{+}\) _and_ \(h_{-}\) _are asymptotic on_ \(\partial_{\chi<0}M\)_, and either_
   (a) _any two mutually homoclinic lifts_ \(\tilde{h}_{\pm}\) _to_ \(\partial\tilde{M}\) _are asymptotic on_ \(\partial\tilde{M}\)_, or_
   (b) \(\lambda:=\lambda_{-}=\lambda_{+}\) _is a simple closed curve that is homotopic in_ \(M\) _to a nontrivial power_ \(\gamma^{n},\ n>1\)_, of some closed curve_ \(\gamma\) _in_ \(M\)_,_
3. \(h_{\pm}\) _are not asymptotic on_ \(\partial_{\chi<0}M\)_, and there is an essential (possibly nontrivial) interval bundle_ \(B\subset M\) _such that_ \(\lambda_{\pm}\) _each fill a component of_ \(\partial_{H}B\)_, and_ \(\lambda_{\pm}\) _are essentially homotopic through_ \(B\)_, as in §_2.9_._

The proof of Theorem 6.1 is given in §6.1. One can construct examples of mutually homoclinic rays in each of cases (1)-(3). For concreteness, suppose that \(M\) is a handlebody. For (1), pick two disjoint meridians \(\lambda_{-},\lambda_{+}\) on \(\partial M\) and let \(h_{\pm}\) spiral onto \(\lambda_{\pm}\). One can also produce similar examples by letting \(\lambda_{\pm}\) be arbitrary laminations in disjoint subsurfaces \(S(\lambda_{\pm})\) that are spheres with at least 4 boundary components, all of which are compressible in \(M\), and letting \(h_{\pm}\) accumulate onto \(\lambda_{\pm}\). For (2) (a), take \(\lambda\) to be any simple closed curve on \(\partial M\) that is essential in \(M\) but has no nontrivial roots in \(\pi_{1}M\), and let \(h_{\pm}\) spiral around \(\lambda\) in the same direction. We'll discuss (2) (b) in Remark 6.2. For (3), write \(M=S\times[-1,1]\) where \(S\) is a surface with boundary, let \(\lambda\) be a lamination on \(S\), and let \(h_{\pm}\) be corresponding leaves of \(\lambda_{\pm}:=\lambda\times\{\pm 1\}\). Then \(\tilde{M}\cong\tilde{S}\times[-1,1]\), so there are lifts of \(h_{\pm}\) that are mutually homoclinic. One can also construct similar examples of (3) where the interval bundle \(B\) is twisted.

In case (1), we expect it is possible that \(S(\lambda_{-})\) is incompressible, say, while \(\lambda_{+}\) is an intrinsic limit of meridians. For instance, suppose \(C\) is a compression body with connected, genus-at-least-two interior boundary \(\partial_{-}C\), and exterior boundary \(\partial_{+}C\). Let \(f:C\longrightarrow C\) be a homeomorphism such that \(f|_{\partial_{+}C}\) and \(f|_{\partial_{-}C}\) are both pseudo-Anosov, with attracting laminations \(\lambda_{+},\lambda_{-}\), respectively.
We expect that there are rays \(\ell_{\pm}\subset\lambda_{\pm}\) that are mutually homoclinic. But \(\lambda_{+}\) is an intrinsic limit of meridians, while \(S(\lambda_{-})\) is incompressible.

The assumption that \(\partial S(\lambda_{\pm})\) is incompressible is necessary in Theorem 6.1. For instance, suppose \(M\) is a compression body with exterior boundary a genus 3 surface \(S\), where the only meridian on \(S\) is a single separating curve \(\gamma\). Let \(T\) be the component of \(S\setminus\gamma\) that is a punctured genus 2 surface. Then there are distinct minimal geodesic laminations \(\lambda,\lambda^{\prime}\subset T\), each of which fills \(T\), that are properly homotopic in \(M\): just take distinct laminations that are identified when we cap off the puncture of \(T\) to get a closed genus 2 surface. Corresponding ends of corresponding leaves of \(\lambda,\lambda^{\prime}\) are mutually homoclinic rays that accumulate onto \(\lambda,\lambda^{\prime}\), respectively, but none of (1)-(3) hold; in particular, the only meridian on \(S\) is \(\gamma\), so neither lamination is an intrinsic limit of meridians. One could write down a version of Theorem 6.1 that omits the assumption that \(\partial S(\lambda_{\pm})\) is incompressible, but the conclusion would be relative to capping off \(S(\lambda_{\pm})\), and the statement would be more complicated.

Figure 10. An example of (2) (b) in Theorem 6.1, see Remark 6.2.

**Remark 6.2**.: The reader may be wondering about (2) (b) in Theorem 6.1, and how it differs from (2) (a). A relevant example of two asymptotic rays that have mutually homoclinic nonasymptotic lifts to \(\partial\tilde{M}\) is given in Figure 10. On the left we have a solid torus that is a boundary-connect-summand of \(M\), which (say) is a handlebody. The biinfinite geodesic \(h\) is homoclinic and its two ends are mutually homoclinic rays that both spiral onto a simple closed curve \(\lambda\), the \((2,1)\)-curve on the solid torus. Then \(\lambda\) is homotopic to the square of the core curve of the solid torus. Although the two ends of \(h\) are asymptotic on \(\partial M\), any lift \(\tilde{h}\) in \(\partial\tilde{M}\) will have ends that are mutually homoclinic, but nonasymptotic. On the right, we have drawn the preimage \(\tilde{\lambda}\) of \(\lambda\), and two lifts \(\tilde{h}_{1},\tilde{h}_{2}\) of \(h\). Note that one end of \(\tilde{h}_{1}\) is asymptotic to an end of \(\tilde{h}_{2}\).

When \(M\) is a compression body, Casson-Gordon prove in [13, Theorem 4.1] that any simple closed curve \(\lambda\subset\partial M\) that has a nontrivial root in \(\pi_{1}M\) lies on the boundary of a solid torus that is a boundary connect summand of \(M\), exactly as in Figure 10. When \(M\) has incompressible boundary, such \(\lambda\) come from components of the characteristic submanifold of \(M\), see §2.6, that are either solid tori or twisted interval bundles over nonorientable surfaces.

Here is a slightly more refined version of Theorem 6.1 that applies to homoclinic biinfinite geodesics on \(\partial_{\chi<0}M\).

**Corollary 6.3** (Homoclinic biinfinite geodesics).: _Suppose that \(M\) is as in Theorem 6.1, that \(h\) is a homoclinic biinfinite simple geodesic on some component \(S\subset\partial_{\chi<0}M\), that \(h_{\pm}\) are the two ends of \(h\), that \(h_{\pm}\) limit onto \(\lambda_{\pm}\), and that \(\partial S(\lambda_{\pm})\) is incompressible in \(M\)._

_Then one of (1)-(3) in Theorem 6.1 holds. Moreover, in case (2), if \(\lambda:=\lambda_{\pm}\) is not an intrinsic limit of meridians then either_
(i) _after reparametrizing_ \(h\)_, there is some_ \(s\) _such that_ \(h(-s)\) _and_ \(h(s)\) _are joined by a geodesic segment_ \(c\) _with_ \(h\cap int(c)=\emptyset\)_, such that_ \(c\)_,_ \(h|_{(-\infty,-s]}\) _and_ \(h|_{[s,\infty)}\) _bound an embedded geodesic triangle_ \(\Delta\subset S\) _with one ideal vertex, and_ \(c\cup h([-s,s])\) _is a meridian in_ \(M\)_, or_

(ii) \(\lambda\) _is a simple closed curve on_ \(S\)_, the two ends of_ \(h\) _spiral around_ \(\lambda\) _in the same direction, and any neighborhood of the union_ \(h\cup\lambda\subset S\) _contains a meridian._

_And in case (3), we can choose the interval bundle \(B\) such that \(h\) contains a subarc \(\alpha\subset h\) that is a compression arc for \(B\)._

Proof.: Let \(\tilde{h}\) be a homoclinic lift of \(h\) on \(\partial\tilde{M}\). By Lemma 5.9, either one of \(\lambda_{\pm}\) is an intrinsic limit of meridians in \(M\), in which case we're in case (1) and are done, or both ends of \(\tilde{h}\) are quasi-geodesic in \(\tilde{M}\). Since \(\tilde{h}\) is homoclinic, it follows that its two ends are mutually homoclinic, so we're in the setting of Theorem 6.1 and one of (2)-(3) holds.

Assume we're in case (2) of Theorem 6.1, and set \(\lambda:=\lambda_{\pm}\). Assume first that \(\lambda\) is a simple closed curve. Since the two ends of \(h\) are asymptotic, they spiral around \(\lambda\) in the same direction. Let \(U\) be a neighborhood of \(h\cup\lambda\) on \(\partial_{\chi<0}M\). Then \(h\) is a homoclinic geodesic contained in \(U\), so Fact 5.2 implies that \(U\) is compressible, as desired in (ii).

Now suppose that \(\lambda\) is _not_ a simple closed curve, in which case we're in case (2) (a) of Theorem 6.1. We show (i) holds. Let's start by constructing the desired geodesic triangle. Parametrize \(h\), pick a universal covering map \(\mathbb{H}^{2}\longrightarrow S\) and lift \(h\) to a geodesic \(\hat{h}\) in \(\mathbb{H}^{2}\), and let \[\xi=\lim_{t\to+\infty}\hat{h}(t)\in\partial_{\infty}\mathbb{H}^{2}.\] Since the two ends of \(h\) are asymptotic on \(S\), there is a deck transformation \(\gamma:\mathbb{H}^{2}\longrightarrow\mathbb{H}^{2}\) such that \(\xi=\lim_{t\to-\infty}\gamma\circ\hat{h}(t).\) It follows that if we use a particular arc-length parametrization of \(h\), we may assume that for each \(t\in\mathbb{R}\), the points \(\hat{h}(t),\gamma\circ\hat{h}(-t)\) lie on a common horocycle tangent to \(\xi\). Fix some large \(s\) such that the geodesic segment \(\hat{c}\) joining \(\hat{h}(s)\) and \(\gamma\circ\hat{h}(-s)\) is shorter than the injectivity radius of \(S\), and therefore projects to a simple geodesic segment \(c\) in \(S\). Let \(\hat{\Delta}\subset\mathbb{H}^{2}\) be the triangle bounded by \(\hat{c}\) and the two rays \(\hat{h}([s,\infty))\) and \(\gamma\circ\hat{h}((-\infty,-s])\).

Let \(g:\mathbb{H}^{2}\longrightarrow\mathbb{H}^{2}\) be a deck transformation. We claim that \(g\circ\hat{h}(\mathbb{R})\cap int(\hat{\Delta})=\emptyset\). If not, then since \(\hat{\Delta}\) has geodesic sides, two of which are disjoint from \(g\circ\hat{h}\), it follows that one of the two endpoints of \(g\circ\hat{h}\) is \(\xi\). If it's the positive endpoint, then \(g\) fixes \(\xi\), and the axis of \(g\) projects to a (simple) closed curve on \(S\), around which the two ends of \(h\) spiral, contradicting that \(\lambda\) isn't a simple closed curve. If the negative endpoint of \(g\circ\hat{h}\) is \(\xi\), then \(g\circ\gamma^{-1}\) fixes \(\xi\) and we get a similar contradiction.
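For intuition, the horocyclic parametrization above can be normalized in the upper half-plane model; the coordinates \(a,b\) below are an illustrative normalization, not notation from the proof. Taking \(\xi=\infty\), the two asymptotic lifts become vertical geodesics, parametrized by arc length as
\[\hat{h}(t)=(a,e^{t}),\qquad\gamma\circ\hat{h}(-t)=(b,e^{t}),\qquad t\in\mathbb{R},\]
so for each \(t\) both points lie on the horocycle \(\{y=e^{t}\}\) tangent to \(\xi\). Moreover,
\[d_{\mathbb{H}^{2}}\big(\hat{h}(s),\gamma\circ\hat{h}(-s)\big)\ \leq\ \frac{|a-b|}{e^{s}}\ \longrightarrow\ 0\quad\text{as }s\to\infty,\]
since the horizontal segment joining the two points at height \(e^{s}\) has hyperbolic length \(|a-b|/e^{s}\); this is why \(\hat{c}\) can be chosen shorter than the injectivity radius of \(S\) once \(s\) is large.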
Next, we claim that we have \(g(int(\hat{\Delta}))\cap int(\hat{\Delta})=\emptyset\) as long as \(g\neq id\). Suppose that for some \(g\neq id\) the intersection is nonempty. Then \(g(\xi)\neq\xi\), since otherwise we have a contradiction as in the previous paragraph. The previous paragraph implies that the sides of the triangles \(g(\hat{\Delta}),\hat{\Delta}\) that are lifts of rays of \(h\) do not intersect the interior of the other triangle. So, the only way the interiors of \(g(\hat{\Delta}),\hat{\Delta}\) can intersect is if \(\hat{c}\) and \(g(\hat{c})\) intersect. However, this does not happen since we chose \(s\) large enough so that \(\hat{c}\) projects to a simple geodesic segment in \(S\).

Figure 11. Homoclinic geodesics \(h\) as in cases (i) and (ii) in Corollary 6.3, respectively.

The previous two paragraphs imply that \(\hat{\Delta}\) projects to an embedded geodesic triangle \(\Delta\) in \(S\) whose interior is disjoint from \(h\), as desired in (i). By construction, \(c\) and \(h([-s,s])\) are simple geodesic segments and, since \(g\circ\hat{h}(\mathbb{R})\cap int(\hat{\Delta})=\emptyset\) for any \(g\neq id\), their interiors are disjoint. It follows that \(c\cup h([-s,s])\) is a simple closed curve on \(S\), which is essential since it is the concatenation of two geodesic segments with disjoint interiors. We need to show it is nullhomotopic in \(M\). Now, if \(\tilde{h}\) is a lift of \(h\) to \(\partial\tilde{M}\), its ends are mutually homoclinic, and hence are asymptotic on \(\partial\tilde{M}\) by the assertion in case (2) of Theorem 6.1. Therefore, after choosing compatible lifts, the projection \(\hat{\Delta}\longrightarrow\Delta\) factors through a geodesic triangle \(\tilde{\Delta}\subset\partial\tilde{M}\) bounded by \(\tilde{h}([s,\infty))\), \(\tilde{h}((-\infty,-s])\) and a geodesic segment \(\tilde{c}\). The curve \(c\cup h([-s,s])\) is the projection of the closed curve \(\tilde{c}\cup\tilde{h}([-s,s])\subset\tilde{M}\), and therefore is nullhomotopic in \(M\).

Now assume we are in case (3). Let \(S_{\pm}\) be the components of \(\partial_{H}B\) containing \(\lambda_{\pm}\). We may assume that \(h\) is in minimal position with respect to \(\partial S_{\pm}\). Since \(h\) is simple and the ends of \(h\) limit onto minimal laminations that fill \(S_{\pm}\), we have that \(h\) intersects \(\partial S_{-}\cup\partial S_{+}\) at most twice. Furthermore, in the case that \(S_{-}=S_{+}\), the homoclinic geodesic \(h\) cannot be contained entirely in the incompressible surface \(S_{\pm}\), by Fact 5.2. So, \(h\) is the concatenation of two rays in \(S_{\pm}\) and an arc \(\alpha\) such that \(int(\alpha)\) lies outside \(S_{\pm}\). Let \(X\subset S\) be the union of \(S_{\pm}\) and a regular neighborhood of \(\alpha\). Since \(h\) is homoclinic, there is a meridian on \(X\) by Fact 5.2. If the two endpoints of \(\alpha\) lie on different boundary components of \(\partial_{H}B\), then \(\alpha\) is a compression arc for \(B\) by Fact 2.12. So, we may assume that the two endpoints of \(\alpha\) lie on the same boundary component \(c\) of \(\partial_{H}B\). Fact 2.12 then says that \(\alpha\) is homotopic rel endpoints in \(M\) to an arc of \(c\).
So, if we make a new path \(h^{\prime}\subset\partial_{H}B\) from \(h\) by replacing \(\alpha\) with that arc of \(c\), then \(h^{\prime}\) is still homoclinic, so it cannot be boundedly homotopic to a geodesic in \(\partial_{H}B\) by Fact 5.2, which implies that its ends \(h_{\pm}\) are asymptotic, a contradiction to the assumption in (3).

### Proof of Theorem 6.1

The proof proceeds in a few cases. As in Example 5.1, we identify \(M\) with the convex core \(CC(N)\) of a geometrically finite hyperbolic \(3\)-manifold \(N=\mathbb{H}^{3}/\Gamma\), and we identify the universal cover \(\tilde{M}\) with the preimage of \(CC(N)\) in \(\mathbb{H}^{3}\), which is the convex hull of the limit set of \(\Gamma\). Note that the closure of \(\tilde{M}\) in \(\mathbb{H}^{3}\cup\partial\mathbb{H}^{3}\) is a ball. There are four cases to consider:

(A) Both \(\lambda_{\pm}\) are simple closed curves. We show that either (1) or (2) holds.
(B) The laminations \(\lambda_{\pm}\) are distinct, in which case the surfaces \(S(\lambda_{\pm})\) are disjoint, but one of these surfaces is compressible, say \(S(\lambda_{+})\). We show (1).
(C) At least one of \(\lambda_{\pm}\) is not a simple closed curve, and both \(S(\lambda_{\pm})\) are incompressible. We show (2) (a) or (3) holds.
(D) \(\lambda_{-}=\lambda_{+}\), which is not a simple closed curve, and \(S(\lambda_{\pm})\) is compressible. We show either (1) or (2) (a) holds.

(A) and (B) above are the easiest. Our proof in case (C) involves a hyperbolic geometric interpretation of the characteristic submanifold of a pair, as discussed in §3 of [33] and in Walsh [54]; our argument is a bit more complicated than theirs, since we have to deal with accidental parabolics. In case (D), our argument adapts and fills some gaps in a surgery argument of Kleineidam-Souto [31] and Lecuire [35, Appendix C].

Proof of (A).: Assume that both \(\lambda_{\pm}\) are simple closed curves. If one of \(\lambda_{\pm}\) is a meridian, we are in case (1) and are done. So, we may assume that both \(\lambda_{\pm}\) are incompressible in \(M\). If \(\lambda_{-}\neq\lambda_{+}\), then we are in case (3) by the Annulus Theorem. So we may assume the two curves are the same, and write \(\lambda:=\lambda_{\pm}\).

We claim that \(h_{\pm}\) spiral around \(\lambda\) in the same direction, so that they are asymptotic on \(\partial M\). Suppose not, and pick mutually homoclinic lifts \(\tilde{h}_{\pm}\) in \(\partial\tilde{M}\). Then \(\tilde{h}_{-}\) and \(\tilde{h}_{+}\) are asymptotic to lifts \(\tilde{\lambda}\) and \(\alpha(\tilde{\lambda})\) of \(\lambda\), where \(\alpha\in\Gamma\) is a deck transformation. Any lift of \(\lambda\) is a quasi-geodesic in \(\tilde{M}\), and hence in \(\mathbb{H}^{3}\), so \(\tilde{h}_{\pm}\) are quasi-geodesic rays, and therefore have well-defined endpoints in \(\partial\mathbb{H}^{3}\), which must be the same since the two rays are mutually homoclinic. Since \(h_{\pm}\) spiral around \(\lambda\) in opposite directions, this means that \(\alpha\in\Gamma\) takes one endpoint of \(\tilde{\lambda}\) in \(\partial\mathbb{H}^{3}\) to the other endpoint of \(\tilde{\lambda}\). Since \(\tilde{\lambda}\) is stabilized by a loxodromic isometry in \(\Gamma\), and \(\Gamma\) is torsion-free and discrete, this is impossible.

Suppose we are not in case (2) (a), so there are mutually homoclinic lifts \(\tilde{h}_{\pm}\) that are not asymptotic on \(\partial\tilde{M}\).
As in the previous paragraph, we may assume that \(\tilde{h}_{-}\) and \(\tilde{h}_{+}\) are asymptotic to lifts \(\tilde{\lambda}\) and \(\alpha(\tilde{\lambda})\) for some deck transformation \(\alpha\in\Gamma\). Since \(\tilde{h}_{\pm}\) are not asymptotic, \(\tilde{\lambda}\neq\alpha(\tilde{\lambda})\). As before, \(\alpha\) fixes the common endpoint of \(\tilde{h}_{\pm}\) in \(\partial\mathbb{H}^{3}\), which is a fixed point of the cyclic group \(\langle\beta\rangle\subset\Gamma\) of loxodromic isometries fixing \(\tilde{\lambda}\). As \(\Gamma\) is discrete and torsion-free, and \(\alpha\not\in\langle\beta\rangle\), we have that \(\alpha\) is a root of \(\beta\) or \(\beta^{-1}\) in \(\Gamma\), and (2) (b) follows.

Proof of (B).: Suppose that \(\lambda_{\pm}\) are distinct, in which case the surfaces \(S(\lambda_{\pm})\) are disjoint, but that one of these surfaces is compressible, say \(S(\lambda_{+})\). We claim that \(\lambda_{+}\) is an intrinsic limit of meridians, in which case (1) holds and we are done. If not, take a meridian \(m\subset S(\lambda_{+})\) and apply Lemma 5.8. We obtain a new meridian multi-curve \(m^{\prime}\subset S(\lambda_{+})\) such that \(\lambda_{+}\) has no \(m^{\prime}\)-waves. Since \(\lambda_{+}\) fills \(S(\lambda_{+})\) and the boundary components \(\partial S(\lambda_{\pm})\) are incompressible, it follows that \(\lambda_{+}\) is in tight position with respect to \(m^{\prime}\). So after possibly restricting the domains, \(h_{+}\) is in tight position with respect to \(m^{\prime}\), while \(h_{-}\) never intersects \(m^{\prime}\). This contradicts the fact that \(h_{\pm}\) are mutually homoclinic, since if \(\tilde{h}_{\pm}\) are mutually homoclinic lifts in \(\partial\tilde{M}\), for large \(t\) the point \(\tilde{h}_{+}(t)\) is separated from the image of \(\tilde{h}_{-}\) by arbitrarily many lifts of \(m^{\prime}\).

Proof of (C).: Assume that at least one of \(\lambda_{\pm}\) is not a simple closed curve, and that \(S_{\pm}:=S(\lambda_{\pm})\) are incompressible. Note that \(S_{\pm}\) are equal or have disjoint interiors. We want to prove that we're in case (2) or (3). Lift \(h_{\pm}\) to mutually homoclinic rays \(\tilde{h}_{\pm}\subset\partial\tilde{M}\). Fact 5.2 implies that each inclusion \(\tilde{S}_{\pm}\hookrightarrow\tilde{M}\) is a quasi-isometric embedding, so if \(\tilde{S}_{-}=\tilde{S}_{+}\), then the two mutually homoclinic rays \(\tilde{h}_{\pm}\) are actually asymptotic on \(\partial\tilde{M}\). If this is true for all lifts \(\tilde{h}_{\pm}\), we are in case (2) (a) and are done. So, we can assume below that \(\tilde{S}_{-}\neq\tilde{S}_{+}\). Note that it may still be that \(\lambda_{-}=\lambda_{+}\) and \(S_{-}=S_{+}\).

Let \(\Gamma_{\pm}\subset\Gamma\) be the stabilizer of \(\tilde{S}_{\pm}\) and let \(\Lambda_{\pm}\subset\partial\mathbb{H}^{3}\) be the limit set of \(\Gamma_{\pm}\). Since \(\Gamma_{\pm}\) acts cocompactly on \(\tilde{S}_{\pm}\), the inclusion \(\tilde{S}_{\pm}\hookrightarrow\mathbb{H}^{3}\) extends continuously to a map \(\partial_{\infty}\tilde{S}_{\pm}\longrightarrow\Lambda_{\pm}\subset\partial\mathbb{H}^{3}\), by the main result of [45]. In particular, \(\tilde{h}_{\pm}\) have well defined endpoints in \(\partial\mathbb{H}^{3}\), and since they're mutually homoclinic, they have the same endpoint \(\xi\in\Lambda_{-}\cap\Lambda_{+}\subset\partial\mathbb{H}^{3}\). We now apply Theorem 4.1.
Since \(\xi\in\Lambda_{-}\cap\Lambda_{+}\), using the notation of Theorem 4.1, the rays \(\tilde{h}_{\pm}\) are either eventually contained in the convex hulls \(\tilde{C}_{\pm}\subset\tilde{S}_{\pm}\), or are asymptotic to their boundaries. But \(\tilde{C}_{\pm}\) project to (possibly degenerate) generalized subsurfaces \(C_{\pm}\) with geodesic boundary in \(S_{\pm}\), and the rays \(h_{\pm}\) limit onto filling laminations in \(S_{\pm}\), so it follows that actually \(C_{\pm}=S_{\pm}\), and that there is a homotopy from \(S_{-}\) to \(S_{+}\) in \(M\) that is the projection of a homotopy from \(\tilde{S}_{-}\) to \(\tilde{S}_{+}\). Since one of \(\lambda_{\pm}\) is not a simple closed curve, this means they are _both_ not simple closed curves and the (a priori degenerate) subsurfaces with geodesic boundary \(S_{\pm}\) are not simple closed geodesics. Let \(S^{\prime}_{\pm}\subset int(S_{\pm})\) be obtained by deleting small collar neighborhoods of \(\partial S_{\pm}\), so that \(S^{\prime}_{\pm}\) are both actually embedded, still contain \(\lambda_{\pm}\), and are either disjoint or equal. Since \(S^{\prime}_{\pm}\) are incompressible and homotopic in \(M\), Theorem 2.8 implies that they bound an essential interval bundle \(B\subset M\). Moreover, the fact that the homotopy from \(S_{-}\) to \(S_{+}\) is the projection of a homotopy from \(\tilde{S}_{-}\) to \(\tilde{S}_{+}\) means that we can assume that there is a component \(\tilde{B}\subset\tilde{M}\) of the preimage of \(B\) that intersects \(\partial\tilde{M}\) in \(\tilde{S}^{\prime}_{-}\cup\tilde{S}^{\prime}_{+}\). Note that \(\tilde{B}\) is invariant under \(\Delta=\Gamma_{-}\cap\Gamma_{+}\), since any element of \(\Delta\) preserves \(\tilde{S}^{\prime}_{\pm}\), and hence \(\tilde{B}\).

We claim that \(\lambda_{\pm}\) are essentially homotopic through \(B\). By Fact 2.16, it suffices to show that if \(\sigma\) is the canonical involution of \(B\), as described in §4.5, then \(\sigma(\lambda_{\pm})\) is isotopic to \(\lambda_{\mp}\) on \(S^{\prime}_{\mp}\). Well, \(\sigma\) lifts to a \(\Delta\)-equivariant involution \(\tilde{\sigma}\) of \(\tilde{B}\) that exchanges \(\tilde{S}^{\prime}_{-}\) and \(\tilde{S}^{\prime}_{+}\). By equivariance, \(\tilde{\sigma}\) extends continuously to the identity on \(\Lambda_{\Delta}\), so in particular its extension fixes \(\xi\), and hence \(\sigma(h_{\pm})\) is properly homotopic to \(h_{\mp}\) on \(S_{\mp}\), which implies \(\sigma(\lambda_{\pm})\) is isotopic to \(\lambda_{\mp}\), as desired.

If \(h_{\pm}\) are not asymptotic on \(\partial M\), then we are in case (3) and are done. So, assume \(h_{\pm}\) are asymptotic. Then there is some \(\gamma\in\Gamma\) such that \(\gamma(\tilde{h}_{-})\subset\tilde{S}^{\prime}_{+}\) and is asymptotic to \(\tilde{h}_{+}\). This \(\gamma\) fixes the endpoint \(\xi\in\partial\mathbb{H}^{3}\) of \(\tilde{h}_{\pm}\). Moreover, \(\gamma(\tilde{B})\) is a component of the preimage of \(B\) that contains \(\tilde{S}^{\prime}_{+}\), and therefore equals \(\tilde{B}\). So, \(\gamma\) exchanges \(\tilde{S}^{\prime}_{\pm}\), and therefore \(\gamma^{2}\in\Delta\). But then \(\tilde{h}_{\pm}\) are asymptotic to the axes of \(\gamma^{2}\curvearrowright\tilde{S}_{\pm}\), implying that \(h_{\pm}\) accumulate onto simple closed curves in \(\partial M\), contradicting our assumption in (C).
Proof of (D).: Assume that \(\lambda_{-}=\lambda_{+}\), write \(\lambda=\lambda_{\pm}\) for brevity, assume that \(\lambda\) is not a simple closed curve, and that \(S(\lambda)\) is compressible. We want to prove that either \(\lambda\) is an intrinsic limit of meridians, or \(h_{\pm}\) are asymptotic, as are any pair of mutually homoclinic lifts \(\tilde{h}_{\pm}\).

If \(\lambda\) is an intrinsic limit of meridians, we are done, so since \(S(\lambda)\) is compressible with incompressible boundary, by Lemma 5.8 we can choose a meridian \(m\subset S(\lambda)\) with respect to which \(\lambda\) is in tight position. Let \(\tilde{m}\) be its full preimage in \(\partial\tilde{M}\), and let \(\tilde{h}_{\pm}\) be _any_ pair of mutually homoclinic lifts in \(\partial\tilde{M}\). Truncating if necessary, we can assume that \(h_{\pm}\) are in tight position with respect to \(m\), and hence the lifts \(\tilde{h}_{\pm}\) are quasigeodesic rays in \(\mathbb{H}^{3}\), by Fact 5.5. Since they are mutually homoclinic, \(\tilde{h}_{-}\) and \(\tilde{h}_{+}\) converge to the same point \(\xi\in\partial_{\infty}\mathbb{H}^{3}\), and tightness further implies that after restricting to appropriate subrays, \(\tilde{h}_{-}\) and \(\tilde{h}_{+}\) intersect exactly the same components of \(\tilde{m}\), in the same order. Reparametrizing, we have \[\tilde{h}_{\pm}:[0,\infty)\longrightarrow\partial\tilde{M},\ \ \tilde{h}_{+}(i),\tilde{h}_{-}(i)\in\tilde{m}_{i},\ \forall i\in\mathbb{N},\] where each \(\tilde{m}_{i}\) is a component of \(\tilde{m}\), and where \(\tilde{h}_{\pm}(t)\not\in\tilde{m}\) when \(t\not\in\mathbb{N}\). Let \[d_{i}:=d_{\tilde{m}}\big{(}\tilde{h}_{+}(i),\tilde{h}_{-}(i)\big{)}\] be the distance along \(\tilde{m}_{i}\) between \(\tilde{h}_{+}(i)\) and \(\tilde{h}_{-}(i)\).

**Claim 6.4**.: _There is some uniform \(\epsilon>0\), independent of the particular chosen lifts \(\tilde{h}_{\pm}\), such that either_

1. \(\tilde{h}_{\pm}\) _are asymptotic on_ \(\partial\tilde{M}\)_, and hence_ \(h_{\pm}\) _are asymptotic on_ \(\partial M\)_, or_
2. \(\liminf_{i}d_{i}\geq\epsilon\)_._

Proof.: Let's assume that \(\tilde{h}_{+}\) and \(\tilde{h}_{-}\) are not asymptotic on \(\partial\tilde{M}\), and write \(d=\liminf_{i}d_{i}\). Fix some transverse measure on \(\lambda\). If \(d\) is small, we will construct meridians \(\gamma\subset S(\lambda)\) with very small intersection number with \(\lambda\). Since \(\lambda\) is not an intrinsic limit of meridians, there is some fixed lower bound for such intersection numbers, which will give a contradiction for small \(d\).

Suppose \(d\) is small and pick \(0\ll i<j\) such that \[d_{i},d_{j}<2d,\] let \(b_{i}\) be the (unique) shortest path on \(\tilde{m}_{i}\) from \(\tilde{h}_{-}(i)\) to \(\tilde{h}_{+}(i)\), and define \(b_{j}\) similarly. Let \(\tilde{\gamma}_{ij}\) be the loop on \(\partial\tilde{M}\) obtained by concatenating the four segments \(\tilde{h}_{+}([i,j]),\tilde{h}_{-}([i,j]),b_{i}\) and \(b_{j}\) in the obvious way. We first claim that after fixing \(i\), it is possible to choose \(j\) such that \(\tilde{\gamma}_{ij}\) is homotopically essential on \(\partial\tilde{M}\).
Assume not, let \(\tilde{S}\subset\partial\tilde{M}\) be the component containing \(\tilde{h}_{\pm}\), fix a universal covering map \[\mathbb{H}^{2}\longrightarrow\tilde{S},\] and lift the rays \(\tilde{h}_{\pm}|_{[i,\infty)}\) to rays \[\mathfrak{h}_{\pm}:[i,\infty)\rightarrow\mathbb{H}^{2}\] in such a way that \(b_{i}\) lifts to a segment connecting \(\mathfrak{h}_{-}(i)\) to \(\mathfrak{h}_{+}(i)\). Now, there are infinitely many \(j>i\) with \(d_{j}<2d\). For each such \(j\), we know that \(\tilde{\gamma}_{ij}\) is homotopically inessential on \(\partial\tilde{M}\), so the points \(\mathfrak{h}_{-}(j)\) and \(\mathfrak{h}_{+}(j)\) are at most \(2d\) away from each other in \(\mathbb{H}^{2}\). This gives sequences of points exiting the rays \(\mathfrak{h}_{\pm}\) that are always at most \(2d\) apart, so \(\mathfrak{h}_{-}\) is asymptotic to \(\mathfrak{h}_{+}\). Hence, \(\tilde{h}_{-}\) is asymptotic to \(\tilde{h}_{+}\), contrary to our assumption.

We now fix large \(i,j\) such that \(d_{i},d_{j}<2d\) and \(\tilde{\gamma}:=\tilde{\gamma}_{ij}\) is homotopically essential on \(\partial\tilde{M}\). Then \(\tilde{\gamma}\) projects to a homotopically essential loop \(\gamma\subset\partial M\) that is homotopically trivial in \(M\). Note that if \(i,j\) are chosen large enough and \(d\) is small, then \(\gamma\subset S(\lambda)\). Furthermore, since the segments \(b_{i},b_{j}\) are the only parts of \(\tilde{\gamma}\) that cross the preimage of \(\lambda\), and these segments have hyperbolic length less than \(2d\), the intersection number \(i(\gamma,\lambda)\) is small when \(d\) is small. (Recall that \(\lambda\) is a minimal lamination that is not a simple closed curve, so no leaves have positive weight, and hence hyperbolic length can be compared to intersection number.) But \(\lambda\) is not an intrinsic limit of meridians, so Proposition 5.11 (4) says that there is some positive lower bound for the intersection numbers of \(\lambda\) with essential curves that are nullhomotopic in \(M\). Hence, we get a contradiction if \(d\) is small.

Suppose we have two pairs \(\{a,b\}\) and \(\{c,d\}\) of points in \(m\), all four of which are distinct. We say the two pairs are _unlinked_ in \(m\) if in the induced cyclic ordering on \(\{a,b,c,d\}\subset m\), \(a\) is adjacent to \(b\) and \(c\) is adjacent to \(d\), and we say that the two pairs are _linked_ otherwise.

**Claim 6.5**.: _If \(i,j\in\mathbb{N}\), \(i<j\), then the pairs \(\{h_{+}(i),h_{-}(i)\}\) and \(\{h_{+}(j),h_{-}(j)\}\) are unlinked in \(m\)._

For an example where the pairs are _linked_, see Figure 12. The proof below works in general whenever \(h_{\pm}\) are simple geodesic rays on \(\partial M\) in tight position with respect to \(m\), where neither \(h_{+}\) nor \(h_{-}\) spirals onto a simple closed curve.

Proof.: The essential observation used in the proof is that the closure \[cl(\partial\tilde{M})\subset\mathbb{H}^{3}\cup\partial\mathbb{H}^{3}\] is homeomorphic to a sphere: indeed, the closure of \(\tilde{M}\) in \(\mathbb{H}^{3}\cup\partial\mathbb{H}^{3}\) is a ball, since \(\tilde{M}\subset\mathbb{H}^{3}\) is convex with nonempty interior, and the closure of the boundary is the boundary of the closure. We obtain the unlinking property above by exploiting separation properties of arcs and curves on \(cl(\partial\tilde{M})\). Since \(h_{\pm}\) are in tight position with respect to \(m\), both lifts \(\tilde{h}_{\pm}\) cross \(\tilde{m}_{i}\) exactly once.
Since \(\tilde{h}_{\pm}\) limit to the same point in \(\partial\mathbb{H}^{3}\), they must then cross \(\tilde{m}_{i}\) in the same direction. In other words, the tangent vectors \(h_{+}(i)^{\prime},h_{-}(i)^{\prime}\) point to the same side of \(m\). The same statement holds for \(j\). This allows us to break into the following two cases: (a) the arcs \(h_{\pm}|_{[i,j]}\) start and end on the same side of \(m\), i.e. the vectors \(h_{\pm}(i)^{\prime}\) point to the opposite side of \(m\) as the vectors \(h_{\pm}(j)^{\prime}\), or (b) the arcs \(h_{\pm}|_{[i,j]}\) start and end on different sides of \(m\), i.e. all four velocity vectors \(h_{\pm}(i)^{\prime},h_{\pm}(j)^{\prime}\) point to the same side of \(m\), see Figure 13.

Figure 12. Two rays spiraling onto a simple closed curve (which is not allowed below), where the points in Claim 6.5 are linked.

Figure 13. The cases (a) and (b) in the proof of Claim 6.5.

First, assume we're in case (a). Let \[\alpha_{\pm}:=\tilde{h}_{\pm}|_{[i,j]},\] which we regard as oriented arcs in \(\partial\tilde{M}\) starting on \(\tilde{m}_{i}\) and ending on \(\tilde{m}_{j}\). Let \(\gamma:\tilde{M}\longrightarrow\tilde{M}\) be the deck transformation taking \(\tilde{m}_{j}\) to \(\tilde{m}_{i}\). Then the arcs \[\beta_{\pm}:=\gamma\circ\tilde{h}_{\pm}|_{[i,j]}\] start on \(\gamma(\tilde{m}_{i})\) and end on \(\gamma(\tilde{m}_{j})=\tilde{m}_{i}\), and since we're in case (a) they end on the _same side_ of \(\tilde{m}_{i}\) as the arcs \(\alpha_{\pm}\) start. Note that \(\gamma(\tilde{m}_{i})\) is not \(\tilde{m}_{i}\) or \(\tilde{m}_{j}\). Indeed, if \(\gamma(\tilde{m}_{i})=\tilde{m}_{i}\) then we'd have \(\gamma=id\), contradicting that \(\gamma(\tilde{m}_{j})=\tilde{m}_{i}\). And if \(\gamma(\tilde{m}_{i})=\tilde{m}_{j}\), then \(\gamma^{2}\) would leave \(\tilde{m}_{i}\) invariant, implying that \(\gamma^{2}=id\), which is impossible since \(\pi_{1}M\) has no torsion. We claim that _the interiors of the arcs \(\beta_{\pm}\) do not intersect \(\tilde{m}_{i}\) or \(\tilde{m}_{j}\), and the arcs \(\alpha_{\pm}\) do not intersect \(\gamma(\tilde{m}_{i})\)._ Indeed, the interiors of \(\beta_{\pm}\) don't intersect \(\tilde{m}_{i}\) because the arcs \(\beta_{\pm}\) end on \(\tilde{m}_{i}\) and intersect each component of \(\tilde{m}\) at most once, by tight position. The interiors of \(\beta_{\pm}\) don't intersect \(\tilde{m}_{j}\) because any arc from \(\tilde{m}_{i}\) to \(\tilde{m}_{j}\) intersects at least \(j-i+1\) components of \(\tilde{m}\) (counting \(\tilde{m}_{i}\) and \(\tilde{m}_{j}\)), while any proper subarc of \(\beta_{\pm}\) intersects at most \(j-i\) components of \(\tilde{m}\). Here, for the \(j-i+1\) bound we are using tight position of \(h_{\pm}\), the definitions of \(\tilde{m}_{i},\tilde{m}_{j}\), and the fact that each component of \(\tilde{m}\) separates \(\partial\tilde{M}\). The fact that the arcs \(\alpha_{\pm}\) don't intersect \(\gamma(\tilde{m}_{i})\) is similar: any arc from \(\tilde{m}_{i}\) to \(\gamma(\tilde{m}_{i})\) must pass through at least \(j-i+1\) components of \(\tilde{m}\), while any proper subarc of \(\alpha_{\pm}\) intersects at most \(j-i\) components, and \(\alpha_{\pm}\) do not end on \(\gamma(\tilde{m}_{i})\neq\tilde{m}_{j}\). Let \(A\subset cl(\partial\tilde{M})\cong S^{2}\) be the annulus that is the closure of the component of \(cl(\partial\tilde{M})\setminus(\tilde{m}_{i}\cup\tilde{m}_{j})\) that contains the side of \(\tilde{m}_{i}\) on which the arcs \(\alpha_{\pm}\) start and the arcs \(\beta_{\pm}\) end.
Then \(\alpha_{\pm}\) are two disjoint arcs in \(A\) that join \(\tilde{m}_{i}\) to \(\tilde{m}_{j}\), and therefore \(\alpha_{\pm}\) separate \(A\) into two rectangles. The component \(\gamma(\tilde{m}_{i})\) on which the arcs \(\beta_{\pm}\) start is contained in the interior of one of these two rectangles. Therefore, the two arcs \(\beta_{\pm}\) must lie in the same component of \(A\setminus(\alpha_{+}\cup\alpha_{-})\). Looking at endpoints, this means the pairs \(\{\tilde{h}_{+}(i),\tilde{h}_{-}(i)\}\) and \(\{\gamma\circ\tilde{h}_{+}(j),\gamma\circ\tilde{h}_{-}(j)\}\) are unlinked in \(\tilde{m}_{i}\), and the claim follows. Now assume that we're in case (b). The curve \(\tilde{m}_{i}\) separates \(\partial\tilde{M}\), and we let \(X\subset\partial\tilde{M}\) be the closure of the component of \(\partial\tilde{M}\setminus\tilde{m}_{i}\) into which the velocity vectors \(\tilde{h}^{\prime}_{\pm}(i)\) and \((\gamma\circ\tilde{h}_{\pm})^{\prime}(j)\) all point. The closure \[cl(X)\subset\mathbb{H}^{3}\cup\partial\mathbb{H}^{3}\] is homeomorphic to a disk, since \(cl(\partial\tilde{M})\) is a sphere. As before, we let \(\gamma:\tilde{M}\longrightarrow\tilde{M}\) be the deck transformation taking \(\tilde{m}_{j}\) to \(\tilde{m}_{i}\). Then the rays \[\alpha_{\pm}:=\tilde{h}_{\pm}([i,\infty)),\quad\beta_{\pm}:=\gamma\circ\tilde{h}_{\pm}([j,\infty))\] are all contained in \(X\). Note that \(\alpha_{\pm}\) both limit to a point \(\xi\in\partial\mathbb{H}^{3}\), while \(\beta_{\pm}\) limit to \(\gamma(\xi)\in\partial\mathbb{H}^{3}\). The union \(\alpha_{-}\cup\alpha_{+}\) compactifies to an arc in \(cl(X)\), since the two rays limit to the same point of \(\partial\mathbb{H}^{3}\). The same is true for \(\beta_{-}\cup\beta_{+}\). _Hoping for a contradiction, suppose that the points in the statement of the claim are linked._ Then the pairs of endpoints of \(\alpha_{-}\cup\alpha_{+}\) and \(\beta_{-}\cup\beta_{+}\) are also linked on \(\tilde{m}_{i}=\partial cl(X)\). We now have two arcs on the disk \(cl(X)\) with linked endpoints on \(\partial cl(X)\), so the two arcs must intersect. As \(\alpha_{\pm},\beta_{\pm}\) are all disjoint, the only intersection can be on \(\partial\mathbb{H}^{3}\), so their endpoints at infinity must all agree, i.e. \(\gamma(\xi)=\xi\). Since \(\gamma(\xi)=\xi\), all the rays \(\gamma^{k}\circ\tilde{h}_{+}\) limit to \(\xi\), where \(k\in\mathbb{Z}\). Hence, all these (quasi-geodesic) rays are pairwise mutually homoclinic, and for each pair \(k,l\), the rays \(\gamma^{k}\circ\tilde{h}_{+}\) and \(\gamma^{l}\circ\tilde{h}_{+}\) eventually intersect the same components of \(\tilde{m}\), in the same order, although their initial behavior may be different. In analogy with the setup of Claim 6.4, let \(d_{k,l}\) be the liminf of the distances from \(\gamma^{k}\circ\tilde{h}_{+}\) to \(\gamma^{l}\circ\tilde{h}_{+}\) along the components of \(\tilde{m}\) that they both intersect. We claim that there are \(k,l\) such that \(d_{k,l}<\epsilon\), where \(\epsilon\) is the constant from Claim 6.4. Indeed, for \(N>\operatorname{length}(m)/\epsilon\), it is impossible to pack \(N\) points at least \(\epsilon\) apart in any component of \(\tilde{m}\). So if we let \(k\) range over a set \(F\subset\mathbb{Z}\) of size \(N\), whenever a component of \(\tilde{m}\) intersects all \(\gamma^{k}\circ\tilde{h}_{+},\ k\in F\), two such intersections must be within \(\epsilon\) of each other.
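To spell out the packing bound just used (a straightforward count, recorded here for convenience): since \(m\) is nullhomotopic in \(M\), each component \(\tilde{m}_{0}\subset\tilde{m}\) maps homeomorphically and isometrically to \(m\), so \(\tilde{m}_{0}\) is a circle of length \(\operatorname{length}(m)\). If \(x_{1},\dots,x_{N}\in\tilde{m}_{0}\) have pairwise distances at least \(\epsilon\) along \(\tilde{m}_{0}\), then summing the \(N\) gaps between consecutive points around the circle gives \[N\epsilon\ \leq\ \operatorname{length}(\tilde{m}_{0})\ =\ \operatorname{length}(m),\] which is impossible once \(N>\operatorname{length}(m)/\epsilon\).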
There are infinitely many such components of \(\tilde{m}\) and \(F\) is finite, so we can pick \(k,l\in F\) such that \(\gamma^{k}\circ\tilde{h}_{+}\) and \(\gamma^{l}\circ\tilde{h}_{+}\) are within \(\epsilon\) on infinitely many such components. Finally, \(\gamma^{k}\circ\tilde{h}_{+}\) and \(\gamma^{l}\circ\tilde{h}_{+}\) are mutually homoclinic lifts of \(\tilde{h}_{+}\), and \(d_{k,l}<\epsilon\), so the exact same argument as in Claim 6.4 shows that \(\gamma^{k}\circ\tilde{h}_{+}\) and \(\gamma^{l}\circ\tilde{h}_{+}\) are asymptotic on \(\partial\tilde{M}\). It follows that \(h_{+}\) spirals onto a (simple) closed curve in \(\partial M\) in the homotopy class of (a primitive root of) \(\gamma^{l-k}\). (Indeed, \(\gamma^{l-k}\) lifts to a deck transformation of the universal cover \(\mathbb{H}^{2}\longrightarrow\partial M\), and the axis of this deck transformation is asymptotic to suitably chosen lifts of both \(\gamma^{k}\circ\tilde{h}_{+}\) and \(\gamma^{l}\circ\tilde{h}_{+}\).) This is a contradiction, though, since \(h_{+}\) limits onto \(\lambda\), which is not a simple closed curve. Assume now that our mutually homoclinic rays \(\tilde{h}_{\pm}\) are not asymptotic on \(\partial\tilde{M}\), since otherwise we are in case (2) of the theorem and are done. By Claim 6.4, there is some \(\epsilon>0\) such that \(d_{\tilde{m}_{i}}(\tilde{h}_{+}(i),\tilde{h}_{-}(i))\geq\epsilon\) for all \(i\). We will show that \(\lambda\) is an intrinsic limit of annuli, in the sense of Lemma 5.12, which says that then \(\lambda\) is an intrinsic limit of meridians. The proof is an adaptation and correction of a surgery argument of Lecuire [35, Affirmation C.3]. As there are two gaps7 in Lecuire's earlier argument, we give the proof in full detail below, without many citations of [35]. Footnote 7: The first gap is that the sentence "Quitte à extraire, la suite \((gh^{-1})^{2n}g(\tilde{l}^{1})\) converge vers une géodésique \(\tilde{\gamma}\subset p^{-1}(\alpha_{1})\) dont la projection \(l\subset\partial M\) est une courbe fermée." ("Passing to a subsequence, the sequence \((gh^{-1})^{2n}g(\tilde{l}^{1})\) converges to a geodesic \(\tilde{\gamma}\subset p^{-1}(\alpha_{1})\) whose projection \(l\subset\partial M\) is a closed curve.") at the end of the proof of Affirmation C.3 isn't adequately justified; this is fixed in Claim 6.4. The second is that the assumption \(d(l_{+}^{2}(y_{i}),l_{+}^{2}(y_{j}))<\varepsilon^{\prime}\) in the statement of Affirmation C.3 is never actually verified, and does not come trivially from a compactness argument. This is fixed in Claim 6.6. **Claim 6.6**.: _Given \(0<\delta<\epsilon\), there are choices of \(i<j\) such that either_ (I) _The points_ \(h_{+}(i)\) _and_ \(h_{+}(j)\) _bound a segment_ \(I_{+}\subset m\) _of length less than_ \(\delta\)_, and similarly with_ \(-\) _instead of_ \(+\)_. The four velocity vectors_ \(h^{\prime}_{+}(i),h^{\prime}_{+}(j),h^{\prime}_{-}(i),h^{\prime}_{-}(j)\) _all point to the same side of_ \(m\)_, and the four segments_ \(h_{+}([i,j])\)_,_ \(h_{-}([i,j])\)_,_ \(I_{+}\) _and_ \(I_{-}\) _have disjoint interiors. So, the curves_ \(\gamma_{\pm}:=h_{\pm}([i,j])\cup I_{\pm}\subset\partial M\) _are simple and disjoint._ (II) _The points_ \(h_{+}(i)\) _and_ \(h_{-}(j)\) _bound a segment_ \(I_{+}\subset m\) _of length less than_ \(\delta\)_, and similarly the points_ \(h_{-}(i)\) _and_ \(h_{+}(j)\) _bound a segment_ \(I_{-}\subset m\) _of length less than_ \(\delta\)_. The four velocity vectors_ \(h^{\prime}_{+}(i),h^{\prime}_{+}(j),h^{\prime}_{-}(i),h^{\prime}_{-}(j)\) _all point to the same side of_ \(m\)_, and the four segments_ \(h_{+}([i,j])\)_,_ \(h_{-}([i,j])\)_,_ \(I_{-}\) _and_ \(I_{+}\) _have disjoint interiors.
So, the curve_ \(\gamma\subset\partial M\) _obtained by concatenating all four segments is simple._ See Figure 14 for a very useful picture. Note that in the picture, the velocity vectors of all paths intersecting \(m\) point to the same side of \(m\), i.e. 'up', and all 4-tuples of points are unlinked as in Claim 6.5.

Figure 14. Above, the horizontal curve is always \(m\).

Proof.: Start by fixing a circular order on \(m\). Define 'the right' to be the direction in \(m\) that is increasing with respect to the circular order, and define 'the left' similarly. Since \(\lambda\) is minimal and not a simple closed curve, it has infinitely many leaves \(\ell\) that are not boundary leaves. Fix some such \(\ell\), making sure that \(h_{\pm}\not\subset\ell\) if the given rays happen to lie inside the lamination \(\lambda\). The ray \(h_{+}\) accumulates onto both sides of \(\ell\), so if we fix \(p\in\ell\cap m\), the set \(h_{+}(\mathbb{N})\) accumulates onto \(p\) from both sides, and similarly with \(-\) instead of \(+\). Fix an interval \(J\subset m\) of length \(\delta\) centered at \(p\), and write \(J=J_{l}\cup J_{r}\) as the union of the closed subintervals to the 'left' and to the 'right' of \(p\). Note that \(p\not\in h_{\pm}(\mathbb{N})\), so each intersection of \(h_{\pm}\) with \(J\) lies in exactly one of \(J_{l}\) or \(J_{r}\). Let's call an index \(i\) _left-closest_ if either \(h_{+}(i)\) or \(h_{-}(i)\) lies in \(J_{l}\) and is closer to \(p\) than any previous \(h_{\pm}(k),\ k<i\), that lies in \(J_{l}\). _Right-closest_ is defined similarly using \(J_{r}\), and we call an index \(i\) _closest_ if it is either left or right closest. Note that since \(\delta<\epsilon\) we can never have both \(h_{+}(i),h_{-}(i)\) in \(J\) simultaneously, so no \(i\) is both right-closest and left-closest at the same time. Since there are infinitely many \(i\) of both types, at some point there will be a transition where some \(i_{l}\) is left-closest, some \(i_{r}>i_{l}\) is right-closest, and there are no closest indices in between. Let \(i_{c}\) be the smallest closest index that is bigger than \(i_{r}\). (Here, \(c\) stands for 'center', since the corresponding point on \(J\) will lie between the points we get from the indices \(i_{l}\) and \(i_{r}\).) We now have _three_ points in \(J\), so two of the corresponding velocity vectors point to the same side of \(m\). Let \(i,j\in\{i_{l},i_{r},i_{c}\}\) be the two corresponding indices, and for concreteness, _let's assume for the moment that \(h_{+}(i)\) and \(h_{+}(j)\) are the corresponding points in \(J\),_ deferring a discussion of the other cases to the end of the proof. Note that since the rays \(h_{\pm}\) are mutually homoclinic and are in tight position with respect to \(m\), the velocity vectors \(h^{\prime}_{-}(i)\) and \(h^{\prime}_{-}(j)\) point to the same sides of \(m\) as \(h^{\prime}_{+}(i)\) and \(h^{\prime}_{+}(j)\), respectively, and so all four vectors point to the same side. That is, * (a) \(h^{\prime}_{+}(i),h^{\prime}_{+}(j),h^{\prime}_{-}(i),h^{\prime}_{-}(j)\) all point to the same side of \(m\), and * (b) the segment \(I_{+}\subset J\) bounded by \(h_{+}(i)\) and \(h_{+}(j)\) contains no element \(h_{+}(k)\) or \(h_{-}(k)\) where \(k\) is between \(i\) and \(j\). Let \(I_{-}\subset m\setminus J\) be the segment that is bounded by the points \(h_{-}(i)\) and \(h_{-}(j)\).
_Suppose for a moment that we knew that \(I_{-}\) had length less than \(\delta\)._ Then for each \(k\), it is impossible that _both_ \(h_{+}(k)\) and \(h_{-}(k)\) lie in \(I_{-}\), as we're assuming that corresponding intersections of \(h_{\pm}\) with \(m\) stay at least \(\epsilon>\delta\) apart. In particular, if \(k\) is between \(i,j\) and we apply the unlinking condition of Claim 6.5 twice, once to \(i,k\) and once to \(j,k\), we get from this and (b) above that _neither_ \(h_{+}(k)\) _nor_ \(h_{-}(k)\) is contained in \(I_{-}\). So, the four segments \(h_{+}([i,j])\), \(h_{-}([i,j])\), \(I_{+}\) and \(I_{-}\) have disjoint interiors, and we're in the situation of case (I) in the claim, as desired. As constructed above, however, there is unfortunately no reason to believe that the interval \(I_{-}\) has length less than \(\delta\). To rectify this, recall that \(\lambda\) actually has infinitely many non-boundary leaves \(\ell^{n}\). For each such \(\ell^{n}\) and \(p^{n}\in\ell^{n}\cap m\), we can repeat the above construction using constants \(\delta^{n}\to 0\), producing points (say) \(h_{+}(i^{n}),h_{+}(j^{n})\) that lie within the length \(\delta^{n}\)-interval \(J^{n}\ni p^{n}\) and that satisfy properties (a) and (b) above. Choose the sequence \(p^{n}\in\ell^{n}\cap m\) so that it is monotonic in the circular order induced on \(m\), and let \(\delta^{n}\to 0\) fast enough so that the associated intervals \(I_{+}^{n}\) are all disjoint, so that in the circular order on \(m\) we have \[h_{+}(i^{1})<h_{+}(j^{1})<h_{+}(i^{2})<h_{+}(j^{2})<\cdots<h_{+}(i^{n}),\] and where each \(I_{+}^{n}\) is the interval \([h_{+}(i^{n}),h_{+}(j^{n})]\), rather than the complementary arc on \(m\) that has endpoints \(h_{+}(i^{n}),h_{+}(j^{n})\). Then Claim 6.5 implies that \[h_{-}(i^{1})>h_{-}(j^{1})>h_{-}(i^{2})>h_{-}(j^{2})>\cdots>h_{-}(i^{n}).\] Discarding finitely many \(n\), we can assume all the points \(h_{+}(i^{n}),h_{+}(j^{n})\) lie in an interval \(U\subset m\) of length less than \(\delta\). Since the points \(h_{-}(i^{n}),h_{-}(j^{n})\) are at least \(\epsilon>\delta\) away from the corresponding \(+\) points, they all lie in \(m\setminus U\). Then since the interval \(I_{-}^{n}\) is defined to be disjoint from \(I_{+}^{n}\subset U\), we must have \(I_{-}^{n}=[h_{-}(j^{n}),h_{-}(i^{n})]\), rather than the other interval with those endpoints. It follows that at least for large \(n\), all the intervals \(I_{-}^{n}\) are disjoint. Since \(m\) is compact, we can then pick some \(n\) where \(I_{-}^{n}\) has length less than \(\delta\), as desired. Therefore, we are in case (I) in the statement of the claim, and are done. In the argument above we have simplified the notation by assuming that we have points \(h_{+}(i^{n}),h_{+}(j^{n})\in J^{n}\) satisfying conditions (a) and (b), which put us in case (I) at the end. Up to exchanging \(+,-\), the only other relevant case is when our chosen points are \(h_{-}(i^{n}),h_{+}(j^{n})\in J^{n}\). After passing to a subsequence in \(n\), if we are not in the case already addressed, then we may assume that our chosen points are \(h_{-}(i^{n}),h_{+}(j^{n})\in J^{n}\) for all \(n\). And after exchanging \(+\) with \(-\) and passing to a further subsequence, we may assume \[h_{-}(i^{1})<h_{+}(j^{1})<h_{-}(i^{2})<h_{+}(j^{2})<\cdots<h_{-}(i^{n})\] in the circular order on \(m\), and that the interval \(I_{+}^{n}=[h_{-}(i^{n}),h_{+}(j^{n})]\).
Everything from then on works exactly as above: if we set \(I_{-}^{n}\) to be the interval bounded by \(h_{+}(i^{n}),h_{-}(j^{n})\) that is disjoint from \(I_{+}^{n}\), then for some \(n\) we have that the length of \(I_{-}^{n}\) is less than \(\delta\), and it is easy to verify that we are in case (II) of the claim. We now finish the proof of Theorem 6.1. Suppose we are in case (I) of Claim 6.6. Then the two simple closed curves \(\gamma_{\pm}\) drawn on the left in Figure 14 are the projections to \(M\) of the paths in \(\tilde{M}\) obtained by concatenating the arcs \(\tilde{h}_{\pm}|_{[i,j]}\) with lifts \(\tilde{I}_{\pm}\subset\tilde{m}_{j}\) of the intervals \(I_{\pm}\subset m\), see Figure 15. We can homotope one path to the other in \(\tilde{M}\) while preserving the fact that the endpoints are points on \(\tilde{m}_{i},\tilde{m}_{j}\) that differ by the unique deck transformation taking \(\tilde{m}_{i}\) to \(\tilde{m}_{j}\). So projecting down, the simple closed curves \(\gamma_{\pm}\) are freely homotopic in \(M\), and hence bound an annulus \(A\hookrightarrow M\). See the left part of Figure 15. There is a uniform lower bound (depending on \(\lambda,m\)) for the angle at any intersection point of any leaf of \(\lambda\) with \(m\), and the points \(h_{\pm}(i)\) are at least \(\epsilon\) away from each other in \(m\). This implies that there is a uniform lower bound for the Hausdorff distance between \(h_{\pm}|_{[i,j]}\) on \(\partial M\). As long as the bound \(\delta\) on the lengths of \(I_{\pm}\) is small enough, the geodesics in the homotopy classes of \(\gamma_{\pm}\) stay very close to \(h_{\pm}|_{[i,j]}\), and are therefore distinct. So, the curves \(\gamma_{\pm}\) are not homotopic in \(\partial M\), and hence bound an _essential_ annulus \(A\hookrightarrow M\). Choosing \(i,j\) to be large and \(\delta\) to be very small, \(\partial A\) is contained in \(S(\lambda)\) and its intersection number with \(\lambda\) is small. Hence, \(\lambda\) is an intrinsic limit of annuli, in the sense of Lemma 5.12, so we're done. Case (II) is similar. Here, the single simple closed curve \(\gamma\) described in Claim 6.6 (II) bounds a Möbius band \(B\hookrightarrow M\), see the right side of Figure 15. Since \(\partial M\) is orientable, \(B\) is not boundary parallel, and hence by JSJ theory the boundary of a regular neighborhood of \(B\) is an essential annulus \(A\hookrightarrow M\) whose boundary consists of two disjoint curves that are both homotopic to \(\gamma\) on \(\partial M\). As in case (I), we can make the intersection number of \(\partial A\) with \(\lambda\) arbitrarily small, so \(\lambda\) is an intrinsic limit of annuli, and we are done.

Figure 15. On the left, the two paths drawn in heavy ink project to the two simple closed curves \(\gamma_{\pm}\) in Claim 6.6 (I), shown on the left in Figure 14. The shaded region is a rectangle embedded in \(\tilde{M}\) that projects to an embedded annulus \(A\hookrightarrow M\) with boundary \(\gamma_{-}\cup\gamma_{+}\). On the right, the union of the two paths projects to the simple closed curve \(\gamma\) on \(M\) of Claim 6.6 (II), and the shaded region projects to a Möbius band \(B\hookrightarrow M\) with boundary \(\gamma\).

**Remark 6.7**.: The proof of (D) above is quite delicate. Most of this delicacy comes from Claims 6.4 and 6.6, which are needed to ensure that the annuli approximating \(\lambda\) that are produced immediately afterward are _embedded_.
But while we are able to prove these claims using arguments involving the planarity of the closure of \(\partial\tilde{M}\) in \(\mathbb{H}^{3}\cup\partial_{\infty}\mathbb{H}^{3}\), one would not have to worry about these annuli being embedded if there were a strong 'Annulus Theorem' guaranteeing that any essential singular annulus in an irreducible 3-manifold \(M\) can be surgered to give an essential embedded annulus. If this were true, Claims 6.4 and 6.6 could be replaced by a one-paragraph compactness argument. Here, a _singular annulus_ is a map \(f:(A,\partial A)\longrightarrow(M,\partial M)\) where \(A=S^{1}\times[0,1]\). We say \(f\) is _essential_ if it is not homotopic rel \(\partial A\) into \(\partial M\). Such an annulus theorem follows from the JSJ decomposition when \(M\) has incompressible boundary. When \(M\) has compressible boundary, there is a similar theorem as long as the original singular annulus has a spanning arc that is not homotopic rel \(\partial\) into \(\partial M\), see Cannon-Feustel [10]. However, our proof above does not provide such annuli, and indeed such annuli do not exist in compression bodies (the \(M\) of most interest to us), since _any_ proper arc in a compression body \(M\) is homotopic rel \(\partial\) into \(\partial M\). In fact, in a general \(M\), one _cannot_ always surger essential singular annuli to produce embedded essential annuli. For instance, the two curves in Figure 16 bound an essential singular annulus that cannot be surgered to give an embedded essential annulus. However, in that example, one _can_ surger to get a meridian in the handlebody, so maybe an essential singular annulus can always be surgered to give either a meridian or an essential embedded annulus? This also turns out not to be true. Suppose \(P\) is a pair of pants and let \(M=P\times[0,1]\), which is homeomorphic to a genus two handlebody. If \(\gamma\longrightarrow P\) is an immersed figure-8 whose image forms a spine of \(P\), then the singular annulus \(\gamma\times[0,1]\longrightarrow P\times[0,1]\) is essential, but the three embedded annuli that one can obtain from it by surgery are all inessential, and no meridian can be created by surgery either. However, we expect this is the _only_ counterexample. The first author of this paper has spent considerable time trying to prove this with a tower argument, but pushing down the tower is very subtle, since if the obvious constructions fail, one has to characterize the figure-8 example.

Figure 16. The two curves \(\alpha\) and \(\beta\) are homotopic through the handlebody pictured, and therefore bound an essential singular annulus. The only annuli one can produce from surgery are inessential, but one can surger to obtain the 'obvious' disc in the picture that separates the handlebody into two solid tori.

## 7. Hausdorff limits of meridians

Let \(M\) be an orientable hyperbolizable compact 3-manifold, and equip \(\partial_{\chi<0}M\) with a hyperbolic metric. In [35, Theorem B.1], Lecuire showed that every lamination \(\lambda\) on \(\partial_{\chi<0}M\) that is a Hausdorff limit of meridians contains a homoclinic leaf, i.e. a leaf that is a homoclinic geodesic. The converse is not true: certainly, in order to be a Hausdorff limit of meridians \(\lambda\) needs to be connected, and as explained in Figure 1 in the introduction there are even connected laminations that contain homoclinic leaves but are not Hausdorff limits of meridians.
We say that two laminations \(\mu_{1},\mu_{2}\) are _commensurable_ if they contain a common sublamination \(\nu\) such that for both \(i\), the difference \(\mu_{i}\setminus\nu\) is the union of finitely many leaves. \(\mu_{1}\) and \(\mu_{2}\) are _strongly commensurable_ if they contain a common \(\nu\) such that for both \(i\), the difference \(\mu_{i}\setminus\nu\) is the union of finitely many leaves, none of which are simple closed curves. **Theorem 7.1** (Hausdorff limits of meridians).: _Suppose that \(S\subset\partial_{\chi<0}M\) is a connected subsurface with geodesic boundary, such that \(\partial S\) is incompressible, and that the disc set \(\mathcal{D}(S,M)\) is 'large', i.e. it has infinite diameter in the curve graph \(C(S)\). Let \(\lambda\) be a geodesic lamination in \(int(S)\) that is a finite union of minimal laminations, and assume that the following does not hold:_ (\(\star\)) \(S\) _is a closed, genus two surface, there is a separating meridian_ \(\mu\) _that does not intersect_ \(\lambda\) _transversely, and_ \(\lambda\) _intersects transversely the two nonseparating meridians disjoint from_ \(\mu\)_._ _Then \(\lambda\) is strongly commensurable to a Hausdorff limit of meridians in \(S\) if and only if \(\lambda\) is strongly commensurable to a lamination containing a homoclinic leaf, and this happens if and only if one of the following holds:_ 1. \(\lambda\) _is disjoint from a meridian on_ \(S\)_,_ 2. _some component of_ \(\lambda\) _is an intrinsic limit of meridians, or_ 3. _there is an essential (possibly nontrivial) interval bundle_ \(B\subset M\) _over a compact surface_ \(Y\) _that is not an annulus or Möbius band, and there are components_ \(\lambda_{\pm}\subset\lambda\) _that each fill a component of_ \(\partial_{H}B\) _(possibly the same component, if_ \(\partial_{H}B\) _is connected), such that_ \(\lambda_{\pm}\) _are essentially homotopic through_ \(B\)_, as in §2.9, and there is a compression arc_ \(\alpha\) _for_ \(B\) _that is disjoint from_ \(\lambda\)_._ Recall from Proposition 3.1 that when \(\mathcal{D}(S,M)\) does not have infinite diameter in \(C(S)\), it is either empty, consists of a single separating meridian, or consists of a single non-separating meridian \(m\) and all separating curves that are band sums of \(m\). In these cases, it is obvious what the Hausdorff limits of meridians are. For instance, in the last case a finite union \(\lambda\) of minimal laminations in \(S\) is strongly commensurable to a Hausdorff limit of meridians if and only if either \(\lambda=m\) or \(\lambda\subset S\setminus m\). For the 'if' direction, note that if \(\lambda\subset S\setminus m\) then it can be approximated by an arc with endpoints on opposite sides of \(m\), and doing a band sum with \(m\) gives a curve that approximates \(\lambda\). For the 'only if' direction, just note that all meridians are either equal to \(m\) or are contained in \(S\setminus m\). The case (\(\star\)) above really is exceptional. In that case, (1) holds, and at least when \(\mu\subset\lambda\) we have that \(\lambda\) contains a homoclinic leaf, but \(\lambda\) is _not_ commensurable to a Hausdorff limit of meridians. Indeed, let \(T_{\pm}\subset S\setminus\mu\) be the two components of \(S\setminus\mu\) and, hoping for a contradiction, take a sequence of meridians \((m_{n})\) that Hausdorff converges to \(\lambda^{\prime}\supset\lambda\). We can assume after passing to a subsequence that \(m_{n}\) has a \(\mu\)-wave in \(T_{+}\) (say) for all \(n\).
Since \(T_{+}\) is a compressible punctured torus, there is a _unique_ homotopy class rel \(\mu\) of \(\mu\)-waves in \(T_{+}\), so \(\lambda^{\prime}\) contains a leaf \(\ell\) that either intersects \(T_{+}\) in an arc in this homotopy class, or is contained in \(T_{+}\) and is obtained by spinning an arc in this homotopy class around \(\mu\). But then \(\ell\) intersects nontrivially every nonperipheral minimal lamination in \(T_{+}\) other than the unique nonseparating meridian \(\mu_{+}\) of \(T_{+}\), so \(\lambda\) does not intersect \(\mu_{+}\) transversely, contrary to assumption.

### The proof of Theorem 7.1

Most of the proof of Theorem 7.1 is contained in the following results. Assume that \(S\subset\partial_{\chi<0}M\) is a connected subsurface with geodesic boundary, \(\partial S\) is incompressible, and \(\mathcal{D}(S,M)\) is large. **Lemma 7.2**.: _Suppose \(\lambda\subset S\) is a lamination, there is a meridian \(\mu\) that does not intersect \(\lambda\) transversely, and that if \(S\) is a closed surface of genus \(2\) then \(\mu\) is nonseparating. Then \(\lambda\) is strongly commensurable to a Hausdorff limit of meridians on \(S\)._ The proof of Lemma 7.2 uses some ideas that the first author developed with Sebastian Hensel, whom we thank for his contribution. Proof.: We may assume that \(\lambda\) is a finite union of minimal components. It suffices to assume \(\mu\) is not a leaf of \(\lambda\), as long as we prove the conclusion both for such a \(\lambda\) and for \(\lambda\cup\mu\). Assume first that \(\mu\) is nonseparating in \(S\). Let \((c_{i})\) be a sequence of simple closed curves on \(S\) that Hausdorff-converges to \(\lambda\). One can do this by constructing for each component \(\lambda_{0}\subset\lambda\) a simple closed geodesic approximating \(\lambda_{0}\), by taking an arc that runs along a leaf of \(\lambda_{0}\) for a long time, and then closing it up the next time it passes closest to its initial endpoint in the correct direction. Let \(\alpha\) be a simple closed curve on \(S\) that intersects \(\mu\) once, and intersects all the components of \(\lambda\). For each \(k\), let \(\gamma_{i}^{k}\) be the geodesic homotopic to the 'band sum' of \(\mu\) and \(T_{c_{i}}^{k}(\alpha)\), where \(T_{c_{i}}\) is the twist around \(c_{i}\) and a band sum of two curves intersecting once is the boundary of a regular neighborhood of their union. Note that \(\gamma_{i}^{k}\) is a meridian for all \(i,k\). If \((k_{i})\) is a sequence that increases quickly enough, \((\gamma_{i}^{k_{i}})\) converges to a lamination strongly commensurable to \(\lambda\). And if we pick a meridian \(\beta\) on \(S\) that intersects both \(\mu\) and \(\lambda\), then \(T_{\mu}^{i}\circ T_{\gamma_{i}^{k_{i}}}^{i}(\beta)\) Hausdorff converges to a lamination strongly commensurable to \(\lambda\cup\mu\). Now suppose \(\mu\) is separating. We claim that there is another separating meridian in \(S\) that is disjoint from \(\mu\). Let \(m\) be a maximal multicurve of meridians in \(S\) that contains \(\mu\) as a component. Since \(\mathcal{D}(S,M)\) is large, \(m\neq\mu\). If \(m\) has a separating component distinct from \(\mu\), we are done. So, suppose we have a nonseparating component \(m_{0}\subset m\). We can make a (separating) band sum of \(m_{0}\) that is disjoint from \(\mu\) unless \(m_{0}\) lies in a punctured torus component of \(S\setminus\mu\). So, we assume the latter is true.
Since \(\mathcal{D}(S,M)\) is large, it cannot be that \(m=\mu\cup m_{0}\), since then all meridians are disjoint from \(m_{0}\). So, there is another component \(m_{1}\) of \(m\), which we can assume is also nonseparating. This \(m_{1}\) must lie on the opposite side of \(\mu\) from \(m_{0}\), and as before we're done unless the component of \(S\setminus\mu\) containing \(m_{1}\) is also a punctured torus. But in this case, \(S\) is a genus two surface, contrary to assumption. Let \(T\subset S\setminus\mu\) be a component that contains a nonperipheral separating meridian, which we call \(\mu^{\prime}\). Let \(V\) be the other component. Write \(\lambda=\lambda_{T}\cup\lambda_{V}\), where \(\lambda_{T}\subset T\) and \(\lambda_{V}\subset S\setminus T\). Let \(C\) be the compression body with exterior boundary equal to the component of \(\partial M\) that contains \(S\), that one obtains by compressing the meridian \(\mu\). We claim that there are sequences of simple closed curves \((\alpha_{i})\), \((\beta_{i})\) in \(T\) such that the following two properties hold: * \((\alpha_{i})\) and \((\beta_{i})\) both Hausdorff converge to a geodesic lamination strongly commensurable to \(\lambda_{T}\), * for all \(i\), \(\alpha_{i}\) and \(\beta_{i}\) bound an essential annulus in \(C\). To construct these sequences, start by picking a simple closed curve \(\alpha\) in \(T\) such that \(\alpha\) and each component of \(\lambda_{T}\) together fill \(T\). Let \(\beta\) be a simple closed curve on \(T\) such that \(\alpha,\beta,\mu\) bound a pair of pants in \(T\). In \(C\), we can compress the boundary component \(\mu\) of this pair of pants, so \(\alpha,\beta\) bound an annulus in \(C\). Moreover, this annulus is essential, since otherwise \(\alpha,\beta\) bound an annulus in \(T\), implying that \(T\) is a torus with the single boundary component \(\mu\), contradicting the fact that there is a separating nonperipheral meridian in \(T\). Then find a sequence \((c_{i})\) of simple closed curves in \(T\) that Hausdorff converge to a geodesic lamination strongly commensurable to \(\lambda_{T}\), take \(k_{i}\) to be a fast increasing sequence and set \(\alpha_{i}=T_{c_{i}}^{k_{i}}(\alpha)\) and \(\beta_{i}:=T_{c_{i}}^{k_{i}}(\beta)\). Since \(\alpha\) together with each component of \(\lambda_{T}\) fills \(T\), the curve \(\beta\) intersects every component of \(\lambda_{T}\). It follows that \((\alpha_{i})\) and \((\beta_{i})\) Hausdorff converge to a geodesic lamination strongly commensurable to \(\lambda_{T}\). And since each \(c_{i}\) is nonperipheral in \(T\), each component of \(c_{i}\) bounds an annulus in \(C\) with a curve on the interior boundary of \(C\), so the twist \(T_{c_{i}}\) extends to \(C\), implying that \(\alpha_{i},\beta_{i}\) bound an annulus in \(C\) as desired above. Now let \(C^{\prime}\) be the compression body obtained by compressing both \(\mu\) and \(\mu^{\prime}\), so \(C\subset C^{\prime}\subset M\). Note that since both curves are separating and are disjoint, Proposition 3.1 says that \(\mathcal{D}(S,C^{\prime})\) is large, so we can pick a meridian \(m\in\mathcal{D}(S,C^{\prime})\) that intersects \(\mu\) and every component of \(\lambda\). Fix a sequence of geodesic multicurves \((d_{i})\) in \(V\) that Hausdorff converges to \(\lambda_{V}\). As with the twists \(T_{c_{i}}\) in \(C\), the twists \(T_{d_{i}}\) extend to \(C^{\prime}\). And the compositions \(T_{\alpha_{i}}\circ T_{\beta_{i}}^{-1}\) extend to \(C^{\prime}\) because the curves bound annuli in \(C\subset C^{\prime}\).
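The extension principle invoked twice in the previous paragraph is worth recording explicitly; the following is a standard fact, stated with signs left flexible since they depend on orientation conventions. If disjoint simple closed curves \(\alpha,\beta\subset\partial_{+}C\) cobound a properly embedded annulus \(A\subset C\), then the twist of \(C\) along \(A\) is a homeomorphism supported in a collar of \(A\) whose restriction to the exterior boundary is \[T_{A}\big{|}_{\partial_{+}C}\ =\ T_{\alpha}\circ T_{\beta}^{\pm 1},\] and with the conventions above the relevant composition is \(T_{\alpha_{i}}\circ T_{\beta_{i}}^{-1}\). Similarly, if a curve \(c\subset\partial_{+}C\) cobounds an annulus with a curve on the interior boundary, the corresponding annulus twist restricts to \(T_{c}\) on \(\partial_{+}C\), which is why the twists \(T_{c_{i}}\) and \(T_{d_{i}}\) extend.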
We then define \(\gamma_{i}:=(T_{\alpha_{i}}\circ T_{\beta_{i}}^{-1})^{k_{i}}\circ T_{d_{i}}^{k_{i}}(m)\) for some fast increasing \(k_{i}\to\infty\). These \(\gamma_{i}\) are all meridians and Hausdorff converge to a lamination strongly commensurable to \(\lambda\). To obtain \(\lambda\cup\mu\) instead, hit \(\gamma_{i}\) with high powers of twists around \(\mu\). Here is a more powerful version of Lemma 7.2. The idea of the proof is more or less the same, but more complicated. **Proposition 7.3** (Promoting Hausdorff limits).: _Suppose that \(\nu,\eta\) are disjoint geodesic laminations on \(S\) that are finite unions of minimal components, and set \(\lambda=\nu\cup\eta\). Suppose also that no component of \(\nu\) is a meridian._ _Let \(X\) be the union of the subsurfaces with geodesic boundary that are filled by the components of \(\nu\). Suppose that there are disjoint, nonhomotopic meridians \(\mu,\mu^{\prime}\) on \(S\) that are disjoint from \(\eta\), and a sequence of homeomorphisms_ \[f_{i}:S\longrightarrow S,\ \ f_{i}|_{S\setminus\text{int}(X)}=id\] _such that \(\mu_{i}:=f_{i}(\mu)\) and \(\mu_{i}^{\prime}:=f_{i}(\mu^{\prime})\) are both sequences of meridians that Hausdorff converge to laminations strongly commensurable to \(\nu\). Then \(\lambda=\nu\cup\eta\) is strongly commensurable to a Hausdorff limit of meridians in \(S\)._ Before proving the proposition, we record the following application. **Corollary 7.4**.: _Suppose that \(\lambda\) is a geodesic lamination on \(S\) that is a finite union of minimal components. If either_ * _some component_ \(\nu\subset\lambda\) _that is not a simple closed curve is an intrinsic limit of meridians,_ * _there are (possibly equal) components_ \(\lambda_{\pm}\subset\lambda\)_, neither of which is a simple closed curve, and where each fills a component of the horizontal boundary (possibly the same component if_ \(\partial_{H}B\) _is connected) of an essential interval bundle_ \[(B,\partial_{H}B)\hookrightarrow(M,S),\] _where_ \(\lambda_{\pm}\) _are essentially homotopic through_ \(B\)_, and where there is a compression arc_ \(\alpha\) _for_ \(B\) _that is disjoint from_ \(\lambda\)_,_ _then \(\lambda\) is strongly commensurable to a Hausdorff limit of meridians._ Proof.: Suppose some component \(\nu\subset\lambda\) that is not a simple closed curve is an intrinsic limit of meridians. Setting \(X:=S(\nu)\) we can take \((\mu_{i})\) to be any sequence of meridians in \(X\) that Hausdorff converges to a lamination strongly commensurable to \(\nu\). Moreover, since \(\nu\) fills \(X\) and is a limit of meridians, the disc set \(\mathcal{D}(X,M)\) is large, so for each \(i\) there is some meridian \(\mu_{i}^{\prime}\) disjoint from \(\mu_{i}\). Since there are only finitely many topological types of pairs of disjoint curves in \(X\) up to the pure mapping class group of \(X\), after passing to a subsequence we can assume that all \(\mu_{i},\mu_{i}^{\prime}\) are of the form in the proposition. The desired conclusion follows. In the second case, we let \(X\) be the subsurface with geodesic boundary obtained by tightening \(\partial_{H}B\) and set \(\nu=\lambda_{-}\cup\lambda_{+}\). Write the interval bundle as \(\pi:B\longrightarrow Y\), where \(Y\) is a compact surface with boundary. We can assume without loss of generality that \(\alpha\) is a strict compression arc, i.e. that it is homotopic rel endpoints to a fiber \(\pi^{-1}(y),y\in\partial Y\).
Note that since \(\lambda_{\pm}\) are not simple closed curves, \(Y\) is not an annulus or Möbius band. Since \(\lambda_{\pm}\) are essentially homotopic through \(B\), Fact 2.16 says that if our reference hyperbolic metrics are chosen appropriately, we have that \(\lambda_{-}\cup\lambda_{+}=(\pi|_{\partial_{H}B})^{-1}(\bar{\lambda})\) for some geodesic lamination \(\bar{\lambda}\) on \(Y\). Since \(\lambda_{\pm}\) together fill \(\partial_{H}B\), the lamination \(\bar{\lambda}\) is minimal and fills \(Y\). So in particular, it has no closed, one-sided leaves, and therefore if we pick a nonzero transverse measure on \(\bar{\lambda}\), we have that \(\bar{\lambda}\) is the projective limit of a sequence of two-sided nonperipheral simple closed curves \((c_{i})\) in \(Y\), by Theorem 1.2 of [15]. Homotope the \(c_{i}\) on \(Y\) to based simple loops at \(y\in\partial Y\), let \(m(c_{i})\) be the associated compressible curves on \(S\) constructed in Claim 2.11, and let \(\mu_{i}\) be the geodesic meridians on \(S\) in their homotopy classes. Then \((\mu_{i})\) Hausdorff converges to a lamination strongly commensurable to \(\lambda_{-}\cup\lambda_{+}\). After passing to a subsequence, we can assume that all the \(c_{i}\) differ by pure homeomorphisms of \(Y\), in which case the meridians \(\mu_{i}\) are as required in Proposition 7.3, for some \(\mu,f_{i}\). Note that since our compression arc is assumed to be disjoint from \(\lambda\), all the \(\mu_{i}\) are disjoint from \(\eta:=\lambda\setminus\lambda_{\pm}\), and hence so is our \(\mu\). We create disjoint meridians \(\mu_{i}^{\prime}\) similarly, by taking some \(c_{i}^{\prime}\) on \(Y\) disjoint from \(c_{i}\), and letting \(\mu_{i}^{\prime}\) be the geodesic meridian homotopic to \(m(c_{i}^{\prime})\). It then follows from Proposition 7.3 that \(\lambda\) is strongly commensurable to a Hausdorff limit of meridians as desired. We now prove the proposition. Proof of Proposition 7.3.: Assume that \(\mu,\mu^{\prime}\) are disjoint meridians on \(S\) that are disjoint from \(\eta\), that \(f_{i}:S\longrightarrow S\) are homeomorphisms that are the identity outside of \(X\), and that \(\mu_{i}:=f_{i}(\mu)\) and \(\mu_{i}^{\prime}:=f_{i}(\mu^{\prime})\) are sequences of meridians that Hausdorff converge to laminations strongly commensurable to \(\nu\). We want to show that \(\nu\cup\eta\) is strongly commensurable to a Hausdorff limit of meridians on \(S\). We now basically repeat the argument in Lemma 7.2, so the reader should make sure that they understand that argument before continuing here. Suppose \(\mu\) (say) is nonseparating in \(S\). Choose a simple closed curve \(\alpha\) on \(S\) that intersects \(\mu\) once and intersects essentially each component of \(\eta\). Let \(\alpha_{i}:=f_{i}(\alpha)\), and note that \(\alpha_{i}\) intersects \(\mu_{i}\) once, and also intersects essentially each component of \(\eta\). Let \((c_{i})\) be a sequence of geodesic multicurves that Hausdorff converges to \(\eta\), and let \(\gamma_{i}^{k}\) be the geodesic homotopic to the 'band sum' \[B(\mu_{i},T_{c_{i}}^{k}(\alpha_{i}))=T_{c_{i}}^{k}(B(\mu_{i},\alpha_{i}))=T_{c_{i}}^{k}\circ f_{i}(B(\mu,\alpha)), \tag{4}\] where here \(B(\cdot,\cdot)\) takes in two simple closed curves that intersect once and returns the boundary of the regular neighborhood of their union. If one of the inputs in a band sum is a meridian, then so is the output, so \(\gamma_{i}^{k}\) is a meridian for all \(i,k\). The given equalities are true at least for large \(i\).
The first equality holds because \(\mu\) is disjoint from \(\eta\), \(f_{i}=id\) on the subsurfaces filled by the components of \(\eta\), and therefore \(\mu_{i}\) is disjoint from \(c_{i}\) for large \(i\). The second equality is obvious from the definitions of \(\mu_{i},\alpha_{i}\). Let \((k_{i})\) be a fast increasing sequence. After passing to a subsequence, we can assume that \((\gamma_{i}^{k_{i}})\) Hausdorff converges to a lamination \(\lambda\). _We claim that \(\lambda\) is strongly commensurable to \(\nu\cup\eta\)_. First, using the second term in (4), if \(k_{i}\) is huge with respect to \(i\), then \(c_{i}\) is contained in a small neighborhood of \(\gamma_{i}^{k_{i}}\), and so since \((c_{i})\) Hausdorff converges to \(\eta\), _we have \(\lambda\supset\eta\)_. We claim that each \(\gamma_{i}^{k_{i}}\) essentially intersects each component \(X_{0}\subset X\). If not, then from the third term in (4) it follows that \(B(\mu,\alpha)\) is disjoint from \(X_{0}\). But \(\mu\) essentially intersects \(X_{0}\), since otherwise the Hausdorff limit of the \(\mu_{i}\) would not contain the associated component \(\nu_{0}\subset\nu\). So, \(\mu,\alpha\) and \(X_{0}\) all lie in the punctured torus \(T\subset S\) bounded by \(B(\mu,\alpha)\). But since \(\alpha\) intersects every component of \(\eta\), we have that \(\eta\) intersects \(T\) as well, in a collection of arcs disjoint from \(\mu\). Since \(X_{0}\) is disjoint from \(\eta\), \(X_{0}=\mu\), so \(\nu_{0}=\mu\) is a meridian, contrary to our standing assumption. It now follows that the Hausdorff limit \(\lambda\) essentially intersects each component of \(X\). Since \(\gamma_{i}^{k_{i}}\) is disjoint from \(\mu_{i}\) and \((\mu_{i})\) Hausdorff converges to a lamination containing \(\nu\), the laminations \(\lambda,\nu\) cannot intersect transversely. Since each component of \(X\) is filled by a component of \(\nu\), _we have \(\lambda\supset\nu\)_. Finally, if \(Y\) is the union of all the subsurfaces with geodesic boundary that are filled by the components of \(\eta\), then as \(f_{i}=id\) outside \(X\) and \(c_{i}\subset Y\) for large \(i\), the intersection of \(\gamma_{i}^{k_{i}}\) with \(S\setminus(X\cup Y)\) is properly homotopic to the intersection of \(B(\mu,\alpha)\) with \(S\setminus(X\cup Y)\), which is independent of \(i\). It follows that \(\lambda\setminus(\nu\cup\eta)\) _is a finite collection of non-closed leaves,_ so we are done. We can now assume that both \(\mu,\mu^{\prime}\) are separating, so that \(\mu_{i},\mu_{i}^{\prime}\) are also separating for all \(i\). Let \(T_{i}\subset S\setminus\mu_{i}\) be the component containing \(\mu_{i}^{\prime}\), and let \(V_{i}\) be the other component. Note that \(T_{i}\) is not a punctured torus, since it contains a nonperipheral separating curve. Since \(\partial T_{i}\cap\eta=\emptyset\), we have \[\eta=\eta_{T}\sqcup\eta_{V},\] where the first term is the intersection of \(\eta\) with \(T_{i}\), and the second term is defined similarly. Note that since \(f_{i}=id\) on \(S\setminus X\), all the \(\mu_{i}\) induce the same two-element partition of the components of \(S\setminus X\), so at least after passing to a subsequence the decomposition of \(\eta\) above is actually independent of \(i\), which is why we have omitted the \(i\) in the notation. Let \(C\) be the compression body whose exterior boundary is the component of \(\partial M\) containing \(S\), and which is obtained by compressing the curve \(\mu\).
Let \(C^{\prime}\) be similarly obtained by compressing both \(\mu\) and \(\mu^{\prime}\), so that \(C\subset C^{\prime}\subset M\). Since \(C^{\prime}\) admits two nonhomotopic disjoint separating meridians, the disc set \(\mathcal{D}(S,C^{\prime})\) is large by Proposition 3.1, so we can pick a meridian \(m\in\mathcal{D}(S,C^{\prime})\) that intersects every component of \(\nu\cup\eta\), as well as \(\mu,\mu^{\prime}\). Let \(C_{i}\subset C^{\prime}_{i}\subset M\) be the compression bodies obtained by compressing \(\mu_{i},\mu^{\prime}_{i}\). Then \(f_{i}\) extends to a map \(C^{\prime}\longrightarrow C^{\prime}_{i}\), implying that \(m_{i}:=f_{i}(m)\) is a meridian in \(C^{\prime}_{i}\). As in the proof of Lemma 7.2, we can pick sequences \((\alpha_{i}),(\beta_{i})\) of simple closed curves in \(T_{i}\) such that \((\alpha_{i})\) and \((\beta_{i})\) both Hausdorff converge to \(\eta_{T}\), and where \(\alpha_{i},\beta_{i}\) bound an essential annulus in \(C_{i}\) for all \(i\). As in the lemma, \(T_{\alpha_{i}}\circ T_{\beta_{i}}^{-1}\) extends to \(C^{\prime}_{i}\). Let \((c_{i})\) be a sequence of multicurves in \(V_{i}\) that Hausdorff converges to \(\eta_{V}\). Each component of \(c_{i}\) bounds an annulus in \(C^{\prime}_{i}\) with a curve on the interior boundary of \(C^{\prime}_{i}\), and hence the multitwist \(T_{c_{i}}\) extends to a homeomorphism of \(C^{\prime}_{i}\). For any given \(k\), set \[\gamma_{i}^{k}:=(T_{\alpha_{k}}\circ T_{\beta_{k}}^{-1})^{k}\circ T_{c_{k}}^{k}(m_{i}).\] We claim that for fast increasing \(k_{i}\), the curves \(\gamma_{i}^{k_{i}}\) Hausdorff converge to a lamination that is strongly commensurable to \(\nu\cup\eta\) as desired. This is proved using the same types of arguments we employed in the nonseparating case above. In particular, recall that \(m\) was selected to intersect all components of \(\nu\cup\eta\). Since \(f_{i}\) is supported on subsurfaces filled by components of \(\nu\), all the \(m_{i}=f_{i}(m)\) intersect all components of \(\nu\cup\eta\), and hence for large \(k_{i}\) they intersect \(\alpha_{k_{i}},\beta_{k_{i}}\). So, \(\gamma_{i}^{k_{i}}\) is twisted many times around \(\alpha_{k_{i}},\beta_{k_{i}}\), and hence its Hausdorff limit contains \(\nu\). Similarly, the \(m_{i}\) intersect \(c_{k_{i}}\) for large \(i\). Since \(c_{k}\) lies in \(V_{k}\), it is disjoint from \(\alpha_{k}\subset T_{k}\) and \(\beta_{k}\subset T_{k}\), and thus the Hausdorff limit of \(\gamma_{i}^{k_{i}}\) contains \(\eta\). Finally, the Hausdorff limit has no other minimal components because \(\alpha_{k},\beta_{k},c_{k}\) are contained in subsurfaces filled by components of \(\nu\cup\eta\), and \(m_{i}=f_{i}(m)\) is constant outside these subsurfaces. We can now start the proof of the theorem. Proof of Theorem 7.1.: Suppose that \(\lambda\subset S\) is a lamination and \((\star)\) does not hold, so that it is not the case that \(S\) is a genus two surface and \(\lambda\) is disjoint from a separating meridian \(\mu\), but intersects the two nonseparating meridians disjoint from \(\mu\). We want to show that \(\lambda\) is strongly commensurable to a Hausdorff limit of meridians if and only if it is strongly commensurable to a lamination containing a homoclinic leaf, which happens if and only if either 1. \(\lambda\) is disjoint from a meridian, 2. some component of \(\lambda\) is an intrinsic limit of meridians, or 3.
there is an essential (possibly nontrivial) interval bundle \(B\subset M\) over a compact surface \(Y\) that is not an annulus or Möbius band, and there are components \(\lambda_{\pm}\subset\lambda\) that each fill a component of \(\partial_{H}B\), such that \(\lambda_{\pm}\) are essentially homotopic through \(B\), as in §2.9, and there is a compression arc \(\alpha\) for \(B\) that is disjoint from \(\lambda\). Hausdorff limit \(\implies\) homoclinic leaf. Suppose first that \(\lambda\) is strongly commensurable to a Hausdorff limit of meridians \(\lambda^{\prime}\). Then by [35, Theorem B.1], there is a homoclinic leaf \(h\subset\lambda^{\prime}\), so \(\lambda\) is strongly commensurable to a lamination with a homoclinic leaf as desired. Homoclinic leaf \(\implies\) (1), (2) or (3). We now assume we have a homoclinic leaf \(h\) in some lamination strongly commensurable to \(\lambda\). The two ends of \(h\) limit onto (possibly equal) components \(\lambda_{\pm}\subset\lambda\). If one of \(S(\lambda_{\pm})\) has compressible boundary, there is a meridian disjoint from \(\lambda\), so we are in case (1) and are done. So, \(\partial S(\lambda_{\pm})\) is incompressible, and we're in the situation of Theorem 6.1 and Corollary 6.3. We now break into cases. If one of \(\lambda_{\pm}\) is an intrinsic limit of meridians, we're in case (2) and are done. If we're in case (3) of Theorem 6.1 and Corollary 6.3, we're in case (3) of the theorem and are done, unless the given interval bundle \(B\longrightarrow Y\) is over an annulus or Möbius band. But in that case, letting \(c\) be a boundary component of \(Y\), we can consider the geodesic meridian \(\mu\) on \(S\) homotopic to the \(m(c)\) constructed in Claim 2.11, using the compressing arc given by Corollary 6.3. This \(\mu\) is disjoint from \(\lambda\), so we're in case (1) of the theorem. Finally, suppose that the two ends of \(h\) are asymptotic on \(S\), so that \(\lambda_{-}=\lambda_{+}\). Let's separate further into the cases (i) and (ii) in Corollary 6.3. In case (i), using the notation of the corollary, the curve \(c\cup h([-s,s])\) is a meridian disjoint from \(\lambda\). So, we're in case (1) of the theorem. In case (ii), let \(T\) be a neighborhood of \(h\cup\lambda_{\pm}\) that is either a punctured torus or a pair of pants, depending on whether the two ends of \(h\) limit onto opposite sides of \(\lambda_{\pm}\), or onto the same side. Because we're in case (ii), there is a meridian in \(T\). Hence, one of the boundary components of \(T\) is a meridian, and is disjoint from \(\lambda\), so we're done. (1), (2) or (3) \(\implies\) Hausdorff limit. Suppose (1), (2) or (3) holds. We want to show \(\lambda\) is strongly commensurable to a Hausdorff limit of meridians. If \(\lambda\) is disjoint from a meridian, then we're done by Lemma 7.2. If a component of \(\lambda\) is an intrinsic limit of meridians, we're done by the first part of Corollary 7.4. In case (3) above, we're done by the second part of Corollary 7.4.

## 8. Extending partial pseudo-Anosovs to compression bodies

Let \(M\) be a compression body with exterior boundary \(\Sigma\). Let \(S\subset\Sigma\) be an essential subsurface such that \(\partial S\) is incompressible. In this section, we prove: **Theorem 8.1** (Extending partial pseudo-Anosovs).: _Suppose that \(f:\Sigma\longrightarrow\Sigma\) is a partial pseudo-Anosov supported on \(S\).
Then \(f\) has a power that extends to a nontrivial subcompression body of \((M,S)\) if and only if the attracting lamination of \(f\) is a projective limit of meridians that lie in \(S\)._ When \(S=\Sigma\), this is a theorem of Biringer-Johnson-Minsky [2]. The proof of Theorem 8.1 is basically the same as their proof, but we need to go through it anyway, to note the places where parabolics appear, and to deal with the fact that we are looking at subcompression bodies of \((M,S)\) rather than of \(M\). Also, before starting on the bulk of the proof in §8.2, we isolate part of the argument into a separate purely topological subsection, §8.1. This separation of the argument into distinct topological and geometric parts makes it more understandable than the original version, we think.

### Dynamics on the space of marked compression bodies

Let \(\Sigma\) be a closed, orientable surface, and let \(S\subset\Sigma\) be an essential subsurface. The _space of marked \(S\)-compression bodies_ is defined to be \[\operatorname{CBod}(S)=\{(C,h:\Sigma\to\partial_{+}C)\}/\sim,\] where here \(C\) is a compression body, \(h\) is a homeomorphism, and * the multicurve \(h(\partial S)\subset\partial_{+}C\) is incompressible, * there is a multicurve on \(S\) whose \(h\)-image is a cut system for \(C\), i.e. it bounds a collection of disks that cut \(C\) into balls and trivial interval bundles over the interior boundary components. We declare \((C_{i},h_{i}:\Sigma\to\partial_{+}C_{i})\), \(i=1,2\) to be equivalent (written \(\sim\) as above) if there is a homeomorphism \(\phi:C_{1}\to C_{2}\) that respects the boundary markings: that is, \(\phi\circ h_{1}\) and \(h_{2}\) are homotopic maps \(\Sigma\to\partial_{+}C_{2}\). We write \((C_{1},h_{1})\subset(C_{2},h_{2})\) if there is an embedding \(\phi:(C_{1},\partial_{+}C_{1})\hookrightarrow(C_{2},\partial_{+}C_{2})\) that respects the boundary markings. This gives a partial ordering on \(\operatorname{CBod}(S)\). We often identify \(\Sigma\) with \(\partial_{+}C\) instead of specifying the boundary marking, and simply write \(C\) for an element of \(\operatorname{CBod}(S)\). So \(\operatorname{CBod}(S)\) is the set of all compression bodies with exterior boundary \(\Sigma\) that one obtains by compressing curves in \(S\) (without compressing boundary curves), up to the obvious equivalence. A marked \(S\)-compression body \((C,h)\) has a _disk set_ \(\mathcal{D}(C)\subset\mathcal{C}(S)\), where a simple closed curve \(\gamma\in\mathcal{C}(S)\) lies in the disc set if \(h(\gamma)\) is a meridian in \(C\). In fact, the disk set \(\mathcal{D}(C)\) determines \((C,h)\) up to equivalence, say by an argument similar to the last paragraph of the proof of Fact 2.4. The set \(\operatorname{CBod}(S)\) can then be identified with the 'set of all disk sets' in \(\mathcal{C}(S)\). It then inherits a topology as a subset of the power set \(\mathcal{P}(\mathcal{C}(S))\), wherein \(D_{n}\to D\) if and only if for every \(c\in\mathcal{C}(S)\), we have either \(c\in D\) and \(c\in D_{n}\) for all large \(n\), or \(c\not\in D\) and \(c\not\in D_{n}\) for all large \(n\). **Lemma 8.2**.: _If \(C_{n}\to C\) in \(\operatorname{CBod}(S)\), then \(C\subset C_{n}\) for large \(n\)._ Proof.: Suppose that \(C\) is obtained by compressing a finite set \(\Gamma\) of disjoint simple closed curves on \(S\). For large \(n\), we have \(\Gamma\subset\mathcal{D}(C_{n})\), so \(C\subset C_{n}\).
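For orientation, the topology just defined is nothing exotic: identifying a disk set \(D\subset\mathcal{C}(S)\) with its indicator function \(\mathbf{1}_{D}\), the convergence above says exactly that \[D_{n}\to D\iff\mathbf{1}_{D_{n}}(c)\to\mathbf{1}_{D}(c)\ \text{ for every }c\in\mathcal{C}(S),\] i.e. \(\mathcal{P}(\mathcal{C}(S))\cong\{0,1\}^{\mathcal{C}(S)}\) carries the product topology, which is compact by Tychonoff's theorem. This is the compactness invoked at the start of the proof of Lemma 8.3 below.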
**Lemma 8.3**.: \(\operatorname{CBod}(S)\) _is compact._ Proof.: As \(\mathcal{P}(\mathcal{C}(S))\) is compact, we want to show that \(\operatorname{CBod}(S)\) is closed. Suppose \(C_{n}\) is a sequence of marked compression bodies with disk sets \[D_{n}=\mathcal{D}(C_{n})\subset\mathcal{C}(S),\] and that \(D_{n}\to D\subset\mathcal{C}(S)\). Let \(\Gamma\) be a maximal set of disjoint, pairwise non-homotopic elements of \(D\). Compressing \(\Gamma\) yields a marked compression body \(C\). Since \(\Gamma\) is finite, \(\Gamma\subset D_{n}\) for large \(n\), so \(\mathcal{D}(C)\subset D_{n}\). Thus, \(\mathcal{D}(C)\subset D\). It therefore suffices to show \(D\subset\mathcal{D}(C)\). Suppose this is not the case, and pick \(\beta\in D\setminus\mathcal{D}(C)\) such that the intersection number of \(\beta\) and \(\Gamma\) is minimal. By maximality of \(\Gamma\), this intersection number cannot be zero. Since \(\beta\in D\), if \(n\) is large we have \(\beta\in\mathcal{D}(C_{n})\). By an outermost disk argument, if \(\gamma\in\Gamma\) is a component that intersects \(\beta\), there is an arc \(c\subset\gamma\) with endpoints on \(\beta\) and interior disjoint from \(\beta\), that is homotopic rel endpoints in \(C_{n}\) to the two arcs \(b_{1},b_{2}\subset\beta\) into which \(\beta\) is cut by \(\partial c\). Passing to a subsequence, we can assume that \(c,b_{1},b_{2}\) are independent of \(n\). Then \(c\cup b_{1}\) and \(c\cup b_{2}\) are both meridians in \(C_{n}\) for all large \(n\), and hence lie in \(D\). Since they intersect \(\Gamma\) fewer times than \(\beta\) does, they lie in \(\mathcal{D}(C)\). But then \(\beta\) (which is a band sum of the two curves) also lies in \(\mathcal{D}(C)\), a contradiction. Let \(f:\Sigma\longrightarrow\Sigma\) be a homeomorphism with \(f=id\) on \(\Sigma\setminus S\). Then \(f\) acts on \(\operatorname{CBod}(S)\) by \(f\cdot(C,h)=(C,h\circ f^{-1})\). When we regard marked \(S\)-compression bodies as compression bodies with exterior boundary _equal_ to \(\Sigma\), we'll just write \(C\) and \(f(C)\) for a marked compression body and its image. Note that \(f(C)=C\) if and only if \(f\) extends to a homeomorphism of \(C\). Fixing \(M\in\operatorname{CBod}(S)\) and \(f\) as above, let \(\mathcal{A}\) be the set of accumulation points in \(\operatorname{CBod}(S)\) of the \(f\)-orbit of \(M\), and let \[\mathcal{A}_{min}=\{C\in\mathcal{A}\mid\nexists D\in\mathcal{A}\text{ such that }D\subsetneq C\}\] be the subset consisting of all minimal elements of \(\mathcal{A}\). **Theorem 8.4** (Existence of maximal subcompression body).: \(\mathcal{A}_{min}\) _is a finite \(f\)-orbit that contains a single element \(C_{f}\) such that \(C_{f}\subset M\)._ _Moreover, \(C_{f}\) is the unique maximal element of \(\operatorname{CBod}(S)\) such that \(C_{f}\subset M\) and a power of \(f\) extends to \(C_{f}\)._ This result was proved in [2] when \(S=\Sigma\). Our proof follows the same general lines, but is topological instead of hyperbolic geometric. We proceed with a series of lemmas. **Lemma 8.5**.: \(\mathcal{A}_{min}\) _is nonempty, finite and \(f\)-invariant._ Proof.: \(\mathcal{A}\) is nonempty, since \(\operatorname{CBod}(S)\) is compact. This implies that \(\mathcal{A}_{min}\) is nonempty, for example since the 'height' of a compression body is nonnegative and decreases under strict containment, see §3 of [3]. By Lemma 8.2, \(\mathcal{A}_{min}\) is discrete.
But \(\mathcal{A}_{min}\) is closed in \(\mathcal{A}\), which is closed in \(\operatorname{CBod}(S)\), which is compact. So, \(\mathcal{A}_{min}\) is compact, and must be finite. Finally, \(\mathcal{A}_{min}\) is \(f\)-invariant since \(\mathcal{A}\) is and the \(f\)-action respects containment. **Lemma 8.6**.: _Suppose that \(C_{1},C_{2}\in\operatorname{CBod}(S)\) with \(C_{1},C_{2}\subset M\), and that \(f^{i}(C_{1})=C_{1}\) and \(f^{j}(C_{2})=C_{2}\) for some positive integers \(i,j\). Then there is an element \(C\in\mathcal{A}_{min}\) such that \(C_{1},C_{2}\subset C\subset M\)._ Proof.: Every element of \(\mathcal{A}\) is the image under a power of \(f\) of an accumulation point of the sequence \(f^{nij}(M)\), so since \(\mathcal{A}_{min}\) is \(f\)-invariant there is some \(C^{\prime}\in\mathcal{A}_{min}\) to which a subsequence of \(f^{nij}(M)\) limits. As \(C_{1},C_{2}\subset f^{nij}(M)\) for all \(n\), we must have \(C_{1},C_{2}\subset C^{\prime}\). By Lemma 8.2, there is some \(n\) such that \(f^{nij}(M)\supset C^{\prime}\). Then \(C:=f^{-nij}(C^{\prime})\in\mathcal{A}_{min}\) is contained in \(M\) and must contain \(C_{1},C_{2}\) as well. **Lemma 8.7**.: _There is a unique element \(C_{f}\in\mathcal{A}_{min}\) that is contained in \(M\), and \(\mathcal{A}_{min}\) is an \(f\)-orbit._ Proof.: Applying the previous lemma to two copies of the trivial compression body \(\Sigma\times I\) shows that \(\mathcal{A}_{min}\) has an element that is contained in \(M\). So, suppose that \(C,D\in\mathcal{A}_{min}\) are both contained in \(M\). By the previous lemma, there is another element of \(\mathcal{A}_{min}\) that contains them both, which contradicts the minimality assumption unless \(C=D\). Therefore, there is a unique element \(C_{f}\in\mathcal{A}_{min}\) that is contained in \(M\). To show that \(\mathcal{A}_{min}\) is an \(f\)-orbit, suppose that \(C\in\mathcal{A}_{min}\). Since \(C\) is an accumulation point, there is some \(n\) such that \(f^{n}(M)\supset C\). Then \(f^{-n}(C)\subset M\), implying that \(f^{-n}(C)=C_{f}\) by uniqueness. This finishes the proof of Theorem 8.4, since Lemma 8.5 shows that a power of \(f\) extends to \(C_{f}\) and Lemmas 8.6 and 8.7 imply that any subcompression body of \(M\) to which a power of \(f\) extends is contained in \(C_{f}\). ### The proof of Theorem 8.1 Let \(S\subset\Sigma=\partial_{+}M\) be a compact essential subsurface, with \(\partial S\) incompressible in \(M\), and let \(f:\Sigma\longrightarrow\Sigma\) be a homeomorphism that restricts to a pseudo-Anosov map on \(S\) and is the identity on \(\Sigma\setminus S\). The 'only if' direction of the theorem is trivial. Namely, suppose that some power \(f^{k}\) extends to a nontrivial subcompression body \(C\) of \((M,S)\). Pick a meridian \(m\subset S\) for \(C\). Then \((f^{kn}(m))_{n\geq 1}\) is a sequence of meridians in \(M\) that lie in \(S\) and converge to the attracting lamination of \(f\). For the 'if' direction of the theorem, assume that no nonzero power of \(f\) extends to a nontrivial subcompression body of \((M,S)\). We must show that the attracting lamination \(\lambda^{+}\) is not in the limit set \(\Lambda(S,M)\). The argument is similar to the proof of the main theorem in [2]. As such, we will sketch the argument in places and refer to [2] for details. Consider the sequence \(M_{n}=f^{-n}(M)\) of marked \(S\)-compression bodies, where we consider the exterior boundary of each \(M_{n}\) as identified with the surface \(\Sigma\). 
Fix a base point \([X]\in\mathcal{T}(\Sigma)\) and give the interior of each \(M_{n}\) a geometrically finite hyperbolic metric such that the end adjacent to the exterior boundary \(\Sigma=\partial_{+}M_{n}\) is convex cocompact, and when its conformal boundary is identified with \(\Sigma\), the conformal structure is \([X]\). Let \[\rho_{n}:\pi_{1}\Sigma\longrightarrow\operatorname{PSL}_{2}\mathbb{C},\ \ N_{n}:=\mathbb{H}^{3}/\rho_{n}(\pi_{1}\Sigma)\] be a representation (unique up to conjugacy) uniformizing the interior of \(M_{n}\) and compatible with our markings, in the sense that \(\rho_{n}\) is the composition of the map \(\pi_{1}\Sigma\longrightarrow\pi_{1}M_{n}\cong\pi_{1}N_{n}\) induced by inclusion and a faithful uniformizing representation of \(\pi_{1}N_{n}\). Note that the kernel of \(\rho_{n}\) is \[\ker(\rho_{n})=f_{*}^{-n}(\ker(\pi_{1}\Sigma\to\pi_{1}M)).\] By Theorem 8.4 and the assumption that no power of \(f\) extends to a nontrivial subcompression body of \((M,S)\), the only minimal accumulation point of \((M_{n})\) in \(\operatorname{CBod}(S)\) is the trivial compression body. So in particular, we can choose a subsequence \((M_{n_{j}})\) that converges to the trivial compression body. By the compactness of generalized Bers slices (see [2, Theorem 4.3]), we may assume after appropriate conjugations and passing to a further subsequence that \((\rho_{n_{j}})\) converges algebraically to a representation \[\rho_{\infty}:\pi_{1}\Sigma\longrightarrow\operatorname{PSL}_{2}\mathbb{C}, \ \ N_{\infty}:=\mathbb{H}^{3}/\rho_{\infty}(\pi_{1}\Sigma)\] and that \(N_{\infty}\) can be identified with the interior of a compression body \(M_{\infty}\) with exterior boundary \(\Sigma\) in such a way that the end of \(N_{\infty}\) adjacent to \(\Sigma\) is convex cocompact with conformal structure \([X]\) and the representation \(\rho_{\infty}\) is compatible with the marking in the same way as before. The disk set \(\mathcal{D}(S,M_{\infty})\) consists of all simple closed curves on \(S\) represented by elements \(\gamma\in\pi_{1}\Sigma\) with \(\rho_{\infty}(\gamma)=1\). By Chuckrow's Theorem (see [2, Lemma 2.11]), \(\rho_{\infty}(\gamma)=1\) if and only if \(\rho_{n_{j}}(\gamma)=1\) for all sufficiently large \(j\). Since \((M_{n_{j}})\) converges to the trivial compression body in the topology of \(\operatorname{CBod}(S)\), it follows that the surface \(S\subset\Sigma=\partial_{+}M_{\infty}\) is incompressible in \(M_{\infty}\). **Claim 8.8**.: _The repelling lamination \(\lambda^{-}\) of \(f\) is unrealizable in \(N_{\infty}\)._ Proof.: The proof is almost identical to that of [2, Lemma 6.2], so we offer a sketch and refer the reader to their paper for details. Fixing an \(M\)-meridian \(\gamma\subset S\), the sequence \(f^{-n_{j}}(\gamma)\) converges in the Hausdorff topology to a lamination \(\lambda_{M}\) that is the union of \(\lambda^{-}\) and finitely many leaves spiraling onto it. It suffices by [7, Theorem 2.3] to show that \(\lambda_{M}\) is unrealizable. So, hoping for a contradiction, assume \(\lambda_{M}\) _is_ realizable; then \(\lambda_{M}\) is carried by a train track \(\tau\) that maps nearly straightly into \(N_{\infty}\) (see [2]). By algebraic convergence, \(\tau\) also maps nearly straightly into \(N_{n_{j}}\) when \(j\) is large. Since \(f^{-n_{j}}(\gamma)\to\lambda_{M}\), for large \(j\) the curve \(f^{-n_{j}}(\gamma)\) is carried by \(\tau\). 
This implies that \(f^{-n_{j}}(\gamma)\) is geodesically realizable in \(N_{n_{j}}\) for large \(j\), contradicting the fact that it is homotopically trivial. By work of Thurston [51, Prop 9.7.1], the \(\pi_{1}\)-injective surface \(S\subset\partial_{+}M_{\infty}\) is isotopic into a degenerate end of \(N_{\infty}\) with ending lamination \(\lambda^{-}\). In particular, the peripheral curves of \(S\) represent cusps in \(N_{\infty}\) and every non-peripheral curve on \(S\) has hyperbolic type in \(N_{\infty}\). Any pair of disjoint non-peripheral simple closed curves on \(S\) can then be realized geodesically by a pleated surface \(S\longrightarrow N_{\infty}\) in the given homotopy class, and Thurston's compactness of pleated surfaces (see [43, Lemma 6.13]) implies the following. **Lemma 8.9** (compare with Lemma 6.3, [2]).: _Let \(\alpha\subset S\) be a simple closed curve. Then for every \(k\), there is some \(K\) such that for any other simple closed curve \(\beta\) in \(S\), we have_ \[\mathrm{d}_{\mathcal{C}(S)}(\alpha,\beta)\leq k\ \Longrightarrow\ \mathrm{d}_{N_{\infty}}(\alpha_{\infty},\beta_{\infty})\leq K,\] _where \(\alpha_{\infty}\) and \(\beta_{\infty}\) are the geodesics in \(N_{\infty}\) in the homotopy classes of \(\alpha\) and \(\beta\)._ Hoping for a contradiction, suppose now that \(\lambda^{+}\in\Lambda(S,M)\). When regarded as an element of \(\partial_{\infty}\mathcal{C}(S)\), the support of \(\lambda^{+}\) is then an accumulation point of \(\mathcal{D}(S,M)\subset\mathcal{C}(S)\). If \(\alpha\in\mathcal{C}(S)\), then for \(n=1,2,\ldots\) the sequence \((f^{n}(\alpha))\) is a quasi-geodesic path that limits to \(\lambda^{+}\in\partial_{\infty}\mathcal{C}(S)\), see [41]. Since \(\mathcal{D}(S,M)\) is a quasi-convex subset (see Theorem 2.3, due to Masur-Minsky), there is a constant \(C\) and for each \(n\) a meridian \(\gamma_{n}\in\mathcal{D}(S,M)\) with \[d_{\mathcal{C}(S)}(f^{n}(\alpha),\gamma_{n})\leq C.\] Translating the points \(f^{n}(\alpha)\) and \(\gamma_{n}\) by \(f^{-n}\), this becomes: \[d_{\mathcal{C}(S)}(\alpha,f^{-n}(\mathcal{D}(S,M)))\leq C.\] By Lemma 8.9, an element \(\gamma_{n_{j}}\in f^{-n_{j}}(\mathcal{D}(S,M))\) at distance at most \(C\) from \(\alpha\) in \(\mathcal{C}(S)\) can be geodesically realized in some fixed compact subset \(A\subset N_{\infty}\). Algebraic convergence implies that for sufficiently large \(j\) this geodesic can be pulled back and tightened to a geodesic in \(N_{n_{j}}\). But by construction, \(\gamma_{n_{j}}\) is a meridian in \(M_{n_{j}}\), so it cannot possibly be realized geodesically in \(N_{n_{j}}\), which is a contradiction. ## 9. Extending reducible maps to compression bodies We present here a generalization of [2, Theorem 1.1] that characterizes which (possibly reducible) mapping classes of the boundary of a 3-manifold \(M\) have powers that extend to subcompression bodies. In what follows, let \(M\) be a compression body with exterior boundary \(\partial_{+}M\). Let \(S\subset\partial_{+}M\) be an essential subsurface such that \(\partial S\) is incompressible. Let \(f:\partial_{+}M\longrightarrow\partial_{+}M\) be a homeomorphism that is 'supported' in \(S\), meaning that \(f=id\) on \(\partial_{+}M\setminus S\). **Definition 9.1**.: We say that \(f\) is _pure_ if there are disjoint, compact, essential \(f\)-invariant subsurfaces \(S_{i}\subset S\), \(i=1,\ldots,n\), such that \(f=id\) on \(S_{id}:=S\setminus\cup_{i}S_{i}\), and where for each \(i\), if we set \(f_{i}:=f|_{S_{i}}\), then either 1. 
\(S_{i}\) is an annulus and \(f_{i}\) is a power of a Dehn twist, or 2. \(f_{i}\) is a pseudo-Anosov map on \(S_{i}\). It follows from the Nielsen-Thurston classification, see [16], that every \(f\) has a power that is isotopic to a pure homeomorphism. When \(f\) is pure, with associated restrictions \(f_{i}:S_{i}\longrightarrow S_{i}\) as above, we define a geodesic lamination \(\lambda=\cup_{i}\lambda_{i}\) on \(S\), where each \(\lambda_{i}\subset S_{i}\) is defined as follows. If \(f_{i}\) is pseudo-Anosov, we let \(\lambda_{i}\) be the support of the attracting lamination of \(f_{i}\). If \(f_{i}\) is a Dehn twist, we let \(\lambda_{i}\) be the core curve of the annulus \(S_{i}\). So defined, \(\lambda\) is called the _attracting lamination_ of the pure homeomorphism \(f\). **Theorem 9.2**.: _Suppose that \(S\subset\partial_{+}M\) is an essential subsurface such that the multicurve \(\partial S\) is incompressible. Let \(f:\partial_{+}M\longrightarrow\partial_{+}M\) be a pure homeomorphism supported in \(S\). Then \(f\) has a power that extends to a nontrivial subcompression body of \((M,S)\) if and only if either:_ 1. _there is a meridian in_ \(S_{id}\)_,_ 2. _for some_ \(i\)_, the map_ \(f_{i}\) _has a power that extends to a nontrivial subcompression body of_ \((M,S_{i})\)_, or_ 3. _there are (possibly equal) indices_ \(i,j\) _such that_ \(S_{i},S_{j}\) _bound an essential interval bundle_ \(B\) _in_ \(M\)_, such that some power of_ \(f|_{S_{i}\cup S_{j}}\) _extends to_ \(B\)_, and there is a compression arc_ \(\alpha\) _for_ \(B\) _whose interior lies in_ \(S_{id}\)_._ Recall from §2.7 that a subcompression body of \((M,S)\) is a compression body obtained from \(\partial_{+}M\) by compressing some meridian multicurve in \(S\). Theorem 8.1 says that (2) is equivalent to asking that the attracting lamination (say) of \(f_{i}\) is a projective limit of meridians in \(S_{i}\). In (3), the condition that a power of \(f|_{S_{i}\cup S_{j}}\) extends to \(B\) is easier to check. Indeed, if \(\sigma:\partial_{H}B\longrightarrow\partial_{H}B\) is the canonical involution, as defined in §2.5, then by Fact 2.7 we have that \(f^{k}|_{\partial_{H}B}\) extends to \(B\) exactly when \(\sigma\circ f_{i}^{k}\circ\sigma^{-1}\) is isotopic to \(f_{j}^{k}\). When \(B\) is a twisted interval bundle, \(f_{i},f_{j}\) are both pseudo-Anosovs on \(\partial_{H}B\) and this means that as mapping classes we have \(f_{j}=g\circ f_{i}\) for some finite order \(g\) commuting with both \(f_{i},f_{j}\), see e.g. McCarthy's thesis [44]. When \(B\) is a trivial interval bundle, \(\sigma\) identifies \(S_{i}\) and \(S_{j}\), and we have similarly that \(f_{j}=g\circ(\sigma\circ f_{i}\circ\sigma^{-1})\) for some \(g\) commuting with both. Proof of Theorem 9.2.: Let's start with the 'if' direction, since that's easier. If there is a meridian in \(S_{id}\), then \(f\) extends to the compression body obtained by compressing that meridian. Suppose (2) holds, so that some power \(f_{i}^{k}\) extends to a nontrivial subcompression body \(C\) of \((M,S_{i})\). Then \(f^{k}\) also extends to \(C\), since all the \(S_{j}\), where \(j\neq i\), bound trivial interval bundles with subsurfaces of the interior boundary of \(C\). So we're done. The only interesting case is if (3) holds, so that some \(S_{i},S_{j}\) bound an essential interval bundle \(B\) in \(M\) such that some power \(f^{k}|_{S_{i}\cup S_{j}}\) extends to \(B\), and there is a compression arc \(\alpha\) for \(B\) whose interior lies in \(S_{id}\). 
Here, let \(S^{\prime}\subset S\) be the smallest essential subsurface containing \(S_{i},S_{j}\) and \(\alpha\); so, \(S^{\prime}\) is obtained from a regular neighborhood of the union of these three subsets of \(S\) by capping off any inessential boundary components with disks. Let \(C\) be the characteristic compression body of the pair \((M,S^{\prime})\), as defined in Fact 2.4. We claim that \(f^{k}\) extends to \(C\). To see this, note that we can construct \(C\) as follows. For concreteness, first assume that the boundary components of \(S_{i},S_{j}\) that contain the endpoints of \(\alpha\) bound an annulus \(A\supset\alpha\) on \(S\). Then \(S^{\prime}=S_{i}\cup S_{j}\cup A\), the annulus \(A\) is parallel in \(M\) to a component \(A^{\prime}\subset\operatorname{Fr}(B)\) that is an annulus with the same boundary curves as \(A\), and \(C\) is the union of the interval bundle \(B\), the solid torus bounded by \(A,A^{\prime}\), and a trivial interval bundle over \(\partial_{+}M\setminus S^{\prime}\). We can then extend \(f^{k}\) to \(C\) by letting it be the given extension of \(f^{k}|_{S_{i}\cup S_{j}}\) on \(B\), the identity on the solid torus, and the obvious fiber preserving extension of \(f^{k}|_{\partial_{+}M\setminus S^{\prime}}\) to the adjacent interval bundle. The case that the boundary components of \(S_{i},S_{j}\) that contain the endpoints of \(\alpha\) do not bound an annulus on \(S\) is similar, except that instead of the solid torus above we take a thickened disk bounded by a rectangular neighborhood of \(\alpha\) on \(S\setminus(S_{i}\cup S_{j})\), and a rectangular neighborhood of the homotopic arc on the frontier of \(B\). We now work on the 'only if' direction. Passing to a power, suppose that \(f\) extends to a nontrivial subcompression body \(C\) of \((M,S)\). We may assume that there is no proper, \(f\)-invariant essential subsurface \(S^{\prime}\subset S\) such that \(f|_{S^{\prime}}\) extends to a nontrivial subcompression body of \((M,S^{\prime})\). If there were such a subsurface \(S^{\prime}\), we could replace \(S\) by a minimal such \(S^{\prime}\), thereby reducing the argument to the minimal case we are assuming we are in above. If \(f=id\) on \(S\), the fact that there is a nontrivial subcompression body of \((M,S)\) means there is a meridian in \(S=S_{id}\), so we're in case (1) and are done. This case may seem silly, but observe that if \(f\) is some complicated pure homeomorphism where there's a meridian in \(S_{id}\), the associated 'minimal' case that we pass to in the previous paragraph is where \(S\) is an annular neighborhood of some such meridian, and \(f=id\) on \(S\). Assume from now on that \(f\) is not the identity map of \(S\). We claim that every meridian \(m\in\mathcal{D}(C,S)\) intersects every component of \(\lambda\). Indeed, suppose some \(\lambda_{i}\) is disjoint from some such \(m\) and let \(S^{\prime}\) be the component of \(S\setminus S_{i}\) containing \(m\). Since \(f\) extends to \(C\) and \(S^{\prime}\subset\partial_{+}C\) is \(f\)-invariant, \(f\) extends to the characteristic subcompression body \(C^{\prime}\) of the pair \((C,S^{\prime})\), defined by starting with \(\partial_{+}M\) and compressing all meridians of \(C\) that lie in \(S^{\prime}\), see Fact 2.4. Since \(m\) is a meridian in \(C^{\prime}\), this \(C^{\prime}\) is nontrivial, which contradicts the minimality assumption in the first paragraph. Pick a meridian \(m\in\mathcal{D}(C,S)\). 
Since \(m\) intersects all components of \(\lambda\), the sequence of meridians \(m_{i}:=f^{i}(m)\) Hausdorff converges to a lamination \(\lambda^{\prime}\) strongly commensurable to \(\lambda\). Applying Theorem 7.1 to the pair \((C,S)\) and using that all meridians in \(C\) intersect all components of \(\lambda\), we have that either: * some component \(\lambda_{a}\) is an intrinsic limit of meridians lying in \(S(\lambda_{a})\), in which case Theorem 8.1 (applied to \(f_{a}:S_{a}\longrightarrow S_{a}\)) says we're in case (2), or * there are indices \(a,b\) such that \(S_{a},S_{b}\) bound an essential interval bundle \(B\subset C\), where \(\lambda_{a},\lambda_{b}\) are essentially homotopic in \(B\), and where there is a compression arc \(\alpha\subset S\) for \(B\) that is disjoint from \(\lambda\), and hence can be isotoped so that its interior lies in \(S_{id}\). Let's assume we're in the last case, since otherwise we're done. We want to show that some power of \(f_{a}\cup f_{b}:\partial_{H}B\longrightarrow\partial_{H}B\) extends to \(B\). First, suppose that \(B\) is a twisted interval bundle, so that \(S_{a}=S_{b},\ f_{a}=f_{b},\ \lambda_{a}=\lambda_{b}\). Using just the index \(a\) from now on, if \(\sigma\) is the canonical involution of \(B\), then Fact 2.16 implies that \(\sigma(\lambda_{a})\) is isotopic to \(\lambda_{a}\). Let \(A\subset\mathcal{T}(S_{a})\) be the axis of \(f_{a}\) on the Teichmüller space \(\mathcal{T}(S_{a})\). By Theorem 12.1 of [17] and Theorem 2 of [38], we have that \(A,\sigma(A)\) are asymptotic, so as they are both pseudo-Anosov axes they must be equal by discreteness of the action of the mapping class group. Since \(\sigma\) has finite order, it then fixes \(A\) pointwise. Now the group \(\Gamma=\langle\sigma,f_{a}\rangle\subset\operatorname{Mod}(\partial_{H}B)\) is isomorphic to the direct product of a finite group fixing \(A\) pointwise and a cyclic group of pseudo-Anosovs, so for some positive \(k\) we have \(\sigma\circ f_{a}^{k}=f_{a}^{k}\circ\sigma\) in \(\operatorname{Mod}(\partial_{H}B)\). By Theorem 3 of [4] we may isotope \(f_{a}^{k},\sigma\) so that they commute, while preserving that \(\sigma^{2}=id\); we can then alter the bundle map \(\pi:B\longrightarrow Y\) so that the new \(\sigma\) is still the canonical involution. It follows that \(f_{a}^{k}\) is a lift to \(\partial_{H}B\) of a pseudo-Anosov map \(g:Y\longrightarrow Y\), and hence \(f_{a}^{k}\) extends to \(B\) as desired, see Fact 2.7. Next, assume \(B\) is a trivial interval bundle, with canonical involution \(\sigma\) that switches \(S_{a},S_{b}\). As in the previous paragraph, we have that \(\sigma(\lambda_{b})\) is isotopic to \(\lambda_{a}\), so \(\Gamma=\langle f_{a},\sigma\circ f_{b}\circ\sigma^{-1}\rangle\subset\operatorname{Mod}(S_{a})\) is a direct product \(\Gamma=F\times\langle\phi\rangle\) of a finite group \(F\) and a cyclic group generated by a pseudo-Anosov \(\phi\), where if we quotient by \(F\) then \(f_{a}\) and \(\sigma\circ f_{b}\circ\sigma^{-1}\) both project to positive powers of \(\phi\). It suffices to show that they project to the _same_ positive power of \(\phi\), for then we are done by the same argument as in the previous paragraph. For this, recall that all meridians of \(C\) intersect \(S_{a},S_{b}\), so these surfaces are 'holes' for the disk set of \(C\), as discussed in [40]. 
So with \(m_{i}=f^{i}(m)\) the sequence of meridians in \(C\) constructed above, Lemma 12.20 of [40] says that for each \(i\), the distance in the arc complex of \(S_{a}\) between \[m_{i}\cap S_{a}=f^{i}_{a}(m\cap S_{a})\ \ \text{and}\ \ \sigma(m_{i}\cap S_{b})=( \sigma\circ f_{b}\circ\sigma^{-1})^{i}\sigma(m\cap S_{b})\] is at most \(6\). However, if \(f_{a}\) and \(\sigma\circ f_{b}\circ\sigma^{-1}\) project to different positive powers of \(\phi\), their stable translation lengths on the arc complex of \(S_{a}\) are different, which is a contradiction.
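For concreteness, the final step can be quantified with stable translation lengths; the estimate below is standard, and the notation is ours rather than the original version's. Writing \(\ell(g)=\lim_{i\to\infty}d(x,g^{i}x)/i\) for the stable translation length of a mapping class \(g\) acting on the arc complex of \(S_{a}\), if \(\ell(f_{a})\neq\ell(\sigma\circ f_{b}\circ\sigma^{-1})\) then the triangle inequality gives \[d\bigl(f_{a}^{i}(m\cap S_{a}),\,(\sigma\circ f_{b}\circ\sigma^{-1})^{i}\sigma(m\cap S_{b})\bigr)\ \geq\ \bigl|\ell(f_{a})-\ell(\sigma\circ f_{b}\circ\sigma^{-1})\bigr|\,i-o(i)\ \longrightarrow\ \infty,\] which is incompatible with the uniform bound of \(6\) above.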
2303.00365
OpenTPS -- Open-source treatment planning system for research in proton therapy
Introduction. Treatment planning systems (TPS) are an essential component for simulating and optimizing a radiation therapy treatment before administering it to the patient. They ensure that the tumor is well covered and the dose to the healthy tissues is minimized. However, the TPS provided by commercial companies often come with a large panel of tools, each implemented in the form of a black box, making it difficult for researchers to use them for implementing and testing new ideas. To address this issue, we have developed an open-source TPS. Approach. We have developed an open-source software platform, OpenTPS (opentps.org), to generate treatment plans for external beam radiation therapy, and in particular for proton therapy. It is designed to be a flexible and user-friendly platform (coded with the freely usable Python language) that can be used by medical physicists, radiation oncologists, and other members of the radiation therapy community to create customized treatment plans for educational and research purposes. Result. OpenTPS includes a range of tools and features that can be used to analyze patient anatomy, simulate the delivery of the radiation beam, and optimize the treatment plan to achieve the desired dose distribution. It can be used to create treatment plans for a variety of cancer types and was designed to be extended to other treatment modalities. Significance. A new open-source treatment planning system has been built for research in proton therapy. Its flexibility allows an easy integration of new techniques and customization of treatment plans. It is freely available for use and is regularly updated and supported by a community of users and developers who contribute to the ongoing development and improvement of the software.
S. Wuyckens, D. Dasnoy, G. Janssens, V. Hamaide, M. Huet, E. Loÿen, G. Rotsart de Hertaing, B. Macq, E. Sterpin, J. A. Lee, K. Souris, S. Deffet
2023-03-01T09:46:36Z
http://arxiv.org/abs/2303.00365v1
# OpenTPS - Open-source treatment planning system for research in proton therapy ###### Abstract _Introduction._ Treatment planning systems (TPS) are an essential component for simulating and optimizing a radiation therapy treatment before administering it to the patient. They ensure that the tumor is well covered and the dose to the healthy tissues is minimized. However, the TPS provided by commercial companies often come with a large panel of tools, each implemented in the form of a black box, making it difficult for researchers to use them for implementing and testing new ideas. To address this issue, we have developed an open-source TPS. _Approach._ We have developed an open-source software platform, OpenTPS (opentps.org), to generate treatment plans for external beam radiation therapy, and in particular for proton therapy. It is designed to be a flexible and user-friendly platform (coded with the freely usable Python language) that can be used by medical physicists, radiation oncologists, and other members of the radiation therapy community to create customized treatment plans for educational and research purposes. _Result._ OpenTPS includes a range of tools and features that can be used to analyze patient anatomy, simulate the delivery of the radiation beam, and optimize the treatment plan to achieve the desired dose distribution. It can be used to create treatment plans for a variety of cancer types and was designed to be extended to other treatment modalities. _Significance._ A new open-source treatment planning system has been built for research in proton therapy. Its flexibility allows an easy integration of new techniques and customization of treatment plans. It is freely available for use and is regularly updated and supported by a community of users and developers who contribute to the ongoing development and improvement of the software. Proton Therapy Treatment Planning TPS Software ## 1 Introduction Thanks to the ballistic properties of heavy charged particles, proton therapy has the potential to better confine the dose to the target volume and spare healthy tissues more than conventional radiation therapy. In practice, however, the full potential of proton therapy is limited by its maturity gap with conventional radiation modalities. For example, the first 3D intensity modulated proton therapy (IMPT) treatment was delivered in 1999, at a time when volumetric arc therapy (VMAT) treatments were becoming popular in photon therapy [1]. Similarly, while VMAT was first used on a patient in 1994 [2], the first prototype of intensity modulated proton arc therapy was only tested in 2019 [3]. Despite challenges, proton therapy is widely acknowledged as a beneficial option for some patients undergoing radiation treatment. However, the full potential of proton therapy systems cannot yet be fully realized due to limitations in the software, particularly in the treatment planning aspect. Over the past three decades, proton therapy has benefited from a growing body of research aimed at realizing its full potential. This has led to the development of many computational tools, such as REGGUI [4], MCsquare [5], matRad, and MIROpt [6], among others. These tools are widely available and have helped accelerate research in proton therapy. In the early 2020s, we embarked on a project to develop our own common research platform for the various radiation therapy projects in the lab. 
Using the widely-available and user-friendly Python programming language, the platform was designed to serve as a flexible framework for both personal and collective research efforts. Additionally, the platform was developed to be compatible with the increasing significance of artificial intelligence and to facilitate the distribution of computational tasks across multiple computers. We propose a treatment planning system (TPS) that is specifically designed for research, called OpenTPS. Its open-source code is publicly available under the Apache-2.0 license. The code is primarily written in Python and designed to be easily extended and customized. The software is organized into two packages: the Core package, which is a library that defines data classes, data processing methods, and IO methods, and the GUI package, which offers a graphical interface for viewing and interacting with the data. The Core package of OpenTPS is the main software library and includes a range of features that are essential for proton therapy treatment planning. Some of the key features available in the Core package of OpenTPS include: 1. Data management and processing: importing, exporting and processing patient data. 2. Dose computation: computing the dose from proton therapy treatment plans using fast Monte Carlo simulations with MCsquare. 3. Treatment planning: creating treatment plans for individual patients, including the ability to define beam angles, energy levels, spot spacing, etc. 4. Treatment evaluation: performing quality assurance checks on treatment plans, including the ability to evaluate robustness, compare doses, and assess the conformity of the treatment plan to the patient's anatomy. 5. Data augmentation: creating artificial variations of patient data for robustness evaluation and for machine learning applications. In the following sections, we detail these tools and give an example of their use for the development of new planning methods through an introduction to the 4D image support suite which is included in OpenTPS. Dose quality assurance tools are discussed in section 3.4. ## 2 GUI package OpenTPS comes with an optional graphical user interface (GUI) for visualization of 3D images, dose maps, contours and dynamic images. Several other interactive tools are available: profile plots, visualization of DVH, segmentation, etc. The implementation is based on the PyQt, VTK, and PyQtGraph libraries, which are widely recognized for their capabilities in data visualization and analysis in scientific and medical applications. Figure 1 shows a screenshot of the GUI loaded with a head-and-neck patient geometry and the optimized treatment dose associated with the case. In addition, graphical modules have been implemented to design and optimize a treatment plan. The entire GUI is extensible and modifiable by the user. Examples of extensions can be found in the OpenTPS documentation. ## 3 Core package ### Data management and processing The current version of OpenTPS supports the import of patient data in DICOM format (CT, RT structures, RT dose, RT ion plan and vector field), MHD format (3D images), and an OpenTPS-specific serialized object format. DICOM export is currently supported for treatment plans and doses. Any image can also be exported in MHD format. The core library of OpenTPS includes a range of common image processing methods: resampling, coordinate system conversion, affine transformations, and filtering; a brief usage sketch is given below. 
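As an illustration of the intended scripting style, here is a minimal usage sketch of the data import and processing tools just listed. The module paths and function names below are assumptions made for illustration, not OpenTPS's documented API; only the capabilities (DICOM import, resampling, MHD export) are taken from the text.

```python
# Hypothetical sketch of Core-package data import and resampling.
# All module paths and function names are assumed for illustration.
from opentps.core.io import dicom_io, mhd_io       # assumed modules
from opentps.core.processing import resampler      # assumed module

ct = dicom_io.readDicomCT("patient/dicom/ct")                # assumed reader
struct = dicom_io.readDicomStruct("patient/dicom/rtstruct")  # assumed reader

# Resample the CT to a 2 mm isotropic grid before further processing.
ct_2mm = resampler.resampleImage3D(ct, spacing=(2.0, 2.0, 2.0))  # assumed signature
mhd_io.exportImageMHD(ct_2mm, "ct_2mm.mhd")                  # assumed exporter
```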
Additionally, it has segmentation options such as automatic segmentation for body, lung and bones in CT images. Rigid and deformable image registration methods are available, including the diffeomorphic Morphons algorithm [4]. This algorithm computes 3D to 3D deformations that are consistent with the anatomy of the organs. The deformations are represented by a velocity field and/or the more usual displacement field. While algebraic operations - such as summing and scaling fields - can be performed on velocity fields, they cannot be applied directly on displacement fields. On the other hand, when a deformation is applied on 3D data, the displacement field must be used. The displacement is automatically computed from the velocity field when the deformation is applied, using a field exponentiation operation. The Morphons registration is also used to create motion models. Deformation fields between the 4DCT phases are computed. Then, with the deformations from all 4DCT phases, the mid-position (MidP) image [7] can be computed, as illustrated in Figure 2. A motion model, called Dynamic3DModel in OpenTPS, is composed of the MidP image and the various deformation fields from the MidP image to each of the 4DCT phases. Digitally reconstructed radiographs (DRR) can also be computed using a 3DCT to simulate in-room x-ray devices. ### Dose computation OpenTPS was designed with the flexibility to support multiple dose engines as a way to provide users with a range of options for proton therapy treatment planning. It also allows users to take advantage of the strengths of different dose engines, such as the accuracy and precision of Monte Carlo-based engines, or the speed and efficiency of other types of engines. A stable release of the fast Monte Carlo engine named MCsquare is included in OpenTPS [5]. It is available via the class named McsquareDoseCalculator and can be used to compute dose distributions, dose influence matrices, and LET distributions. Input parameters such as the number of primaries, the beam model and the CT calibration are required to compute a dose based on a CT and a treatment plan. From those inputs, McsquareDoseCalculator generates the files required by MCsquare. Then, MCsquare simulates the interaction of each particle in batches until it reaches a user-defined minimal statistical uncertainty or has simulated all the primaries. MCsquare generates the output dose in MHD format in the OpenTPS workspace, a folder located by default in the home directory. Figure 3 summarizes the dose calculation workflow in OpenTPS. Figure 1: Main window of the OpenTPS GUI. Figure 3: Example of the dose workflow in OpenTPS. Figure 2: The tumor motion is schematically represented by the hysteresis formed by the tumor position (gray circles) at each phase of the 4DCT (a). Deformation fields (blue vectors) are generated using registration between the first phase (P1) and all other phases (b). Then, all phases can be deformed to the average position, called the Mid-Position, where the MidP-CT image is then created by taking the median in each voxel of all deformed phases (c). ### Treatment planning The treatment planning module of OpenTPS provides a set of tools and functions that are used to create and optimize a treatment plan based on several inputs. In general, the method followed depends on the treatment delivery modality chosen. As OpenTPS was initially built for research in proton therapy, the implementation of the treatment planning module conforms to IMPT delivery. 
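Before turning to plan optimization, the dose-computation workflow above can be sketched in a few lines of scripting. McsquareDoseCalculator is the class named in the text; the import path, attribute names and method signatures below are assumptions.

```python
# Hypothetical sketch of the MCsquare dose-computation workflow (Figure 3).
from opentps.core.processing.doseCalculation import McsquareDoseCalculator  # assumed path

mc2 = McsquareDoseCalculator()
mc2.beamModel = beam_model          # beam model, loaded beforehand (assumed attribute)
mc2.ctCalibration = ct_calibration  # CT calibration, loaded beforehand (assumed attribute)
mc2.nbPrimaries = int(1e7)          # number of simulated primaries (assumed attribute)

dose = mc2.computeDose(ct, plan)          # dose from a CT and an RT ion plan (assumed method)
beamlets = mc2.computeBeamlets(ct, plan)  # dose-influence matrix for planning (assumed method)
```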
In pencil beam scanning (PBS), the dose is painted using scanning magnets that steer each pencil beam to a specific location in the target volume, namely a _spot_. Dividing the tumor into iso-energy layers, the treatment is therefore delivered by irradiating the volume spot-by-spot and layer-by-layer. This process is repeated for each irradiation direction. The intensity of each pencil beam being modulated, this treatment mode results in a very high dose conformity and a large flexibility in the planning, allowing for better organ-at-risk sparing compared to other delivery techniques such as scattering [8]. The modulation of the spot intensity is the aim of plan optimization. #### 3.3.1 Plan optimization The treatment planning module operates in several steps. The first stage consists in creating a PlanDesign object. Loading a CTImage and the RTStruct associated with it, the planner defines the gantry and couch angles for the given disease site. Based on this information, spots are placed in beam's-eye view on a hexagonal grid through a ray-tracing procedure that computes min-max water-equivalent path lengths (translated into energies) covering the target volume; the spots are then spaced by pre-defined spot and layer spacings for each beam direction. Internally, an RTPlan object is initialized with the angles, energy layers and spot positions. The spot intensities are left aside for the moment and initialized to one. Before proceeding to the optimization, the dose distribution of each spot onto the patient's voxelized geometry, namely a _beamlet_, must be computed. By default, the Monte Carlo dose engine named MCsquare is used to simulate, for each spot, the physics of proton interactions with matter and score the dose deposited in each voxel. Nevertheless, the framework is flexible enough to implement any other dose calculation engine. The output of this step is a beamlet matrix, named \(\mathcal{A}\), which is stored in memory. To conserve memory space and enhance computational efficiency during optimization, a sparse matrix format is utilized. The number of rows in \(\mathcal{A}\) corresponds to the number of voxels in the patient (resampled or not) and the number of columns corresponds to the number of spots in the plan. Finally, the clinical objectives are added to the PlanDesign object. The region of interest (ROI), the metric (min, max or mean), the dose limit value and the objective weight must be specified for each FidObjective. An optional parameter can be set to make the objective robust against uncertainties. When the plan and the associated beamlets are created, the second stage of treatment planning can take place. Knowing that the total dose delivered to the patient can be computed as the sum of all beamlets weighted by the intensity of the corresponding spots, the remaining task is to optimize the weights \(\mathbf{x}\) so that the total dose \(\mathbf{d}\) meets the optimization objectives. Figure 4: Objectives panel of the OpenTPS GUI. The optimization problem is stated as follows: \[\begin{split}\min_{\mathbf{x}}\quad&f(\mathbf{d})\\ \text{s.t.}\quad&\mathbf{d}=\mathcal{A}\mathbf{x}\\ &\mathbf{x}\geq\mathbf{0}\end{split}\tag{1}\] The total objective function \(f(\mathbf{d})\) is defined as a weighted sum of asymmetric quadratic objectives, with squared ramps used as surrogates for constraints on a minimum, maximum or mean dose. 
The total objective function has therefore usually four terms corresponding to the targets (\(T\)) and the OARs (\(O\)), which is written as follows: \[f(\mathbf{d})=\sum_{t\in T}\left(\frac{1}{|v_{t}|}\sum_{i\in v_{t}}\left(w_{t}^{+}\cdot(d_{i}-p_{t}^{max})_{+}^{2}+w_{t}^{-}\cdot(d_{i}-p_{t}^{min})_{-}^{2}\right)\right)+\sum_{o\in O}\left(\frac{w_{o}^{+}}{|v_{o}|}\sum_{i\in v_{o}}(d_{i}-p_{o}^{max})_{+}^{2}\right)+\sum_{o\in O}\left(w_{o}(\overline{\mathbf{d}}-p_{o}^{maxmean})_{+}^{2}\right)\tag{2}\] where \(p\) is the prescribed dose and the notations \((\cdot)_{+}\) and \(\overline{\cdot}\) represent \(\max\{\cdot,0\}\) and \(\frac{1}{|v|}\sum_{i\in v}d_{i}\), respectively. OpenTPS comes with several built-in optimizers that solve the above problem. It is also designed so that other optimizers may be easily implemented. A variety of iterative solvers are provided: quasi-Newton methods from Scipy, an in-house implementation of the L-BFGS algorithm [9], and gradient-based methods such as the classical gradient descent and the FISTA algorithm [10] from PyUNLocBoX [11]. FISTA solves non-differentiable convex optimization problems by proximal splitting. For example, it can be of particular interest for ArcPT treatment plan optimization, where one wants to maximize the spot sparsity by adding an \(l_{1}\)-norm penalty to the total objective function. Linear programming [12] methods are also available and can be used to solve the optimization problem. Based on the list of objectives in the plan design, a model is built with linear equations instead of the quadratic version defined in Eq. 2. The model and the solver are built on the Gurobipy package [13], a Python interface to the powerful Gurobi solver, which requires a specific license. Gurobi includes a variety of algorithms to solve linear programming problems, including the simplex algorithm [14], the interior point method [15], and the concurrent optimizer. Continual research is being conducted to investigate new methods, such as the beamlet-free algorithm. This innovative approach simulates the dose in proton batches of randomly selected spots and evaluates their impact on the objective function in each iteration. Using an estimated gradient, the spot weights are then updated, leading to a new spot probability distribution from which the next spot will be sampled. This approach holds potential for further improving the treatment planning process in terms of time and memory requirements. The convergence rate of each of these algorithms may depend on the specific characteristics of the optimization problem. A summary of available solvers and their capabilities is provided in Table 1. The "order" refers to the number of derivatives used in the optimization algorithm. First-order methods only use the first derivative (i.e., the gradient), while second-order methods use the first and second derivatives (i.e., the Hessian matrix). When the algorithm is selected and the plan is provided, an IMPTPlanOptimizer object is created. Additional parameters can be provided, such as the desired stopping criterion for the optimization. The method optimize will start the optimization using the solver supplied. The result holds, among other things, the optimized spot weights and the optimized dose computed from the beamlet matrix. Figure 5 summarizes the different steps in the optimization workflow in OpenTPS. The structure of the plan optimization module is divided into three main sub-modules, described after the following sketch. 
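To fix ideas, the objective of Eq. 2 can be written in a few lines of NumPy on top of the sparse beamlet matrix \(\mathcal{A}\). The sketch below assumes a generic data layout (a list of dictionaries describing the objectives) rather than OpenTPS's actual classes; in OpenTPS itself, these per-objective parameters are specified through FidObjective objects.

```python
import numpy as np
from scipy import sparse

def total_objective(x, A, objectives):
    """Weighted sum of asymmetric quadratic objectives, as in Eq. (2).

    x          : spot weights, shape (n_spots,)
    A          : (sparse) dose-influence matrix, shape (n_voxels, n_spots)
    objectives : dicts with 'voxels' (ROI voxel indices), 'metric'
                 ('min' | 'max' | 'mean'), 'limit' (dose p) and 'weight' (w)
    """
    d = A @ x  # total dose = beamlets weighted by the spot intensities
    f = 0.0
    for obj in objectives:
        d_roi = d[obj['voxels']]
        p, w = obj['limit'], obj['weight']
        if obj['metric'] == 'max':      # squared ramp above the limit
            f += w * np.mean(np.maximum(d_roi - p, 0.0) ** 2)
        elif obj['metric'] == 'min':    # squared ramp below the limit
            f += w * np.mean(np.minimum(d_roi - p, 0.0) ** 2)
        elif obj['metric'] == 'mean':   # squared ramp on the mean dose
            f += w * max(d_roi.mean() - p, 0.0) ** 2
    return f

# Toy example: 3 voxels, 2 spots, one maximum-dose objective on voxels 0 and 1.
A = sparse.csr_matrix([[1.0, 0.2], [0.5, 0.8], [0.1, 1.0]])
objectives = [{'voxels': np.array([0, 1]), 'metric': 'max', 'limit': 1.0, 'weight': 1.0}]
print(total_objective(np.ones(2), A, objectives))  # 0.065
```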
Firstly, the optimization engines described above are located in the solvers module. Secondly, to compose the objective function, several function classes such as the classical dose objectives and more specialized objectives (norms, projections, etc.) are grouped in the objectives module. The user can also define a custom function for optimization by implementing an evaluation method and either a proximal operator or a gradient method. Thirdly, acceleration schemes are included in the acceleration module, such as backtracking line search (typically used for gradient descent or Newton's methods), backtracking acceleration based on a quadratic approximation of the objective, and FISTA acceleration for forward-backward solvers. The GUI package of OpenTPS provides tools to analyze the resulting dose distribution. The dose can directly be overlaid using a color map on the CT image and contours to examine its homogeneity and conformity. Dose-volume histograms can also be generated to inspect the quality of the plan. Dose line profiles can be obtained by manually drawing on the image in the viewer. Finally, DVH metrics or important indices (conformity [16] and homogeneity indices) can be extracted from the DVH. Figure 6 shows an example of an optimization result display for a base-of-skull brain tumor scripted in the core library. It includes a plot of the convergence of the objective function evaluation along the iterations. #### 3.3.2 Robust optimization It is well-known that the PTV margin recipe [17] used in conventional photon treatments does not apply well in IMPT due to the presence of proton range uncertainties [18, 19, 20]. The alternative approach is to make use of robust optimization [21, 22]. In the robust scheme, the dose gradients per field are optimized so that they combine adequately even in the presence of range uncertainties. Specific combinations of uncertainties are commonly referred to as "scenarios". They include patient motion and changes in anatomy resulting in variations in the range of the proton beam. The main idea behind robust optimization is the evaluation of the objective function in a discrete set of scenarios. Multiple robust optimization approaches have been proposed depending on the definition of the objective function and on the scenarios [23, 19, 24, 25].

Table 1: Summary of available optimizers in OpenTPS. *Scipy and in-house versions available. **Gurobi optimizer.

| Solver | Order | Solving function | Convergence type | Convergence rate |
| --- | --- | --- | --- | --- |
| Gradient descent | 1 | Smooth convex | Linear | \(O(1/k)\), \(k\) = # of iterations |
| FISTA | 1 | Smooth convex; proximable non-smooth | Linear | \(O(1/k)\) for smooth convex functions; \(O(1/k^{2})\) for proximable non-smooth functions |
| Stochastic gradient descent ("Beamlet-free") | 1 | Smooth convex | / | Depends on the problem, parameters and learning rate |
| BFGS* | 2 | Smooth convex | Superlinear | \(O(1/k^{2})\), \(k\) = # of iterations |
| L-BFGS-B (Scipy only) | 2 | Smooth convex with bound constraints | Superlinear | \(O(1/k^{2})\), \(k\) = # of iterations |
| LP** (simplex) | / | Linear objective functions, linear equality and inequality constraints | / | \(O(2^{n})\) worst case, \(n\) = # of variables |
| LP** (interior point) | / | Linear objective functions, linear equality and inequality constraints | / | Polynomial |
| LP** (concurrent) | / | Linear objective functions, linear equality and inequality constraints | / | Depends on the problem |

Figure 5: Optimization workflow in OpenTPS.
In OpenTPS, scenario-wise worst-case robust optimization is implemented. It minimizes the objective function in the worst scenario, within a certain confidence interval. The robust optimization problem solved in OpenTPS can now be summarized as \[\begin{split}\min_{\mathbf{x}}\max_{s\in\mathcal{S}}\ &f(\mathbf{d},s)\\ \text{s.t.}\ &\mathbf{d}=\mathcal{A}\mathbf{x}\\ &\mathbf{x}\geq\mathbf{0}\end{split}\tag{3}\] where the max-operator applies to the scenarios \(s\in\mathcal{S}\). The current implementation allows the selection of scenarios in the error space. The planner specifies the setup errors (systematic and random) in millimeters as well as the range uncertainty in percent. A total of 21 scenarios will be simulated (7 setup × 3 range). In terms of workflow, after the robustness settings are entered, the TPS simulates the beamlets for each scenario. Next, the planner must specify which objectives in the list are robust for the optimization. With that information, the optimizer solves the problem by updating the weights using the objective function gradient calculated from the worst-case scenario at each iteration. The robust optimized treatment plan produces the best possible outcome according to the worst-case scenario and the robustness parameter settings. ### Treatment evaluation Robust evaluation involves the examination of an optimized treatment plan under a variety of different scenarios that could occur during treatment delivery. This approach may be used to ensure that the treatment plan is effective and reliable under a range of possible conditions. #### 3.4.1 Robustness to setup and range uncertainties In OpenTPS, a dose-volume evaluation is performed to assess the quality of the treatment plan. Two strategies are available depending on the scenario definition. The first method, which is based on good practice rules and is commonly used in clinical routine, involves verifying the dose in the _error space_ for a small number of extreme scenarios [26]. In OpenTPS, the dose is re-computed with MCsquare for 81 scenarios resulting from a combination of the setup and range scenarios previously described. The second strategy generates a confidence interval in the so-called _dosimetric space_. It refers to all possible realizations of the treatment plan, or the different dose distributions that could be achieved. Described in [27], the scenarios are randomly sampled by MCsquare. The analysis in the dosimetric space reports the percentage of evaluation scenarios that meet a certain criterion, such as minimum dose coverage for a target percentage (e.g., 95% coverage for 90% of scenarios). In OpenTPS, to proceed to robust evaluation, a Robustness object (associated with a plan) must be created and configured for the intended goal. An MCsquare simulation will compute the robust scenarios based on this information. The scenarios can be analyzed following the two strategies previously described using the member methods analyzeErrorSpace or analyzeDosimetricSpace. The DVHs of the nominal scenario and DVH bands can be recomputed for a list of ROIs and plotted for better visualization of the robustness of the generated treatment plan. Figure 6: Robust optimized result obtained with scripting within the Core module. Left: dose distribution. Middle: DVHs. Right: convergence plot. 
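The max-operator of Eq. 3 reduces to a scan over the scenario objectives. Below is a minimal sketch, reusing total_objective from the earlier sketch and assuming the per-scenario beamlet matrices have been precomputed by MCsquare:

```python
import numpy as np

def worst_case_objective(x, scenario_beamlets, objectives):
    """Scenario-wise worst case of Eq. (3): evaluate the objective in every
    error scenario (e.g., the 21 optimization scenarios) and keep the worst.
    Reuses total_objective() from the earlier sketch."""
    values = [total_objective(x, A_s, objectives) for A_s in scenario_beamlets]
    worst = int(np.argmax(values))
    # At each iteration, the optimizer takes the gradient in scenario `worst`.
    return values[worst], worst
```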
#### 3.4.2 Robustness to intra-fraction motion (4D simulation) Two types of dose deterioration arise while treating moving targets in proton therapy using pencil beam scanning: **Motion-induced range variations**: The tumor motion can lead to a density variation in the beam path, thereby shifting the expected proton range and deteriorating the overall dose distribution. **Interplay effect**: The tumor and scanning beam motions happen at the same time scale, leading to an interplay effect that can produce hot and cold spots in the target. To analyze those deteriorating effects, one can simulate the treatment delivery via Monte Carlo (MC) simulations on the 4DCT. In OpenTPS, those MC simulations are performed with MCsquare. **4D dose simulation**: To study the impact of the proton range variation, one can perform a 4D dose (4DD) simulation, which consists of independently simulating the dose delivery on each motion phase of the 4DCT and computing the average of those doses. Thus, each motion phase is considered static and contributes equally to the total dose. Mathematically, the 4D dose is given by \[\mathbf{F}_{i}=\mathrm{DIR}(\mathbf{CT}_{i},\mathbf{CT}_{\text{MidP}})\text{ for all }i=1,...,P \tag{4}\] \[\mathbf{d}_{tot}=\frac{1}{P}\sum_{i=1}^{P}\mathbf{d}_{i}^{\text{stat}}(\mathbf{v}+\mathbf{F}_{i}(\mathbf{v}))\text{ for all }\mathbf{v}\in\mathbb{R}^{3} \tag{5}\] where \(\mathbf{v}\) is a voxel of the image, \(P\) is the number of motion phases, \(\mathbf{d}_{i}^{\text{stat}}\) is the dose resulting from the treatment plan simulation on the 3DCT of motion phase \(i\), \(\mathrm{DIR}(\cdot)\) is the deformable image registration operator, \(\mathbf{F}_{i}\) is the deformation field resulting from the registration, \(\mathbf{CT}_{i}\) is planning CT image \(i\) and \(\mathbf{CT}_{\text{MidP}}\) is the mid-position CT. The resulting dose makes it possible to analyze the impact of the proton range variation in the presence of motion while isolating it from the interplay effect. **4D dynamic dose simulation**: To study the impact of the interplay effect, one can perform a 4D dynamic dose (4DDD) simulation, which consists of simulating the delivery of beam spots one by one, via a time delivery model, on a sequence of 3DCTs. The sequence of 3DCTs is generally a looped version of the planning 4DCT, but it could be a richer set of synthetic CTs representing multiple breathing motions. The time delivery model is a model that associates a timing with the delivery of each spot in the treatment plan. This model is generally machine-specific and gives an expected delivery timing depending on several parameters such as the delay between the delivery of two adjacent spots, the time required for switching energy, etc. Knowing the delivery timing of each spot in the treatment plan, we can create a breathing model and know exactly in which motion phase each spot will be delivered. The breathing model has two parameters in the simplest case: the breathing period and the number of motion phases of the 4DCT. Mathematically, the total dose is given by \[\mathbf{F}_{i}=\mathrm{DIR}(\mathbf{CT}_{i},\mathbf{CT}_{\text{MidP}})\text{ for all }i=1,...,P \tag{6}\] \[\mathbf{d}_{tot}=\sum_{i=1}^{P}\mathbf{d}_{i}^{\text{dyn}}(\mathbf{v}+\mathbf{F}_{i}(\mathbf{v}))\text{ for all }\mathbf{v}\in\mathbb{R}^{3} \tag{7}\] where \(\mathbf{d}_{i}^{\text{dyn}}\) is the dose associated with the motion phase \(i\) that includes all spots predicted to be delivered in that motion phase. 
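A minimal sketch of the dose-accumulation step shared by Eqs. (5) and (7): each per-phase dose is sampled at the deformed voxel positions \(\mathbf{v}+\mathbf{F}_{i}(\mathbf{v})\) and the contributions are combined (averaged in the 4DD case). The displacement fields are assumed here to be expressed in voxel units.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def accumulate_4d_dose(phase_doses, deformation_fields):
    """4D dose of Eq. (5): warp each per-phase static dose to the MidP
    geometry with its deformation field, then average over the phases.

    phase_doses        : list of P arrays of shape (nx, ny, nz)
    deformation_fields : list of P arrays of shape (3, nx, ny, nz),
                         displacements in voxel units (MidP -> phase i)
    """
    grid = np.indices(phase_doses[0].shape).astype(float)  # voxel coordinates v
    total = np.zeros_like(phase_doses[0], dtype=float)
    for d_i, F_i in zip(phase_doses, deformation_fields):
        coords = grid + F_i                                # v + F_i(v)
        total += map_coordinates(d_i, coords, order=1, mode='nearest')
    return total / len(phase_doses)
```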
Repeating the simulation by starting the delivery at different motion phases allows for quantifying the uncertainty of the final dose distribution. Note that we assume each motion phase is a static image, thus neglecting motion within a phase. ### Data augmentation Moving targets, such as lung or liver tumors, come with an additional challenge: intra-fraction motion such as breathing, and usually larger inter-fraction motion than static targets, such as baseline shifts. To simulate treatment delivery on these kinds of tumors, tools have been added to OpenTPS to add intra- and inter-fraction motions, as well as to create training and test data based on motion models (see section 3.1) to evaluate innovative methods or AI models. **Inter-fraction motion simulation**: Four types of inter-fraction motion can be artificially generated. **Baseline shifts**: Artificial baseline shifts can be applied on 3DCT images or Dynamic3DModel objects based on the ROIMask of the target or organ to be translated. The baseline shift is constructed in the form of a diffeomorphic displacement vector field modeling a local shift of the ROIMask while preserving surrounding bony structures. **Organ/target shrinks**: Organ or target shrinks can be applied on Image3D or Dynamic3DModel objects, using a ROIMask for the organ that must be shrunk. First, the ROIMask is dilated by 1 voxel and eroded by a number of voxels depending on the user input in mm. From these two new masks, the dilated and eroded bands corresponding to the difference with the original mask are computed. Then, one by one, every voxel of the eroded band is replaced by a random sample picked from a normal distribution \(N(\mu,\sigma^{2})\) where \(\mu\) is the average value of the 10 voxels from the dilated band closest to the voxel to fill. **Rotations**: Rotations around the 3 main axes can be applied to Image3D or Dynamic3DModel objects. The rotation is applied around an axis at the center of the image and not around the image origin. Note that in case of multiple rotations, the order is important, as rotations are not commutative. **Translations**: Translations along the 3 main axes can be applied to Image3D or Dynamic3DModel objects. An example of synthetic inter-fraction modification on a 3DCT image can be seen in Figure 7. **Intra-fraction breathing simulation**: Using a motion model, a sequence of images following a specific breathing pattern can be created. First, synthetic 1D or 3D breathing signals can be generated with different parameters to change the breathing period, the noise, or to add irregularities such as apneas (Figure 8). Then, one or multiple points can be selected in the MidP image of a motion model to apply these breathing signals in a specified direction. If multiple points and signals are used (for example to partially decorrelate the motion of the tumor and the motion of the skin), they are combined using weight maps and linear interpolations [28] to create 3D image sequences containing any type of breathing patterns. These image sequences can be used to evaluate treatment robustness, new treatment methods [29], or to create additional patient data to train AI methods [30]. Figure 7: Example of inter-fraction transforms applied to a 3DCT. Small translations and rotations were applied to the entire image, and additionally a baseline shift and a shrink were applied to the lung tumor. Figure 8: Example of synthetic breathing signals, regular in blue and irregular in orange. 
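For illustration, here is a toy version of the synthetic 1D breathing signals of Figure 8 (a noisy sinusoid with an optional apnea). All parameter values are illustrative, not OpenTPS defaults.

```python
import numpy as np

def breathing_signal(duration_s, period_s=4.0, amplitude_mm=10.0,
                     noise_sd=0.3, apnea=None, fs=25.0, seed=0):
    """Toy 1D breathing trace: sinusoid plus Gaussian noise; `apnea` is an
    optional (start_s, end_s) tuple during which the signal is held flat."""
    rng = np.random.default_rng(seed)
    t = np.arange(0.0, duration_s, 1.0 / fs)
    s = amplitude_mm * np.sin(2.0 * np.pi * t / period_s)
    s += rng.normal(0.0, noise_sd, size=t.size)
    if apnea is not None:
        start, end = apnea
        mask = (t >= start) & (t < end)
        if mask.any():
            s[mask] = s[mask.argmax()]  # hold the value at apnea onset
    return t, s

t, regular = breathing_signal(30.0)                        # regular pattern
t, irregular = breathing_signal(30.0, apnea=(12.0, 18.0))  # with an apnea
```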
## 4 Perspectives OpenTPS is a unique platform for research in proton therapy treatment planning, being an open-source tool written in a user-friendly programming language. This allows for both internal contributions and international collaboration to enhance the platform quickly, making it increasingly sophisticated over time. Our research group is currently focusing on several exciting areas within OpenTPS, including arc proton therapy treatment planning [31, 32] and FLASH proton therapy [33]. Depending on the needs of the users, new features might be added in the future, such as (but not limited to) other treatment or image modalities, or object tracking and position prediction in dynamic sequences. Additionally, the platform has potential for real-time treatment delivery visualization and 2D to 3D image reconstruction. As the field of proton therapy continues to evolve, OpenTPS offers a flexible and dynamic platform for exploring new possibilities and pushing the boundaries of what is possible. The information needed to get started is available on the website1 or on GitLab2. Footnote 1: [http://opentps.org/](http://opentps.org/) Footnote 2: [https://gitlab.com/openmcsquare/opentps](https://gitlab.com/openmcsquare/opentps) ## 5 Acknowledgments During the development of OpenTPS, Dr. Sylvain Deffet received funding from the Walloon Region of Belgium through technology innovation partnership no. 8341 (EPT-1 - Emerging Proton Therapy Phase 1) co-led by the MecaTech and BioWin clusters. Sophie Wuyckens and Kevin Souris are funded by the Walloon Region as part of the Arc Proton Therapy convention (Pôles MecaTech et BioWin). Damien Dasnoy, Gauthier Rotsart de Hertaing and Valentin Hamaide are supported by the Walloon Region, SPW EER Win2Wal program, project 2010149. Estelle Loÿen is a Télévie grantee of the Fonds de la Recherche Scientifique - F.N.R.S. Computational resources have been provided by the supercomputing facilities of the Université catholique de Louvain (CISM/UCL) and the Consortium des Équipements de Calcul Intensif en Fédération Wallonie-Bruxelles (CÉCI) funded by the F.R.S.-FNRS under convention 2.5020.11.
2303.17619
Gaze-based Attention Recognition for Human-Robot Collaboration
Attention (and distraction) recognition is a key factor in improving human-robot collaboration. We present an assembly scenario where a human operator and a cobot collaborate equally to piece together a gearbox. The setup provides multiple opportunities for the cobot to adapt its behavior depending on the operator's attention, which can improve the collaboration experience and reduce psychological strain. As a first step, we recognize the areas in the workspace that the human operator is paying attention to, and consequently, detect when the operator is distracted. We propose a novel deep-learning approach to develop an attention recognition model. First, we train a convolutional neural network to estimate the gaze direction using a publicly available image dataset. Then, we use transfer learning with a small dataset to map the gaze direction onto pre-defined areas of interest. Models trained using this approach performed very well in leave-one-subject-out evaluation on the small dataset. We performed an additional validation of our models using the video snippets collected from participants working as an operator in the presented assembly scenario. Although the recall for the Distracted class was lower in this case, the models performed well in recognizing the areas the operator paid attention to. To the best of our knowledge, this is the first work that validated an attention recognition model using data from a setting that mimics industrial human-robot collaboration. Our findings highlight the need for validation of attention recognition solutions in such full-fledged, non-guided scenarios.
Pooja Prajod, Matteo Lavit Nicora, Matteo Malosio, Elisabeth André
2023-03-30T11:55:38Z
http://arxiv.org/abs/2303.17619v1
# Gaze-based Attention Recognition for Human-Robot Collaboration ###### Abstract. Attention (and distraction) recognition is a key factor in improving human-robot collaboration. We present an assembly scenario where a human operator and a cobot collaborate equally to piece together a gearbox. The setup provides multiple opportunities for the cobot to adapt its behavior depending on the operator's attention, which can improve the collaboration experience and reduce psychological strain. As a first step, we recognize the areas in the workspace that the human operator is paying attention to, and consequently, detect when the operator is distracted. We propose a novel deep-learning approach to develop an attention recognition model. First, we train a convolutional neural network to estimate the gaze direction using a publicly available image dataset. Then, we use transfer learning with a small dataset to map the gaze direction onto pre-defined areas of interest. Models trained using this approach performed very well in leave-one-subject-out evaluation on the small dataset. We performed an additional validation of our models using the video snippets collected from participants working as an operator in the presented assembly scenario. Although the recall for the Distracted class was lower in this case, the models performed well in recognizing the areas the operator paid attention to. To the best of our knowledge, this is the first work that validated an attention recognition model using data from a setting that mimics industrial human-robot collaboration. Our findings highlight the need for validation of attention recognition solutions in such full-fledged, non-guided scenarios. visual attention, gaze estimation, neural networks, human-robot interaction, industry 4.0 Gaze is an important social cue that has been demonstrated to facilitate collaboration (Gaze et al., 2017; Gaze et al., 2018; Gaze et al., 2019). It can be leveraged for resolving ambiguities, joint attention, etc. (Gaze et al., 2017). In this work, we focus on gaze direction, which communicates the current area of interest to the collaborating partners (Gaze et al., 2018). Previous studies (Gaze et al., 2018; Gaze et al., 2019; Gaze et al., 2019; Gaze et al., 2019) primarily focused on identifying an object the user is looking at. Inspired by the field of driver attention detection (Gaze et al., 2018; Gaze et al., 2019; Gaze et al., 2019), we use gaze direction for recognizing attention and distraction of the operators. This is a first step towards adapting cobot behavior (e.g. speed, specific action sequence) depending on the operator's attention, which is a largely unexplored topic. To recognize the attention of the operator, we first estimate their gaze direction using a frontal camera and then map the gaze to a pre-defined area of interest. We propose a novel deep-learning approach to realize this. First, we train a gaze estimation model using a large publicly available dataset. Then, we use transfer learning techniques to map the gaze directions to areas of interest. Previous studies (Gaze et al., 2019; Gaze et al., 2019; Gaze et al., 2019) have proposed camera-based solutions to identify the area/object that has the user's attention. A gap in the validation of these systems stems from the guided gaze behaviors in the setup. The setup typically involves a stationary participant (sitting or standing) who is asked to gaze at a labeled area. This heavily reduces variations in viewing angle, head poses, etc. 
Even in the driver attention use case, where the user is expected to be seated, Ahlstrom et al. (Gaze et al., 2018) point out that achieving "true distraction" in an artificial setting is difficult. In this work, we evaluate our attention recognition model using videos from a human-robot collaboration task that resembles an industrial assembly. Our observations highlight the importance of testing such models in a full-fledged setup rather than a well-controlled setting. To the best of our knowledge, this is the first work to evaluate attention recognition in a setup that imitates an industrial human-robot collaboration task. ## 2. Related Work Gaze-based attention recognition has been studied thoroughly for various application domains such as driver assistance and education. However, in human-robot collaboration (HRC), gaze is primarily used to detect the object the human partner is looking at. There is a lack of literature on using gaze to recognize the attention of the human partner or detect the activity they are carrying out. Consequently, the topic of how the robot should respond is also largely unexplored. In this section, we briefly discuss a few of the works which guided our research. We focus mainly on works with HRC use cases, although their goal is identifying the gaze object. We also discuss a few works from the driver attention detection domain since image/video-based attention recognition is prevalent in this field. Many works have demonstrated the advantages of using gaze information in HRC scenarios. Mehlmann et al. (Gaze et al., 2017) demonstrated that gaze can be used during collaboration to resolve ambiguities and to establish joint attention. They designed a collaborative puzzle game involving a social robot (Nao) and a human partner. When the robot followed the user's gaze, the interaction was more efficient and natural. Huang and Mutlu (Huang and Mutlu, 2018) demonstrated that human-robot collaboration can be improved when the robot can perceive the gaze object of the user. They designed a scenario where the user had to choose four items from a menu, which the robotic arm then proceeded to pick. Monitoring the user's gaze enabled the robot to anticipate their choices, and thus pick the object faster. Shi et al. (Shi et al., 2019) conducted a similar study, involving the robotic arm picking up an object chosen by the user. While the previous study involved verbalizing the user's choice, Shi et al. used gaze direction and fixation to determine the user's choice. Moniri et al. (Moniri et al., 2018) developed a prototype involving a robot, a local user, and a virtual reality user. The information about the object that a user is gazing at is provided in both virtual reality and the real setting. The prototype is an interesting Industry 4.0 scenario, but there was no user study. The above works use eye-tracking devices to recognize the gaze direction of the user. Although such devices can accurately measure the gaze direction, they are often intrusive and are not ideal for real-time applications (Vora et al., 2017; Vora et al., 2017). This has motivated increasing research interest in gaze estimation using camera images/videos. There are mainly two ways to recognize gaze direction using a camera image - using eye information and using head pose. For example, Lemaignan et al. (Lemaignan et al., 2017) used head pose in a robot-child collaborative learning scenario to identify the objects that the child paid attention to. Palinko et al. 
(Palinko et al., 2017) compared the two methods of predicting gaze direction based on camera input. They designed an experiment where the participants had to assemble a tower using four blocks. The gaze behavior of the participant was used to select the blocks. Eye-based gaze estimation led to more efficient collaboration. Christiernin (Christiernin, 2017) defines three levels of collaboration - Idle Robot (Level 1), Human as Guide (Level 2), and Cooperation/Full Interaction (Level 3). All the aforementioned works fall into Level 1 or 2, which makes the collaboration heavily imbalanced, i.e., most of the task is carried out by only one of the partners. This limits certain aspects of collaboration (e.g. waiting for the other partner to finish) and the associated gaze behavior. In this work, we use an HRC setup that achieves Level 3 collaboration. Driver distraction detection has gained a lot of traction over the years. The driver's attention or distraction is determined based on gaze zones in the car. Studies usually use datasets recorded using a frontal camera and involve participants looking at pre-defined gaze zones. Although the setting and challenges of driver distraction are different, the techniques used to solve this problem could be leveraged for attention recognition in HRC. Vora et al. (Vora et al., 2017) collected an image dataset with gaze zone labels for detecting driver distraction. Using this data, they trained two convolutional neural networks. They achieved good accuracy but, like typical deep learning networks, they required a large amount of data. To circumvent this, Tayibnapis et al. (Tayibnapis et al., 2017) adopted a transfer learning approach. They connected a pre-trained neural network to a support vector machine for classifying face images into gaze zones. The neural network was trained on a dataset of users' gaze toward different types of Apple phones and tablets. They trained the SVM part of their system using a small dataset that they collected. Although they achieved very high results on k-fold cross-validation, their model struggled with data from unseen users. ## 3. Assembly Task In order to train and test the gaze-based attention recognition model proposed in this study, the collaborative assembly scenario prepared for the study described in (Vora et al., 2017) is exploited. The product to be assembled is the 3D-printed planetary gearbox represented in the top left corner of Figure 1. Half of the components are put together by the cobot, while the remaining parts are assembled by the human operator. The two sub-assemblies are then joined collaboratively in order to obtain the finished product. As visualized in Figure 1, a Fanuc CRX10iA/L collaborative robot is mounted on a structure specifically built to guarantee a fixed relative position with respect to two tables arranged in an L-shaped formation. The table on the right is equipped with all the components required for the sub-assembly assigned to the cobot, together with a Pickit3D camera, used for the detection of parts. The table on the left is where the operator performs most of the activities and also where the collaborative session of the task takes place. The whole system is driven by a control architecture integrating ROS (Rogog et al., 2018) for controlling both the detection camera and the cobot. A simple pilot of the setup and of the assembly task is used to get a rough estimation of the duration of a single production cycle and of the time synchronization with the cobot. 
Generally, the complete cycle takes around 60-70 seconds, with the operator finishing first and waiting for the cobot for around 10-15 seconds before tackling the collaborative assembly of the two sub-assemblies. The assembly is shared equally between the cobot and the human operator, making this setup a level 3 (highest level) collaboration according to Christiernin's categorization (Christiernin, 2017). With the advent of Industry 4.0, this level of collaboration is becoming more common in the manufacturing process. Thus, this setup allows us to explore how the collaborative experience of the operator can be improved. Similar to an industrial assembly scenario, the interaction is bound to the defined task and production protocol (Bradley et al., 2017). In other words, we can adapt how the cobot does the task (e.g. adapting speed) but the task load remains more or less the same. ## 4. Gaze-based Attention Recognition While piloting our setup, we observed that there are two key areas that the operator pays attention to for an extended period of time - the cobot and the work table. We also envision the operator getting distracted when working for long hours. So, we define three classes of attention based on the gaze of the operator - attention on the cobot, attention on the table, and distracted (looking in some other direction). Recognition of these areas can be valuable in reducing the psychological strain on the operator and, in turn, improving the collaboration experience. It is a reasonable heuristic that if the operator's attention is on the table, they are working on the sub-assembly. Similarly, if they are looking at the robot, they are plausibly waiting for the robot to bring its sub-assembly. If the cobot assembles faster and visibly waits for the operator, the operator might feel pressured to speed up in order to synchronize with the cobot. So, if the operator is still assembling (attention on the table), the robot should wait inconspicuously or proceed to assemble the next part until the operator is ready. On the other hand, if the operator is faster and is waiting for the cobot (looking at the cobot), then the cobot should increase its pace to avoid boredom.

Figure 1. A schematic overview of the experimental workspace with a representation of the complete sub-assembly at the top left corner.

Our ultimate goal is to enable the cobot to adapt its behavior in response to the operator's social and affective cues (e.g. gaze). As a first step towards this goal, we train a gaze-based attention recognition model to detect the area the user is currently focusing on. We consider a transfer learning approach to train our attention recognition model. Transfer learning involves using the parameters learned for task A to train a related task B. Since our attention classification is based on the gaze of the operator, gaze estimation is an ideal source task. The training procedures for the gaze estimation and attention recognition models are described below. The deep learning models are implemented using TensorFlow and trained on an NVIDIA GeForce GTX 1060 6GB GPU. ### Datasets #### 4.1.1. ETH-XGaze ETH-XGaze (Krizhevsky et al., 2014) is a large image dataset available for training gaze estimation models. The dataset contains over a million high-resolution images collected from 110 participants varying in gender, age, ethnicity, etc. It provides variations in gaze including extreme gaze angles, head poses, and differences in illumination. 
The ground-truth labels for 80 participants are available for training models, while the remaining labels (test set) have been withheld. #### 4.1.2. Visual Attention Dataset The _Visual Attention dataset_ is composed of images collected from 8 adult volunteers (3 females and 5 males, age: \(18-34\) years). A Logitech C920 Pro HD webcam is set up in front of the table, on the available support structure, about 1.5 meters from the worker, as shown in Figure 1. Each participant is asked to stand in front of the camera and to look either towards the cobot, the work table, or anywhere else, with different configurations of their head orientation and gaze direction. For each of the three conditions, 30 pictures (\(1920\times 1080\)) per person are collected and labeled, accounting for a total of 720 images. ### Gaze Estimation Model We use the ETH-XGaze dataset (see Section 4.1.1) for training a gaze estimation model. We use the VGG16 (Krizhevsky et al., 2014) neural network architecture pre-trained on ImageNet (Krizhevsky et al., 2014). The VGG16 network is connected to a fully connected layer, followed by a prediction layer for estimating the gaze direction (pitch and yaw). We use the SGD optimizer (learning rate = 0.001), a batch size of 32, and Mean Absolute Error as the loss function. The input images are scaled to the standard VGG16 input dimension of \(224\times 224\). From the labeled dataset (80 participants), we reserve 8 participants for validation and the remaining are used for training. If the validation loss does not decrease for 5 epochs, we reduce the learning rate by a factor of 0.1. To avoid over-fitting, we employ an early-stopping mechanism, i.e., we stop the training if the validation loss has not decreased in the last 7 epochs. ### Transfer Learning Attention Recognition We employ a transfer learning technique for re-using the weights learned by the gaze estimation model. That is, we copy the weights of the gaze estimation model and freeze the convolutional layers. The weights of the frozen layers are not modified in further training steps. We modify the prediction layer to classify the input image into three classes (cobot, table, and distracted). The attention classes we chose are tailored to our setup. We use the Visual Attention Dataset to map the gaze direction to the defined areas using transfer learning. Here we use the SGD optimizer (learning rate = 0.01) with a batch size of 15 and Categorical Cross-Entropy as the loss function. The input images are first cropped based on MediaPipe face detection (Beng et al., 2015). Then, they are scaled to VGG16 input dimensions (\(224\times 224\)). During attention recognition training, we modify the brightness of the image in the range \(\pm 25\%\). We train and evaluate our model using leave-one-subject-out (LOSO) validation. This yields a total of 8 models (one per participant). We calculate the Accuracy and F1-score as the measures of attention recognition performance. A sketch of this two-stage training pipeline is given below. ## 5. Evaluation The LOSO validation measures how well the models recognize the attention of previously unseen participants. In addition, we explore the robustness of such a model during human-robot collaborative tasks. To this end, we collected and annotated video data of participants working with the cobot on a collaborative assembly task. We use this dataset to validate our gaze-based attention recognition model. 
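As referenced above, the two-stage pipeline of Sections 4.2 and 4.3 can be made concrete with a minimal TensorFlow/Keras sketch. This is not the authors' code: the width of the fully connected layer (256) and the exact way the classification head is attached are assumptions, since the paper specifies only the backbone, optimizers, losses, batch sizes, and the learning-rate/early-stopping schedule.

```python
import tensorflow as tf
from tensorflow.keras import layers, models, optimizers, callbacks

IMG_SHAPE = (224, 224, 3)  # standard VGG16 input size, as in the paper

def build_gaze_estimator():
    # Stage 1: VGG16 pre-trained on ImageNet, regressing gaze (pitch, yaw).
    backbone = tf.keras.applications.VGG16(
        include_top=False, weights="imagenet",
        input_shape=IMG_SHAPE, pooling="avg")
    x = layers.Dense(256, activation="relu")(backbone.output)  # FC width assumed
    gaze = layers.Dense(2, name="pitch_yaw")(x)
    model = models.Model(backbone.input, gaze)
    model.compile(optimizer=optimizers.SGD(learning_rate=1e-3),
                  loss="mean_absolute_error")  # MAE loss, batch size 32
    return model

def to_attention_classifier(gaze_model, n_classes=3):
    # Stage 2: freeze the convolutional layers and replace the prediction
    # layer with a 3-way softmax over {cobot, table, distracted}.
    for layer in gaze_model.layers:
        if isinstance(layer, layers.Conv2D):
            layer.trainable = False
    features = gaze_model.layers[-2].output  # penultimate fully connected layer
    out = layers.Dense(n_classes, activation="softmax")(features)
    clf = models.Model(gaze_model.input, out)
    clf.compile(optimizer=optimizers.SGD(learning_rate=1e-2),
                loss="categorical_crossentropy",  # batch size 15
                metrics=["accuracy"])
    return clf

# Schedule from the paper: drop the learning rate by a factor of 0.1 after
# 5 stagnant validation epochs, stop after 7; a brightness jitter of +/-25%
# is additionally applied to the inputs during stage 2.
fit_callbacks = [
    callbacks.ReduceLROnPlateau(monitor="val_loss", factor=0.1, patience=5),
    callbacks.EarlyStopping(monitor="val_loss", patience=7),
]
```

A LOSO run would then build and fit the classifier eight times, holding out one participant at a time.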
### Assembly Task Dataset Our _Assembly Task Dataset_ was collected in a lab setting that emulates an Industry 4.0 assembly process using the setup described in Section 3. We collected video data from 8 healthy adult participants (5 females and 3 males, age: \(18-30\) years). Each participant is asked to work as an operator on the task for 3.5 hours a day, for 5 consecutive days. This simulates the experience of an operator working for a week. We aimed to obtain a naturalistic human-robot collaboration dataset and hence, we did not guide the gaze of the participants. Using the Logitech C920 Pro HD webcam, three sessions of approximately 10 minutes each are recorded (\(1280\times 720\), 25 fps) during the first workday (beginning, middle, and end of the workday). Likewise, three additional videos are acquired during the last workday of the experiment. With this approach, one hour of video for each participant, for a total of 8 hours of recording, is available and exploited in this study to test and validate the gaze-based attention recognition model. ### Annotations The assembly task has mainly three phases - Gathering parts, Assembly, and Collaborative joining of sub-assemblies. Figure 2 depicts one assembly cycle involving these three phases. The assembly task is shared equally between the cobot and the human operator. However, there is a disparity in speed between the cobot and the operator, which results in considerable waiting time. This leads to the operator either looking at the cobot or in a random direction (looking at the watch, window, etc.). We annotated the videos indicating the activity the operator is doing. We label the waiting sequences depending on whether the operator is looking at the cobot or elsewhere (distracted). ### Test Set Naturally, some of the labels can be mapped to our attention classification. During the assembly phase, the operator typically looks down at the table. Similarly, the waiting labels can be mapped to the attention on cobot and distracted classes. Figure 3 shows examples of images from the Assembly Task Dataset that are labeled depending on the context. Each video segment corresponding to a label lasts a few seconds. There is very little variance within a segment. Hence, we choose three representative frames from a segment - the first, middle, and last frames. First, we detect the face and crop the image. We reject the image if face detection fails. We noticed that, because of the movements, some of the images were blurry. Additionally, in a few cases, the face was occluded by some objects. We reject the images where the eyes are not clearly visible. This yields a test set with 833 images of attention to cobot, 940 images of attention on the table, and 962 images of distraction; a sketch of this preprocessing appears below. ## 6. Results and Discussions In this section, we discuss the results of evaluations conducted on our attention recognition models. The results of LOSO validation are presented in Table 1. Model\(i\) refers to the model trained using the data of the \(i^{th}\) participant as the validation set and the remaining as the training set. All the models perform well in LOSO validation, scoring an average Accuracy of 94.3% and F1-score of 94%. Model3 performs the best (Accuracy = 99%, F1-score = 99%). We evaluate these models in a collaborative assembly scenario using the Assembly Task Dataset. The results of this evaluation are presented in Table 2. 
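As an aside before discussing these results: the test-set preprocessing referenced above (three representative frames per segment, MediaPipe face crop, rejection on failed detection) can be sketched as follows. The helper names and detector settings are illustrative and not taken from the paper.

```python
import cv2
import mediapipe as mp

def representative_frames(frames):
    # First, middle, and last frame of an annotated segment, as in the paper.
    return [frames[0], frames[len(frames) // 2], frames[-1]]

def crop_face(frame_bgr, size=224):
    # Returns a cropped, resized face image, or None when face detection
    # fails (such frames are rejected from the test set).
    with mp.solutions.face_detection.FaceDetection(
            min_detection_confidence=0.5) as detector:
        result = detector.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if not result.detections:
        return None
    box = result.detections[0].location_data.relative_bounding_box
    h, w = frame_bgr.shape[:2]
    x0, y0 = max(int(box.xmin * w), 0), max(int(box.ymin * h), 0)
    x1 = min(int((box.xmin + box.width) * w), w)
    y1 = min(int((box.ymin + box.height) * h), h)
    return cv2.resize(frame_bgr[y0:y1, x0:x1], (size, size))
```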
Although the recalls are slightly lower than on the Visual Attention Dataset, the models perform well and are robust in the assembly setting. All the models have similar performance and achieve an Accuracy and F1-score of \(81-82\%\). In all the models, the Distracted class is the main contributor to the reduction in performance, followed by the Attention to Cobot class. To gain insights into this reduction in recall, we first investigate the confusion matrices of predictions from the models. In Figure 4, we visualize the confusion matrix corresponding to Model7 predictions on the Assembly Task Dataset. Interestingly, most of the misclassified Distracted images are predicted as Attention to the table. All the models yield similar confusion matrices.

Figure 2. An image sequence visualizing the collaborative assembly task. From left to right - the operator brings the parts to the table (gathering parts), then proceeds to assemble their part of the assembly, and finally the operator and cobot collaboratively join the sub-assemblies.

Figure 3. Some example test images from the Assembly Task Dataset. From left to right - Attention to the cobot (while waiting), Attention to the table (while assembling), and Distracted (while waiting).

\begin{table} \begin{tabular}{l c c c c c} \hline \hline Model & Recall Cobot & Recall Table & Recall Distracted & Accuracy & F1-score \\ \hline Model1 & 1.0 & 1.0 & 0.90 & 0.97 & 0.97 \\ Model2 & 0.97 & 0.87 & 0.90 & 0.91 & 0.91 \\ Model3 & 1.0 & 1.0 & 0.97 & 0.99 & 0.99 \\ Model4 & 0.97 & 1.0 & 0.97 & 0.98 & 0.98 \\ Model5 & 0.93 & 1.0 & 0.93 & 0.96 & 0.95 \\ Model6 & 0.90 & 1.0 & 0.89 & 0.93 & 0.93 \\ Model7 & 0.83 & 0.90 & 0.74 & 0.83 & 0.82 \\ Model8 & 1.0 & 1.0 & 0.90 & 0.97 & 0.97 \\ \hline Average & & & & 0.943 & 0.94 \\ \hline \hline \end{tabular} \end{table} Table 1: Results of LOSO Evaluation on Visual Attention Dataset

Figure 4: Confusion matrix for Model7 predictions on Assembly Task Dataset

\begin{table} \begin{tabular}{l c c c c c} \hline \hline Model & Recall Cobot & Recall Table & Recall Distracted & Accuracy & F1-score \\ \hline Model1 & 0.85 & 0.98 & 0.61 & 0.81 & 0.81 \\ Model2 & 0.87 & 0.95 & 0.66 & 0.82 & 0.82 \\ Model3 & 0.87 & 0.95 & 0.65 & 0.82 & 0.82 \\ Model4 & 0.83 & 0.95 & 0.67 & 0.81 & 0.82 \\ Model5 & 0.89 & 0.98 & 0.61 & 0.82 & 0.82 \\ Model6 & 0.86 & 0.96 & 0.62 & 0.81 & 0.81 \\ Model7 & 0.87 & 0.96 & 0.63 & 0.82 & 0.82 \\ Model8 & 0.83 & 0.94 & 0.67 & 0.82 & 0.82 \\ \hline Average & & & & 0.816 & 0.818 \\ \hline \hline \end{tabular} \end{table} Table 2: Results of Evaluation on Assembly Task Dataset

We further investigate this trend by manually inspecting the misclassified images. We observed that, in many cases, the participants were distracted by items on the table. For example, the participants often look at the sub-assembly on the table while waiting. Figure 5 shows some examples of Distracted images where the participants are looking in the direction of the table even though they are not assembling parts. Based on our inspection, we hypothesize that the classification performance could be further improved by incorporating additional data such as the proximity of the hand to the table and the body pose of the operator. Through this study, we demonstrate the effectiveness of our gaze-based attention recognition model in an industrial scenario involving human-robot collaboration. 
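For reference, the per-class recalls, Accuracy, and F1-scores of Tables 1 and 2, as well as the confusion matrix of Figure 4, can be reproduced from raw predictions with scikit-learn. The label arrays below are placeholders, and the F1 averaging mode is an assumption, as the paper does not state it.

```python
import numpy as np
from sklearn.metrics import (accuracy_score, confusion_matrix,
                             f1_score, recall_score)

CLASSES = ["cobot", "table", "distracted"]
y_true = np.array([0, 0, 1, 1, 2, 2])  # placeholder ground-truth labels
y_pred = np.array([0, 0, 1, 1, 1, 2])  # placeholder model predictions

print(confusion_matrix(y_true, y_pred))              # cf. Figure 4
print(recall_score(y_true, y_pred, average=None))    # Recall per class
print(accuracy_score(y_true, y_pred))                # Accuracy column
print(f1_score(y_true, y_pred, average="weighted"))  # F1-score column
```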
Typically, studies (Beng et al., 2016; Chen et al., 2017; Chen et al., 2018) evaluate their models in a setup where the participant is stationary and their gaze behavior is guided (specifically asked to look at certain objects/areas). This type of evaluation is more similar to the LOSO evaluation that we performed using the Visual Attention Dataset. However, such an evaluation lacks variation in gaze, extreme head poses, etc. Hence, we conducted an additional evaluation of our model on the Assembly Task Dataset, which is derived from human-robot collaboration sessions. This resulted in a test set containing varying head poses, distances from the camera, etc. We did not envision the participants getting distracted by sub-assemblies on the table. Our observations highlight the need for evaluating attention recognition models in a non-guided setting. ## 7. Conclusion In this work, we presented a human-robot collaborative assembly scenario inspired by an Industry 4.0 use case. As a first step towards human-centered adaptation of cobot behavior, we developed a gaze-based attention recognition model. Based on our assembly task setup, we defined areas of interest as recognition classes - cobot, table, and anywhere else (distracted). We collected a small dataset where the participants were instructed to look in these directions. We used this dataset to map the gaze directions to areas of interest by performing transfer learning on a gaze estimation model. In a leave-one-subject-out evaluation, our models performed well, achieving an average Accuracy and F1-score of approx. 94%. We further evaluated our models using video snippets of participants from week-long assembly sessions using our HRC setup. Our models again performed well, but the recall of the Distracted class was lower. Upon manual inspection of the dataset, we found that there were many instances where the participants were distracted by sub-assemblies on the table (i.e., they were looking at the table even when they were not assembling).

Figure 5. A few examples of Distracted images misclassified as attention to the table.

Our observations highlight the need for validating attention models without any guided gaze behavior, a step that is usually missing in prior works. Our transfer learning approach demonstrates that it is feasible to recognize human attention by fine-tuning pre-trained models, which can be deployed in realistic application scenarios where human operators work on assembly tasks with a cobot. The investigation of the benefit of gaze-based attention recognition in an interactive real-time setting with a cobot will be part of our future research. We also plan to investigate a multi-modal approach using additional non-intrusive data to improve our attention recognition. ## Acknowledgments This work is funded by the European Union's Horizon 2020 research and innovation programme under grant agreement No 847926 MindBot.
2307.02085
Finite period vectors and Gauss sums
We study four sums including the Jacquet--Piatetski-Shapiro--Shalika, Flicker, Bump--Friedberg, and Jacquet--Shalika sums associated to irreducible cuspidal representations of general linear groups over finite fields. By computing explicitly, we relate Asai and Bump--Friedberg gamma factors over finite fields to those over nonarchimedean local fields through level zero supercuspidal representations. Via Deligne--Kazhdan close field theory, we prove that exterior square and Bump--Friedberg gamma factors agree with corresponding Artin gamma factors of their associated tamely ramified representations through the local Langlands correspondence. We also deduce product formulae for Asai, Bump--Friedberg, and exterior square gamma factors in terms of Gauss sums. By combining these results, we examine Jacquet--Piatetski-Shapiro--Shalika, Flicker--Rallis, Jacquet--Shalika, and Friedberg--Jacquet periods and vectors and their connections to Rankin--Selberg, Asai, exterior square, and Bump--Friedberg gamma factors, respectively.
Yeongseong Jo
2023-07-05T07:48:04Z
http://arxiv.org/abs/2307.02085v2
# Finite period vectors and Gauss sums ###### Abstract. We study four sums including the Jacquet-Piatetski-Shapiro-Shalika, Flicker, Bump-Friedberg, and Jacquet-Shalika sums associated to irreducible cuspidal representations of general linear groups over finite fields. By computing explicitly, we relate Asai and Bump-Friedberg gamma factors over finite fields to those over nonarchimedean local fields through level zero supercuspidal representations. Via Deligne-Kazhdan close field theory, we prove that exterior square and Bump-Friedberg gamma factors agree with corresponding Artin gamma factors of their associated tamely ramified representations through the local Langlands correspondence. We also deduce product formulae for Asai, Bump-Friedberg, and exterior square gamma factors in terms of Gauss sums. By combining these results, we examine Jacquet-Piatetski-Shapiro-Shalika, Flicker-Rallis, Jacquet-Shalika, and Friedberg-Jacquet periods and vectors and their connections to Rankin-Selberg, Asai, exterior square, and Bump-Friedberg gamma factors, respectively. Key words and phrases:Close field theory, Gamma and epsilon factors, Gauss sums, Integral representations, Level zero representations, Period vectors and integrals 2020 Mathematics Subject Classification: Primary : 11F70; Secondary: 11F66, 20C33, 22E50 ## 1. Introduction In classical analytic number theory and related branches of mathematics, one of the main themes is to analyze a complex-valued arithmetic function called a _Dirichlet character_. One of its prominent properties is what is known as the _functional equation_ establishing the symmetry across the critical strip. Viewing Euler's \(\Gamma\)-function as the \(L\)-factor in the archimedean context, the symmetric version of this global functional equation involves the epsilon function, which can be presented as the product of the _classical Gauss sum_ and the conductor. In the 1960s, the analytic paradigm for understanding Dirichlet characters shifted from real or complex analytic functions to the study of automorphic forms on \(\mathrm{GL}_{n}\) and automorphic representations of \(\mathrm{GL}_{n}\). This naturally leads us to ask ourselves whether Gauss sums that appear in the global epsilon function have a representation theoretic interpretation. Since representations of nonarchimedean local fields \(F\) occur as factors of cuspidal representations, it is not so surprising to find such an interpretation for a pair of supercuspidal representations \(\rho_{1}\) and \(\rho_{2}\) of \(\mathrm{GL}_{n}(F)\) and \(\mathrm{GL}_{r}(F)\). In particular, when \(\rho=\rho_{1}\) and \(\rho_{2}=\mathbf{1}_{F^{\times}}\) is the trivial representation of \(\mathrm{GL}_{1}(F)\), the formula given in [7, 8] defines the _Godement-Jacquet gamma factor_ \(\Gamma(s,\rho,\psi_{F})\) [19] in terms of _non-abelian Gauss sums_, where \(\psi_{F}\) is a fixed non-trivial additive character of \(F\). The identical Gauss sum emerges in _Tate's local gamma factor_ for \(n=1\), and in the seminal book of Bushnell and Henniart [9], albeit for \(n=2\). In the twisted case, we find the explicit formula for the _Rankin-Selberg gamma factor_ \(\Gamma(s,\rho_{1}\times\rho_{2},\psi_{F})\) only as far as the conductor of the local constant is concerned [10]. Regarding many questions about representations of nonarchimedean local fields, oftentimes insights can be gained by inspecting the analogous question over a finite field \(\mathbb{F}_{q}\). 
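Since the classical Gauss sum is the recurring object throughout this introduction, a small numeric sanity check may help fix ideas. The sketch below is illustrative only (the prime \(p=7\) and the primitive root are arbitrary choices, not from the paper): it computes \(g(\chi,\psi)=\sum_{x\in\mathbb{F}_p^{\times}}\chi(x)\psi(x)\) for a non-trivial multiplicative character \(\chi\) and verifies the classical identity \(|g(\chi,\psi)|=\sqrt{p}\).

```python
import cmath
import math

p = 7   # an arbitrary prime (illustrative)
g0 = 3  # a primitive root mod 7: its powers exhaust the units of F_7

def psi(x):
    # Additive character psi(x) = exp(2*pi*i*x/p).
    return cmath.exp(2j * cmath.pi * x / p)

# Non-trivial multiplicative character chi(g0^k) = exp(2*pi*i*k/(p-1)),
# built from a discrete logarithm table.
log_table = {pow(g0, k, p): k for k in range(p - 1)}

def chi(x):
    return cmath.exp(2j * cmath.pi * log_table[x] / (p - 1))

gauss = sum(chi(x) * psi(x) for x in range(1, p))
assert abs(abs(gauss) - math.sqrt(p)) < 1e-9  # |g(chi, psi)| = sqrt(p)
print(abs(gauss), math.sqrt(p))
```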
Before the Rankin-Selberg gamma factor \(\Gamma(s,\rho_{1}\times\rho_{2},\psi_{F})\) was established in the pioneering work of Jacquet-Piatetski-Shapiro-Shalika (cf. [39, Theorem 1.2]), the parallel gamma factors \(\gamma(\pi_{1}\times\pi_{2},\psi)\) associated to a pair of irreducible generic representations \(\pi_{1}\) and \(\pi_{2}\) of \(\mathrm{GL}_{n}(\mathbb{F}_{q})\) and \(\mathrm{GL}_{r}(\mathbb{F}_{q})\) had already been investigated in Piatetski-Shapiro's unpublished note [43]. The finite gamma factor \(\gamma(\pi_{1}\times\pi_{2},\psi)\), where now \(\pi_{2}\) is a multiplicative character of \(\mathbb{F}_{q}^{\times}\), is revisited by Nien [37, 39] in the hope of resolving the local converse theorem and distinction problems [38], following the lead from the nonarchimedean local field situation. As a by-product, Nien does something more intriguing, namely showing that \(\gamma(\pi_{1}\times\pi_{2},\psi)\) is associated to the _abelian Gauss sum_ [37, Theorem 1.1]. In this article, we put the Asai, Bump-Friedberg, and exterior square settings on an equal footing with the Rankin-Selberg setting by expressing such finite gamma factors in terms of abelian Gauss sums. We summarize our main results concerning an irreducible cuspidal representation \(\pi\) of \(\operatorname{GL}_{n}(\mathbb{F}_{q})\), or \(\operatorname{GL}_{n}(\mathbb{F}_{q^{2}})\) if necessary, as follows: * In Theorem 3.2, the _Asai gamma factor_ \(\gamma(\pi,\operatorname{As},\psi)\) is defined as a proportionality of bilinear forms arising from the _Flicker sums_ given by (3.1). We prove the product formula for \(\gamma(\pi,\operatorname{As},\psi)\) in terms of abelian Gauss sums (Theorem 3.13). * The _exterior square gamma factor_ \(\gamma(\pi,\wedge^{2},\psi)\) is defined as a proportionality of bilinear forms arising from the _Jacquet-Shalika sums_ given in (4.1) and (4.2). We prove the product formula for \(\gamma(\pi,\wedge^{2},\psi)\) in terms of abelian Gauss sums (Theorem 5.21). * In Theorem 5.2, the _Bump-Friedberg gamma factor_ \(\gamma(\pi,\operatorname{BF},\psi)\) is defined as a proportionality of bilinear forms arising from the _Bump-Friedberg sums_ given by (5.1) and (5.2). This represents \(\varepsilon_{0}(\varphi,\psi)\varepsilon_{0}(\wedge^{2}\circ\varphi,\psi)\), the product of Deligne's arithmetic standard \(\varepsilon_{0}\)-factor and Deligne's arithmetic exterior square \(\varepsilon_{0}\)-factor (Proposition 5.20). We prove the product formula for \(\gamma(\pi,\operatorname{BF},\psi)\) in terms of abelian Gauss sums (Theorem 5.21). Returning to the motivating question over the finite field \(\mathbb{F}_{q}\), Nien and Zhang [40, Conjecture 2.2] subsequently propose a conjectural formula in terms of abelian Gauss sums for the Rankin-Selberg gamma factor \(\gamma(\pi_{1}\times\pi_{2},\psi)\) for a pair of irreducible cuspidal representations \(\pi_{1}\) and \(\pi_{2}\) of \(\operatorname{GL}_{n}(\mathbb{F}_{q})\) and \(\operatorname{GL}_{r}(\mathbb{F}_{q})\) with different ranks \(n\neq r\). A slightly modified formula is settled by Yang [48, Theorem 0.1] and, independently, by Ye-Zelingher [51, Corollary 4.5]. Afterwards, Zelingher generalizes the method to \(\gamma(\pi_{1}\times\pi_{2},\psi)\) with the same rank \(n=r\) [52, Theorem 2.18]. A key strategy for establishing the explicit formula boils down to computing gamma factors for level zero supercuspidal representations. 
When an irreducible cuspidal representation \(\pi\) (or a pair of irreducible cuspidal representations \(\pi_{1}\) and \(\pi_{2}\) on occasion) does not possess suitable period vectors, we can further relate the gamma factors with their counterparts for irreducible cuspidal representations over finite fields. The calculation of Rankin-Selberg gamma factors \(\Gamma(s,\rho_{1}\times\rho_{2},\psi_{F})\) for a pair of irreducible cuspidal representations \(\rho_{1}\) and \(\rho_{2}\) is attributed to Ye [49], and later Ye and Zelingher [50] treated exterior square gamma factors \(\Gamma(s,\rho,\wedge^{2},\psi_{F})\). However, the computation for the Asai gamma factor and the Bump-Friedberg gamma factor is newly explored in this paper. Let us give precise statements here pertaining to level zero supercuspidal representations \(\rho\) of \(\operatorname{GL}_{n}(F)\), or \(\operatorname{GL}_{n}(E)\) with \(E\) an unramified quadratic extension over \(F\) if necessary. * The _Asai gamma factor_ \(\Gamma(s,\rho,\operatorname{As},\psi_{F})\) satisfies the local functional equation given in (3.3). We prove in Theorem 3.10 that this is equal to the rational function \[q^{n(s-1/2)}\omega_{\rho}^{-1}(\varpi)L(n(1-s),\omega_{\rho}^{-1}\upharpoonright_{F^{\times}})/L(ns,\omega_{\rho}\upharpoonright_{F^{\times}})\] in \(\mathbb{C}(q^{-s})\) if \(n=2m+1\) and \(\pi\) has a _Flicker-Rallis vector_, and to a complex number \(\gamma(\pi,\operatorname{As},\psi)\) otherwise. * The _Bump-Friedberg gamma factor_ \(\Gamma(s,t,\rho,\operatorname{BF},\psi_{F})\) satisfies the local functional equation given in (5.3). We prove in Theorem 5.9 that this equals the rational function \[\varepsilon(s,\rho,\psi_{F})q^{m\left(2s-\frac{1}{2}\right)}\omega_{\rho}^{-1}(\varpi)L(m(1-2s),\omega_{\rho}^{-1})/L(2ms,\omega_{\rho})\] in \(\mathbb{C}(q^{-2s})\) if \(n=2m\) and \(\pi\) has a _Friedberg-Jacquet vector_, and to a complex number \(\gamma(\pi,\operatorname{BF},\psi)\) otherwise. Another main ingredient toward the product formula is to associate analytic gamma factors over finite fields with the corresponding arithmetic \(\varepsilon_{0}\)-factor of Deligne. An instant benefit of such a definition is that arithmetic \(\varepsilon_{0}\)-factors inherit a multiplicative property, which in turn makes it feasible to express them as products of Gauss sums [51, Theorem 2.4]. Part of the reason that Ye and Zelingher [51, §5] primarily considered the product formula for the Rankin-Selberg factor \(\gamma(\pi_{1}\times\pi_{2},\psi)\) is that the matching between the Jacquet-Shalika gamma factor \(\Gamma(s,\rho(\varphi),\wedge^{2},\psi_{F})\) and the Artin exterior square gamma factor \(\Gamma(s,\wedge^{2}(\varphi),\psi_{F})\) under the local Langlands correspondence was not available at that time. Another aim of this paper is to take up this issue and remove the constant ambiguity "\({}_{cf}\)" lingering in [51, §1]. We present an accurate statement regarding the identity in the following way: * Let \(\varphi\) be an \(n\)-dimensional tamely ramified representation of \(W_{F}\) corresponding to the level zero supercuspidal representation \(\rho(\varphi)\) of \(\operatorname{GL}_{n}(F)\) via the local Langlands correspondence. We prove in Theorem 4.9 and Theorem 5.16 that * \(\Gamma(s,\rho(\varphi),\wedge^{2},\psi_{F})=\Gamma(s,\wedge^{2}(\varphi),\psi_{F})\). * \(\Gamma(s,t,\rho(\varphi),\operatorname{BF},\psi_{F})=\varepsilon(s+t+1/2,\varphi,\psi_{F})\Gamma(2s,\wedge^{2}(\varphi),\psi_{F})\). 
In the late 1980s, Bump and Friedberg predicted that \(\Gamma(s,t,\rho(\varphi),\operatorname{BF},\psi_{F})\) is a product of the exterior square \(\gamma\)-factor and the standard \(\gamma\)-factor, based on the pattern in a spherical situation. We partially confirm their conjecture [6, Conjecture 4] for level zero supercuspidal representations. Our main tactic here is to utilize a globalization of level zero supercuspidal representations over local function fields equipped with a close field theory. In a series of versions of globalizing supercuspidal representations over number fields, one typically loses control of the local component of a cuspidal automorphic representation at exactly one place, an archimedean place. While the Langlands-Shahidi theory over archimedean local fields has been fairly well navigated since the seminal work of Shahidi [45], there has been little progress on the desired archimedean input for Bump-Friedberg and Jacquet-Shalika integrals. However, the globalization of level zero supercuspidal representations in positive characteristic gives rather good control at all places. Although one may sacrifice a few places, the necessary equalities of exterior square \(\gamma\)-factors for irreducible constituents of spherical representations at those bad places (Lemmata 4.4, 5.13) do not appear to be insurmountable. Having a solid matching of \(\gamma\)-factors for level zero supercuspidal representations over positive characteristic (Theorems 4.6, 5.14) in hand, we then incorporate \(\gamma\)-factors arising from Langlands-Shahidi methods and integral representations with Deligne-Kazhdan theory over close (nonarchimedean local) fields. Deligne proved that Artin local factors remain the same for parallel representations over close local fields via Deligne isomorphisms [14, Proposition 3.7.1]. An analogous result has been studied by Ganapathy and Lomeli [16, 17] for Langlands-Shahidi local factors on the analytic side, though at that time they only considered Kazhdan isomorphisms over sufficiently close fields. So far, there seems to have been no previous discourse in the literature about the very basic case of "1-close fields". To be precise, we will show that local exterior square \(\gamma\)-factors for level zero supercuspidal representations via Jacquet-Shalika integrals are compatible with the Kazhdan correspondence over 1-close fields (Propositions 4.8, 5.15). In doing so, we are able to transport the identity of \(\gamma\)-factors over positive characteristic to characteristic zero. The fact that the pole of a local \(L\)-function characterizes the existence of linear functionals for supercuspidal representations has been used to great effect in recent years in constructing multiplicative relations of \(L\)-factors [26, 27, 35, 36]. In particular, for a supercuspidal representation \(\rho_{1}\) of \(\operatorname{GL}_{n}(F)\) and an irreducible admissible generic representation \(\rho_{2}\) of \(\operatorname{GL}_{n}(F)\), the local Rankin-Selberg \(L\)-function \(L(s,\rho_{1}\times\rho_{2})\) is an important tool to detect whether \(\rho_{1}\) appears in the standard module that defines \(\check{\rho}_{2}\). This comes down to the observation that a twist of \(\rho_{1}\) occurs in the standard module for \(\check{\rho}_{2}\) if and only if the local \(L\)-function \(L(s,\rho_{1}\times\rho_{2})\) has a pole, the location of which determines that unramified twist [11]. 
Lately, Soudry and Zelingher [47, Theorem 1.3] suggest that the absolute value of the normalized finite Rankin-Selberg gamma factor \(\gamma^{\star}(\pi_{1}\times\pi_{2},\psi)\) might serve as an alternative to the order of the pole of the local \(L\)-function \(L(s,\rho_{1}\times\rho_{2})\). These results parallel analogous results of Cogdell and Piatetski-Shapiro in the nonarchimedean local field setting [11]. As has long been expected, for a pair of level zero supercuspidal representations \(\rho_{1}\) and \(\rho_{2}\) of \(\operatorname{GL}_{n}(F)\), we are able to show that the existence of a pole of \(L(s,\rho_{1}\times\rho_{2})\) indeed forces the absolute value of \(\gamma^{\star}(\pi_{1}\times\pi_{2},\psi)\) to be different from one, and vice versa. More precisely, we list below the periods and vectors of interest provided in Theorem 6.1, which naturally arise from certain local \(L\)-functions and particular finite gamma factors. * The occurrence of _Jacquet-Piatetski-Shapiro-Shalika periods_ and _vectors_ is equivalent to having a pole of the Rankin-Selberg \(L\)-factor \(L(s,\rho_{1}\times\rho_{2})\), or the absolute value of the _normalized_ Rankin-Selberg gamma factor \(\gamma^{\star}(\pi_{1}\times\pi_{2},\psi)\) being different from one. * The occurrence of _Flicker-Rallis periods_ and _vectors_ is equivalent to having a pole of the Asai \(L\)-factor \(L(s,\rho,\operatorname{As})\), or the absolute value of the Asai gamma factor \(\gamma(\pi,\operatorname{As},\psi)\) being different from one. * The occurrence of _Jacquet-Shalika periods_ and _vectors_ is equivalent to having a pole of the exterior square \(L\)-factor \(L(s,\rho,\wedge^{2})\), or the absolute value of the exterior square gamma factor \(\gamma(\pi,\wedge^{2},\psi)\) being different from one. * The occurrence of _Friedberg-Jacquet periods_ and _vectors_ is equivalent to having a pole of the Bump-Friedberg \(L\)-factor \(L(s,\rho,\operatorname{BF})\), or the absolute value of the Bump-Friedberg gamma factor \(\gamma(\pi,\operatorname{BF},\psi)\) being different from one. Via the theory of newforms for \(\operatorname{GL}_{n}\), the parallel results in the nonarchimedean setting were completed by the author [28], and the method was further generalized to the archimedean setting in joint work with Humphries [23]. We can further explore the absolute value of gamma factors for irreducible generic representations \(\pi\) of \(\operatorname{GL}_{n}(\mathbb{F}_{q})\), not just irreducible cuspidal ones. Soudry and Zelingher recently worked this out for the Rankin-Selberg gamma factor \(\gamma(\pi_{1}\times\pi_{2},\psi)\) over finite fields [47, Theorem 1.3]. Along the lines of this philosophy, it is likely that the so-called "multiplicativity" of gamma factors over finite fields needs to be well understood ahead of time. In the case of \(p\)-adic fields, it was the Langlands-Shahidi method that came to fruition first (cf. [31, 32]) and enabled Zelingher [52] to realize the goal of establishing multiplicativity via Shahidi gamma factors. It turns out that the computation had been investigated long before by Soudry, when he was a graduate student, in 1979 [47]. In joint work of Soudry and Zelingher, the finite field analogue of Shahidi gamma factors for pairs \((\pi_{1},\pi_{2})\) was completed very recently [47]. However, it is still required to show that the Shahidi gamma factor agrees with the Rankin-Selberg gamma factor \(\gamma(\pi_{1}\times\pi_{2},\psi)\) to validate this robust argument. 
The author has carried out a similar matching question for the Asai gamma factor \(\gamma(\pi,\operatorname{As},\psi)\), which will appear elsewhere. In conjunction with computing Rankin-Selberg sums for classical groups, the author currently pursues this topic in depth with Zelingher [52]. Although all these analyses seem to be doable, if somewhat involved, we think that our current formulation keeps our exposition to a reasonable length, and we plan to pursue them in the near future. The structure of this article is the following. Section 2 contains a brief review of the theory of Jacquet-Piatetski-Shapiro-Shalika sums and Rankin-Selberg gamma factors. We continue to survey Deligne-Kazhdan close field theory and give its application to Rankin-Selberg gamma factors. Sections 3 and 5 are devoted to presenting the analogous theory for Asai and Bump-Friedberg gamma factors, respectively. We deal with the exterior square gamma factor in Section 4, where the results are all essentially analogous, although several of the results regarding close field theory are also addressed there. We discuss the relation between period vectors, integrals, absolute values of gamma factors, and poles of \(L\)-factors in Section 6. ## 2. The Rankin-Selberg Gamma Factor We now detail the theory of Jacquet-Piatetski-Shapiro-Shalika sums as well as the relation between Jacquet-Piatetski-Shapiro-Shalika vectors and Rankin-Selberg \(\gamma\)-factors. The results herein are all well-known. However, the normalization of the Haar measure [52], the choice of subspace of Schwartz-Bruhat functions, and the Fourier transform [47, 49, 50] in the current literature are slightly different from those in this paper. We recall them as motivation for Sections 3 and 5, in which we discuss analogous yet new results for Asai and exterior square \(\gamma\)-factors. Section 2 serves to overview the breakdown of our computation that is repeated throughout the paper. ### The Jacquet-Piatetski-Shapiro-Shalika sum We let \(N_{n}\) be the unipotent radical of the standard Borel subgroup \(B_{n}\) of \(\operatorname{GL}_{n}\) and \(A_{n}\) the Levi subgroup of \(B_{n}\), consisting of diagonal matrices in \(\operatorname{GL}_{n}\). We denote by \(P_{n}\) the mirabolic subgroup of \(\operatorname{GL}_{n}\), consisting of matrices in \(\operatorname{GL}_{n}\) with last row equal to \((0,\cdots,0,1)\). We write \(1_{n}\) to denote the \(n\times n\) identity matrix. Let \(\mathbb{F}_{q}\) be a finite field of \(q=p^{k}\) elements with characteristic \(p\). Let \(\mathbb{F}=\mathbb{F}_{q}\). We fix a non-trivial additive character \(\psi:=\psi_{\mathbb{F}}\) of \(\mathbb{F}\), and extend it to a character of \(N_{n}(\mathbb{F})\) by setting \(\psi(n)=\psi(n_{1,2}+\cdots+n_{n-1,n})\) for \(n\in N_{n}(\mathbb{F})\). Let \(\mathcal{S}(\mathbb{F}^{n})=\{\phi\,|\,\phi:\mathbb{F}^{n}\to\mathbb{C}\}\) be the set of complex-valued functions on \(\mathbb{F}^{n}\). Let \(\{e_{i}\,|\,1\leq i\leq n\}\) be the standard row basis of \(\mathbb{F}^{n}\). We let \(\langle x,y\rangle:=x\cdot{}^{t}y\) be the standard bilinear form on \(\mathbb{F}^{n}\). The Fourier transform of \(\phi\in\mathcal{S}(\mathbb{F}^{n})\) with respect to \(\psi\) is given by (cf. 
[38, (2.3); 50, §2.1.1]) \[\mathcal{F}_{\psi}(\phi)(y)=q^{-\frac{n}{2}}\sum_{x\in\mathbb{F}^{n}}\phi(x)\psi(\langle x,y\rangle).\] The Fourier inversion formulas take the form \[(\mathcal{F}_{\psi}\circ\mathcal{F}_{\psi})(\phi)(x)=\phi(-x)\quad\text{and}\quad(\mathcal{F}_{\psi^{-1}}\circ\mathcal{F}_{\psi})(\phi)(x)=\phi(x).\] Given an irreducible cuspidal representation \(\pi\), we fix a non-trivial \(\operatorname{GL}_{n}(\mathbb{F})\)-invariant unitary form \((\cdot,\cdot)\) on \(V_{\pi}\times V_{\pi}\). Then there exists a non-trivial vector \(v_{0}\in V_{\pi}\) satisfying \(\pi(n)v_{0}=\psi(n)v_{0}\) for all \(n\in N_{n}(\mathbb{F})\). Such a vector \(v_{0}\) is called a _Whittaker vector_. A _Whittaker function_ of \(\pi\) is a matrix coefficient of the form \(W(g)=(\pi(g)v,v_{0})\) for all \(g\in\operatorname{GL}_{n}(\mathbb{F})\) and \(v\in V_{\pi}\). Whittaker functions satisfy \(W(ng)=\psi(n)W(g)\) for every \(n\in N_{n}(\mathbb{F})\). The subspace generated by all Whittaker functions \(W(g)\) is unique [18, Theorem 0.5], and will be denoted by \(\mathcal{W}(\pi,\psi)\) with \(\operatorname{GL}_{n}(\mathbb{F})\) acting by right translations. This space is called the _Whittaker model_ of \(\pi\). For an irreducible cuspidal representation \(\pi\) of \(\operatorname{GL}_{n}(\mathbb{F})\), we have its contragredient \(\check{\pi}\) that is isomorphic to \(\pi^{\iota}\), where \(\pi^{\iota}\) is the representation acting on the same underlying space \(V_{\pi}\) of \(\pi\) by \(\pi^{\iota}(g)=\pi(^{t}g^{-1})\) for \(g\in\operatorname{GL}_{n}(\mathbb{F})\). Under \(\check{\pi}\cong\pi^{\iota}\), we obtain an isomorphism of vector spaces \(\mathcal{W}(\pi,\psi)\to\mathcal{W}(\check{\pi},\psi^{-1})\), given by \(W_{\pi}\mapsto\check{W}_{\pi}\), where \[\check{W}_{\pi}(g)=W_{\pi}(w_{n}\,^{t}g^{-1}),\quad g\in\operatorname{GL}_{n}(\mathbb{F}), \tag{2.1}\] and where \(w_{n}=\begin{pmatrix}&&1\\ &\iddots&\\ 1&&\end{pmatrix}\), the antidiagonal matrix with all entries \(1\), is the longest Weyl element of \(\operatorname{GL}_{n}(\mathbb{F})\). The _Bessel function_ \(\mathcal{B}_{\pi,\psi}\) of \(\pi\) is the Whittaker function attached to the normalized Whittaker vector, given by \[\mathcal{B}_{\pi,\psi}(g)=\frac{(\pi(g)v_{0},v_{0})}{(v_{0},v_{0})}=W(1_{n})^{-1}W(g),\] where \(W(g)=(\pi(g)v_{0},v_{0})\). Let \(\omega_{\pi}\) denote the central character of \(\pi\) on \(\mathbb{F}^{\times}\). Some of the elementary properties of the Bessel function \(\mathcal{B}_{\pi,\psi}\) are (cf. [38, Proposition 2.8]): 1. \(\mathcal{B}_{\pi,\psi}(n_{1}gn_{2})=\psi(n_{1})\psi(n_{2})\mathcal{B}_{\pi,\psi}(g)\) for all \(n_{1},n_{2}\in N_{n}(\mathbb{F})\). 2. \(\mathcal{B}_{\pi,\psi}(1_{n})=1\) and \(\mathcal{B}_{\pi,\psi}(a1_{n})=\omega_{\pi}(a)\) for all \(a\in\mathbb{F}^{\times}\). 3. \(\mathcal{B}_{\pi,\psi}(g^{-1})=\overline{\mathcal{B}_{\pi,\psi}(g)}=\mathcal{B}_{\check{\pi},\psi^{-1}}(g)\) for all \(g\in\operatorname{GL}_{n}(\mathbb{F})\). Let \(\pi_{1}\) and \(\pi_{2}\) be irreducible cuspidal representations of \(\operatorname{GL}_{n}(\mathbb{F})\). Let \(\mathcal{S}_{0}(\mathbb{F}^{n})\) denote the set of \(\mathbb{C}\)-valued functions \(\phi\) on \(\mathbb{F}^{n}\) such that \(\phi(0)=0\). 
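Before stating the functional equation, the normalization \(q^{-n/2}\) of the finite Fourier transform above can be checked numerically. The following sketch is illustrative only (\(p\) and \(n\) are arbitrary choices); it verifies both inversion formulas on \(\mathbb{F}_p^{n}\) with \(\psi(x)=e^{2\pi ix/p}\).

```python
import cmath
import itertools
import random

p, n = 5, 2  # arbitrary prime and dimension (illustrative)
q = p        # here F = F_p, so q = p
points = list(itertools.product(range(p), repeat=n))

def psi(t):
    # Additive character of F_p.
    return cmath.exp(2j * cmath.pi * (t % p) / p)

def inner(x, y):
    # Standard bilinear form <x, y>.
    return sum(a * b for a, b in zip(x, y))

def fourier(phi, sign=+1):
    # F_psi for sign=+1 and F_{psi^{-1}} for sign=-1, normalized by q^{-n/2}.
    return {y: q ** (-n / 2) * sum(phi[x] * psi(sign * inner(x, y))
                                   for x in points)
            for y in points}

random.seed(0)
phi = {x: complex(random.random(), random.random()) for x in points}

back = fourier(fourier(phi), sign=-1)   # (F_{psi^{-1}} o F_psi)(phi) = phi
assert all(abs(back[x] - phi[x]) < 1e-9 for x in points)

twice = fourier(fourier(phi))           # (F_psi o F_psi)(phi)(x) = phi(-x)
neg = lambda x: tuple((-a) % p for a in x)
assert all(abs(twice[x] - phi[neg(x)]) < 1e-9 for x in points)
print("Both Fourier inversion formulas verified.")
```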
For every \(W_{\pi_{1}}\in\mathcal{W}(\pi_{1},\psi)\) and \(W_{\pi_{2}}\in\mathcal{W}(\pi_{2},\psi^{-1})\), and for any \(\phi\in\mathcal{S}_{0}(\mathbb{F}^{n})\), there exists a complex number \(\gamma^{\star}(\pi_{1}\times\pi_{2},\psi)\) such that \[\gamma^{\star}(\pi_{1}\times\pi_{2},\psi)\sum_{g\in N_{n}(\mathbb{F})\setminus\operatorname{GL}_{n}(\mathbb{F})}W_{\pi_{1}}(g)W_{\pi_{2}}(g)\phi(e_{n}g)=\sum_{g\in N_{n}(\mathbb{F})\setminus\operatorname{GL}_{n}(\mathbb{F})}W_{\pi_{1}}(g)W_{\pi_{2}}(g)\mathcal{F}_{\psi}(\phi)(e_{1}{}^{t}g^{-1}). \tag{2.2}\] _Remark 2.1_.: As explained in [38, p.23 footnote], we normalize the Fourier transform and gamma factors differently from what has been commonly adopted at least since Roditty's master thesis (cf. [49, §2.2]). In order to distinguish the normalized gamma factor from the unnormalized one \(\gamma(\pi_{1}\times\pi_{2},\psi)\) in [49, §2.2], we add the superscript \(\star\) to emphasize the normalization. In doing so, the absolute value of \(\gamma^{\star}(\pi_{1}\times\pi_{2},\psi)\) becomes \(1\), as shown in Theorem 2.5. For \(W_{\pi_{1}}\in\mathcal{W}(\pi_{1},\psi)\) and \(W_{\pi_{2}}\in\mathcal{W}(\pi_{2},\psi^{-1})\), and \(\phi\in\mathcal{S}(\mathbb{F}^{n})\), we define the _Jacquet-Piatetski-Shapiro-Shalika sum_ \(\Psi(W_{\pi_{1}},W_{\pi_{2}},\phi)\) by \[\Psi(W_{\pi_{1}},W_{\pi_{2}},\phi):=\sum_{g\in N_{n}(\mathbb{F})\setminus\operatorname{GL}_{n}(\mathbb{F})}W_{\pi_{1}}(g)W_{\pi_{2}}(g)\phi(e_{n}g).\] In a similar manner, the _dual Jacquet-Piatetski-Shapiro-Shalika sum_ \(\check{\Psi}(W_{\pi_{1}},W_{\pi_{2}},\phi)\) is defined by \[\check{\Psi}(W_{\pi_{1}},W_{\pi_{2}},\phi):=\sum_{g\in N_{n}(\mathbb{F})\setminus\operatorname{GL}_{n}(\mathbb{F})}\check{W}_{\pi_{1}}(g)\check{W}_{\pi_{2}}(g)\mathcal{F}_{\psi}(\phi)(e_{n}g).\] A non-zero vector \(v_{1}\otimes v_{2}\in V_{\pi_{1}}\otimes V_{\pi_{2}}\) is called a _Jacquet-Piatetski-Shapiro-Shalika vector_ if, for every \(g\in\operatorname{GL}_{n}(\mathbb{F})\), we have \((\pi_{1}\otimes\pi_{2})(g)(v_{1}\otimes v_{2}):=\pi_{1}(g)v_{1}\otimes\pi_{2}(g)v_{2}=v_{1}\otimes v_{2}\). This condition is equivalent to \(\pi_{1}\cong\check{\pi}_{2}\). _Remark 2.2_.: The definition of \(\gamma^{\star}(\pi\times\check{\pi},\psi)\) taken in [50] differs slightly from the one herein by the inclusion of all Schwartz-Bruhat functions \(\phi\in\mathcal{S}(\mathbb{F}^{n})\) in the Jacquet-Piatetski-Shapiro-Shalika sum. As stressed in [50, Corollary 2.27], the functional equation in (2.2) does not hold when \(\pi_{1}\times\pi_{2}\) has the Jacquet-Piatetski-Shapiro-Shalika vector. In a pioneering work [43] (cf. [49, §2.2]), Piatetski-Shapiro had already realized that it is advantageous to restrict the space \(\mathcal{S}(\mathbb{F}^{n})\) to \(\mathcal{S}_{0}(\mathbb{F}^{n})\). In order to define gamma factors over the finite field uniformly for all irreducible cuspidal representations, Schwartz-Bruhat functions \(\phi\) are only taken over \(\mathcal{S}_{0}(\mathbb{F}^{n})\) throughout the paper. We express \(\gamma^{\star}(\pi_{1}\times\pi_{2},\psi)\) in terms of the Bessel functions associated with \(\pi_{1}\) and \(\pi_{2}\). **Theorem 2.3**.: _[_47_, Corollary 3.4; 49, Equation (16)]_ _Let \(\pi_{1}\) and \(\pi_{2}\) be irreducible cuspidal representations of \(\operatorname{GL}_{n}(\mathbb{F})\). 
Then_ \[\gamma^{\star}(\pi_{1}\times\pi_{2},\psi)=q^{-\frac{n}{2}}\sum_{g\in N_{n}(\mathbb{F})\setminus\operatorname{GL}_{n}(\mathbb{F})}\mathcal{B}_{\pi_{1},\psi}(g)\mathcal{B}_{\pi_{2},\psi^{-1}}(g)\psi(e_{1}{}^{t}g^{-1}\ {}^{t}e_{n}). \tag{2.3}\] _In particular, we have \(\gamma^{\star}(\check{\pi}_{1}\times\check{\pi}_{2},\psi^{-1})=\overline{\gamma^{\star}(\pi_{1}\times\pi_{2},\psi)}\)._ We can precisely evaluate the sum (2.3) in two different ways when \(\pi_{1}\times\pi_{2}\) has the Jacquet-Piatetski-Shapiro-Shalika vector. The method of Ye [49, Corollary 4.3] uses a system of linear equations and the theory of level zero supercuspidal representations, whereas Soudry and Zelingher [47, Theorem A.1] explicitly compute \(\gamma^{\star}(\pi\times\check{\pi},\psi)\) within the context of the representation theory of groups over finite fields. We transport the former approach to the Bump-Friedberg setting in Proposition 5.11, while the latter path is adapted to the Asai setting in Proposition 3.6 and the exterior square setting in Proposition 4.2. **Theorem 2.4**.: _[_47_, Theorem A.1; 49, Corollary 4.3]_ _Let \(\pi\) be an irreducible cuspidal representation of \(\operatorname{GL}_{n}(\mathbb{F})\). Then we have_ \[\gamma^{\star}(\pi\times\check{\pi},\psi)=-q^{-\frac{n}{2}}.\] We end this section by summarizing functional equations for \(\gamma^{\star}(\pi_{1}\times\pi_{2},\psi)\) over finite fields. The result is a direct consequence of Theorem 2.3 and Theorem 2.4: indeed, when \(\pi_{1}\cong\check{\pi}_{2}\), Theorem 2.3 gives \(\gamma^{\star}(\check{\pi}_{1}\times\check{\pi}_{2},\psi^{-1})=\overline{\gamma^{\star}(\pi_{1}\times\pi_{2},\psi)}\), so Theorem 2.4 yields \(\gamma^{\star}(\pi_{1}\times\pi_{2},\psi)\gamma^{\star}(\check{\pi}_{1}\times\check{\pi}_{2},\psi^{-1})=(-q^{-\frac{n}{2}})^{2}=q^{-n}\). **Theorem 2.5**.: (cf. [47, Corollary 2.2 and Proposition 2.5]) _Let \(\pi_{1}\) and \(\pi_{2}\) be irreducible cuspidal representations of \(\operatorname{GL}_{n}(\mathbb{F})\)._ 1. _If_ \(\pi_{1}\not\cong\check{\pi}_{2}\)_, we have_ \(\gamma^{\star}(\pi_{1}\times\pi_{2},\psi)\gamma^{\star}(\check{\pi}_{1}\times\check{\pi}_{2},\psi^{-1})=1\) _and_ \(|\gamma^{\star}(\pi_{1}\times\pi_{2},\psi)|=1\)_._ 2. _If_ \(\pi_{1}\cong\check{\pi}_{2}\)_, we have_ \(\gamma^{\star}(\pi_{1}\times\pi_{2},\psi)\gamma^{\star}(\check{\pi}_{1}\times\check{\pi}_{2},\psi^{-1})=q^{-n}\) _and_ \(|\gamma^{\star}(\pi_{1}\times\pi_{2},\psi)|=q^{-\frac{n}{2}}\)_._ ### The Jacquet-Piatetski-Shapiro-Shalika period and level zero supercuspidal representations Let \(F\) be a non-archimedean local field with its residual finite field \(\mathfrak{o}/\mathfrak{p}\cong\mathbb{F}_{q}\) of order \(q=q_{F}\). The base field \(F\) is a finite extension of \(\mathbb{Q}_{p}\) or \(\mathbb{F}_{p}((t))\), called a _\(p\)-adic field_ in characteristic \(0\), or a _local function field_ in characteristic \(p>0\). We write \(\mathfrak{o}:=\mathfrak{o}_{F}\) and \(\mathfrak{p}:=\mathfrak{p}_{F}\) for the ring of its integers and the maximal ideal, respectively. We fix a generator \(\varpi:=\varpi_{F}\) of \(\mathfrak{p}\) and normalize the absolute value \(|\cdot|\) of \(F\) so that \(|\varpi|=q^{-1}\). Let \(\psi_{F}\) be a fixed non-trivial additive character of \(F\) such that \(\psi_{F}\) is trivial on \(\mathfrak{p}\) and nontrivial on \(\mathfrak{o}\). The self-dual Haar measure for \(\psi_{F}\) [9, §23] then satisfies \[\int_{\mathfrak{o}}dx=q^{\frac{1}{2}}.\] For the purpose of calculation, it will be convenient to choose the Haar measure \(d^{\times}x\) on \(F^{\times}\) such that \[\int_{\mathfrak{o}^{\times}}d^{\times}x=1.\] We denote by \(\mathrm{pr}:\mathfrak{o}\to\mathfrak{o}/\mathfrak{p}\cong\mathbb{F}\) the quotient map. We define \(\psi(\mathrm{pr}(k))=\psi_{F}(k)\) for \(k\in\mathfrak{o}\). 
Let \(\mathcal{S}(F^{n})\) be the space of locally constant and compactly supported functions \(\Phi:F^{n}\to\mathbb{C}\). For \(\Phi\in\mathcal{S}(F^{n})\), we define its Fourier transform by

\[\mathcal{F}_{\psi_{F}}(\Phi)(y)=\int_{F^{n}}\Phi(x)\psi_{F}(\langle x,y\rangle)\,dx.\]

The Fourier inversion formulas are given by

\[(\mathcal{F}_{\psi_{F}}\circ\mathcal{F}_{\psi_{F}})(\Phi)(x)=\Phi(-x)\quad\text{and}\quad(\mathcal{F}_{\psi_{F}^{-1}}\circ\mathcal{F}_{\psi_{F}})(\Phi)(x)=\Phi(x).\]

For \(\phi\in\mathcal{S}(\mathbb{F}^{n})\), we define a lift \(\Phi_{\circ}\in\mathcal{S}(F^{n})\) of \(\phi\) by

\[\Phi_{\circ}(x)=\begin{cases}\phi(\mathrm{pr}(x)),&\text{if }x\in\mathfrak{o}^{n},\\ 0,&\text{otherwise}.\end{cases}\]

Then \(\mathcal{F}_{\psi_{F}}(\Phi_{\circ})\) is a lift of \(\mathcal{F}_{\psi}(\phi)\) [50, Proposition 3.6] in the sense that

\[\mathcal{F}_{\psi_{F}}(\Phi_{\circ})(x)=\begin{cases}\mathcal{F}_{\psi}(\phi)(\mathrm{pr}(x)),&\text{if }x\in\mathfrak{o}^{n},\\ 0,&\text{otherwise}.\end{cases}\]

We let \(K_{n}=\mathrm{GL}_{n}(\mathfrak{o})\) be the standard maximal compact subgroup of \(\mathrm{GL}_{n}(F)\). A _level zero supercuspidal representation_ of \(\mathrm{GL}_{n}(F)\) is given by

\[\rho\cong\operatorname{c-Ind}_{F^{\times}\mathrm{GL}_{n}(\mathfrak{o})}^{\mathrm{GL}_{n}(F)}\widetilde{\mu},\]

where the representation \(\mu:=\widetilde{\mu}\restriction_{\mathrm{GL}_{n}(\mathfrak{o})}\) is inflated from an irreducible cuspidal representation \(\pi\) of \(\mathrm{GL}_{n}(\mathbb{F})\) and the central character \(\omega_{\pi}\) of \(\pi\) satisfies \(\widetilde{\mu}\restriction_{\mathfrak{o}^{\times}=F^{\times}\cap\mathrm{GL}_{n}(\mathfrak{o})}(a)=\omega_{\pi}(\mathrm{pr}(a))\). We let \(\mathcal{A}_{0}(\mathrm{GL}_{n}(F))\) be the set of isomorphism classes of level zero supercuspidal representations of \(\mathrm{GL}_{n}(F)\) and let \(\mathcal{A}_{0}(\mathrm{GL}_{n}(\mathbb{F}))\) denote the set of equivalence classes of irreducible cuspidal representations of \(\mathrm{GL}_{n}(\mathbb{F})\). Then [48, §3.3] and [50, Theorem 3.5] give rise to a bijection

\[\mathcal{A}_{0}(\mathrm{GL}_{n}(F))\longleftrightarrow\mathbb{C}^{\times}\times\mathcal{A}_{0}(\mathrm{GL}_{n}(\mathbb{F}))\]
\[\rho\longleftrightarrow(\omega_{\rho}(\varpi),\pi). \tag{2.4}\]

The contragredient representation \(\check{\rho}\) of \(\mathrm{GL}_{n}(F)\) is again a level zero supercuspidal representation, constructed from the irreducible cuspidal representation \(\check{\pi}\) of \(\mathrm{GL}_{n}(\mathbb{F})\) [49, Lemma 2.1]. If \(W_{\rho}\in\mathcal{W}(\rho,\psi_{F})\), then \(\check{W}_{\rho}(g):=W_{\rho}(w_{n}\,^{t}g^{-1})\in\mathcal{W}(\check{\rho},\psi_{F}^{-1})\). We denote by \(\langle\cdot,\cdot\rangle\) the pairings on \(V_{\check{\rho}}\times V_{\rho}\) and \(V_{\check{\pi}}\times V_{\pi}\) given by evaluation. Let \(\lambda\in\mathrm{Hom}_{N_{n}(\mathbb{F})}(\pi,\psi)\) be a non-zero Whittaker functional of \(\pi\). We define the linear functional \(\lambda_{\circ}:V_{\rho}\to\mathbb{C}\) by

\[\langle\lambda_{\circ},f\rangle:=\int_{N_{n}(F)\cap K_{n}\setminus N_{n}(F)}\langle\lambda,f(u)\rangle\psi_{F}^{-1}(u)\,du,\]

where \(f\in V_{\rho}\). We view \(f\) as a function \(f:\operatorname{GL}_{n}(F)\to V_{\pi}\). Then \(\lambda_{\circ}\in\operatorname{Hom}_{N_{n}(F)}(\rho,\psi_{F})\) is a non-zero Whittaker functional of \(\rho\), which is a lift of \(\lambda\) [50, Proposition 3.7].
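The lift compatibility above reflects the normalization of the finite Fourier transform, \(\mathcal{F}_{\psi}(\phi)(y)=q^{-n/2}\sum_{x\in\mathbb{F}^{n}}\phi(x)\psi(\langle x,y\rangle)\), matching the self-dual measure; this is the same normalization that produces \(\mathcal{F}_{\psi}(\delta_{e_{n}})(y)=q^{-n/2}\psi(e_{n}{}^{t}y)\) in the proof of Theorem 3.4 below. A short numerical sanity check of both facts (a sketch only, under the assumed choices \(q=5\), \(n=2\), \(\psi(a)=e^{2\pi ia/q}\)):

```python
import cmath, itertools

q, n = 5, 2                          # illustrative choices
psi = lambda a: cmath.exp(2j * cmath.pi * (a % q) / q)

def fourier(phi, sign=1):
    # normalized transform: F_psi(phi)(y) = q^{-n/2} * sum_x phi(x) psi(sign*<x,y>)
    pts = list(itertools.product(range(q), repeat=n))
    return {y: q ** (-n / 2) * sum(phi[x] * psi(sign * sum(a * b for a, b in zip(x, y)))
                                   for x in pts)
            for y in pts}

# delta_{e_n}: indicator function of the row vector e_n = (0, ..., 0, 1)
delta = {x: float(x == (0,) * (n - 1) + (1,))
         for x in itertools.product(range(q), repeat=n)}
F = fourier(delta)
# F_psi(delta_{e_n})(y) = q^{-n/2} psi(y_n), as used in the proof of Theorem 3.4
print(all(abs(F[y] - q ** (-n / 2) * psi(y[-1])) < 1e-9 for y in F))      # True
# Fourier inversion: F_{psi^{-1}} after F_psi recovers the original function
print(all(abs(fourier(F, sign=-1)[x] - delta[x]) < 1e-9 for x in delta))  # True
```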
Let \(W_{\pi}\in\mathcal{W}(\pi,\psi)\) and let \(v_{W_{\pi}}\in V_{\pi}\) be the unique vector such that \(W_{\pi}(g)=\langle\lambda,\pi(g)v_{W_{\pi}}\rangle\) for every \(g\in\operatorname{GL}_{n}(\mathbb{F})\). Let \(\mathcal{K}:=F^{\times}K_{n}\). We define \(f_{W_{\pi}}\) to be

\[f_{W_{\pi}}(g)=\begin{cases}\omega_{\rho}(a)\pi(\operatorname{pr}(k))v_{W_{\pi}},&\text{if }g=ak\in\mathcal{K}\text{ with }a\in F^{\times},k\in K_{n},\\ 0,&\text{otherwise}.\end{cases}\]

We define \(W_{\rho}^{\circ}\in\mathcal{W}(\rho,\psi_{F})\) by \(W_{\rho}^{\circ}(g)=\langle\lambda_{\circ},\rho(g)f_{W_{\pi}}\rangle\) for \(g\in\operatorname{GL}_{n}(F)\). Then the support of \(W_{\rho}^{\circ}\) is contained in \(N_{n}(F)F^{\times}K_{n}\). The two Whittaker functions \(W_{\rho}^{\circ}\) and \(W_{\pi}\) are related by

\[W_{\rho}^{\circ}(g)=\omega_{\rho}(a)W_{\pi}(\operatorname{pr}(k)),\]

for any \(g=ak\in\mathcal{K}\) with \(a\in F^{\times}\) and \(k\in K_{n}\) [50, Proposition 3.9]. In particular, if \(W_{\pi}\) is the Bessel function \(\mathcal{B}_{\pi,\psi}\) over the finite field \(\mathbb{F}\), the lift \(W_{\rho}^{\circ}\) is nothing but the _Paskunas-Stevens partial Bessel function_ \(\operatorname{B}_{\rho,\psi_{F}}\) in [40, §3.4] and [42, Theorem 5.8].

Let \(G\) be a group and \(L\) a subgroup of \(G\). A representation \(\rho\) of \(G\) is called \((L,\xi)\)-_distinguished_ if

\[\operatorname{Hom}_{L}(\rho,\xi)\neq 0. \tag{2.5}\]

If \(\xi=\mathbf{1}_{L}\) is the trivial character of \(L\), we simply say that \(\rho\) is \(L\)-_distinguished_. In particular, let \(G=\operatorname{GL}_{n}(F)\times\operatorname{GL}_{n}(F)\) and \(L=\operatorname{GL}_{n}(F)\) embedded in \(G\) diagonally. We also say that \(\rho_{1}\times\rho_{2}\) has a _Jacquet-Piatetski-Shapiro-Shalika period_ if the representation \(\rho_{1}\times\rho_{2}\) of \(G\) is \((\operatorname{GL}_{n}(F),\mathbf{1}_{\operatorname{GL}_{n}(F)})\)-distinguished, which is equivalent to the condition that \(\rho_{1}\cong\check{\rho}_{2}\). Because of this property, these distinguished representations appear naturally in the theory of Rankin-Selberg \(L\)-functions, which we describe in a moment.

Let \(\rho_{1}\) and \(\rho_{2}\) be level zero supercuspidal representations of \(\operatorname{GL}_{n}(F)\) constructed from irreducible cuspidal representations \(\pi_{1}\) and \(\pi_{2}\) of \(\operatorname{GL}_{n}(\mathbb{F})\), with attached Whittaker models \(\mathcal{W}(\rho_{1},\psi_{F})\) and \(\mathcal{W}(\rho_{2},\psi_{F}^{-1})\), respectively. We take each pair of Whittaker functions \(W_{\rho_{1}}\in\mathcal{W}(\rho_{1},\psi_{F})\), \(W_{\rho_{2}}\in\mathcal{W}(\rho_{2},\psi_{F}^{-1})\), and a Schwartz-Bruhat function \(\Phi\in\mathcal{S}(F^{n})\), and form the _Jacquet-Piatetski-Shapiro-Shalika integral_ defined by

\[\Psi(s,W_{\rho_{1}},W_{\rho_{2}},\Phi)=\int_{N_{n}(F)\setminus\operatorname{GL}_{n}(F)}W_{\rho_{1}}(g)W_{\rho_{2}}(g)\Phi(e_{n}g)|\det g|^{s}\,dg.\]

The integral converges absolutely for \(\operatorname{Re}(s)\) sufficiently large and extends meromorphically to the entire complex plane.
Moreover, there exists a rational function \(\Gamma(s,\rho_{1}\times\rho_{2},\psi_{F})\in\mathbb{C}(q^{-s})\) satisfying the functional equation [35, Proposition 4.2]:

\[\Psi(1-s,\check{W}_{\rho_{1}},\check{W}_{\rho_{2}},\mathcal{F}_{\psi_{F}}(\Phi))=\Gamma(s,\rho_{1}\times\rho_{2},\psi_{F})\Psi(s,W_{\rho_{1}},W_{\rho_{2}},\Phi).\]

It is worthwhile to emphasize that the gamma factor \(\Gamma(s,\rho_{1}\times\rho_{2},\psi_{F})\) defined above differs by a sign from the traditional one defined by Jacquet-Piatetski-Shapiro-Shalika in [42, §7.1]. The _local Rankin-Selberg \(L\)-function_ \(L(s,\rho_{1}\times\rho_{2})\) is the generator of the \(\mathbb{C}[q^{\pm s}]\)-fractional ideal generated by the Jacquet-Piatetski-Shapiro-Shalika integrals \(\Psi(s,W_{\rho_{1}},W_{\rho_{2}},\Phi)\) with \(W_{\rho_{1}}\in\mathcal{W}(\rho_{1},\psi_{F})\), \(W_{\rho_{2}}\in\mathcal{W}(\rho_{2},\psi_{F}^{-1})\), and \(\Phi\in\mathcal{S}(F^{n})\), normalized to be of the form \(P(q^{-s})^{-1}\) for some \(P(X)\in\mathbb{C}[X]\) with \(P(0)=1\).

We take pairs of Whittaker functions \(W_{\rho_{1}}=W_{\rho_{1}}^{\circ}\) and \(W_{\rho_{2}}=W_{\rho_{2}}^{\circ}\), which are lifts of corresponding pairs of Whittaker functions \(W_{\pi_{1}}\) and \(W_{\pi_{2}}\) over finite fields, and we insert the test functions \(\Phi_{\circ}\), the lifts of \(\phi\), for the Schwartz-Bruhat functions \(\Phi\). With the lifting datum \((W_{\rho_{1}}^{\circ},W_{\rho_{2}}^{\circ},\Phi_{\circ})\), Jacquet-Piatetski-Shapiro-Shalika integrals reduce to Jacquet-Piatetski-Shapiro-Shalika sums, and we obtain the so-called _modified functional equation_, just as in [50, §3.3]:

\[\check{\Psi}(W_{\pi_{1}},W_{\pi_{2}},\phi)+q^{-n(1-s)}(\omega_{\rho_{1}}\omega_{\rho_{2}})^{-1}(\varpi)\mathcal{F}_{\psi}(\phi)(0)L(n(1-s),(\omega_{\rho_{1}}\omega_{\rho_{2}})^{-1})\Psi(W_{\pi_{1}},W_{\pi_{2}},\mathbf{1}_{\mathbb{F}^{n}})\\ =\Gamma(s,\rho_{1}\times\rho_{2},\psi_{F})(\Psi(W_{\pi_{1}},W_{\pi_{2}},\phi)+q^{-ns}\omega_{\rho_{1}}\omega_{\rho_{2}}(\varpi)\phi(0)L(ns,\omega_{\rho_{1}}\omega_{\rho_{2}})\Psi(W_{\pi_{1}},W_{\pi_{2}},\mathbf{1}_{\mathbb{F}^{n}})).\]

As a result of the modified functional equation, we recover the following main theorem of [49].

**Theorem 2.6**.: _Let \(\rho_{1}\) and \(\rho_{2}\) be level zero supercuspidal representations of \(\operatorname{GL}_{n}(F)\)._

1. [49, Theorem 4.1] _If_ \(\pi_{1}\not\cong\check{\pi}_{2}\)_, we have_
\[\Gamma(s,\rho_{1}\times\rho_{2},\psi_{F})=\gamma^{\star}(\pi_{1}\times\pi_{2},\psi).\]
2. [49, §4, right after Corollary 4.3] _If_ \(\pi_{1}\cong\check{\pi}_{2}\)_, we have_
\[\Gamma(s,\rho_{1}\times\rho_{2},\psi_{F})=q^{n\left(s-\frac{1}{2}\right)}(\omega_{\rho_{1}}\omega_{\rho_{2}})^{-1}(\varpi)\frac{L(n(1-s),(\omega_{\rho_{1}}\omega_{\rho_{2}})^{-1})}{L(ns,\omega_{\rho_{1}}\omega_{\rho_{2}})}.\]

### The Rankin-Selberg epsilon factor and the Gauss sum

Let \(\overline{\mathbb{F}}\) be an algebraic closure of \(\mathbb{F}\) and \(\widehat{\mathbb{F}}_{q^{n}}^{\times}\) the group of multiplicative characters of \(\mathbb{F}_{q^{n}}^{\times}\). A multiplicative character \(\alpha\in\widehat{\mathbb{F}}_{q^{n}}^{\times}\) is called _regular_ if \(\{\alpha,\alpha^{q},\cdots,\alpha^{q^{n-1}}\}\) is of size \(n\). Two characters \(\alpha\) and \(\beta\) are called _equivalent_ if \(\alpha=\beta^{q^{d}}\) for some integer \(d\). In the next paragraph, we will see that this amounts to saying that \(\alpha\) and \(\beta\) lie in the same Frobenius orbit.
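As a quick illustration of regularity, identify \(\widehat{\mathbb{F}}_{q^{n}}^{\times}\) with \(\mathbb{Z}/(q^{n}-1)\) via a fixed generator of \(\mathbb{F}_{q^{n}}^{\times}\); the Frobenius then acts on exponents by \(m\mapsto qm\). The following sketch (an illustration only; the parameters \(q=3\), \(n=2\) are arbitrary choices) enumerates the Frobenius orbits and counts the regular classes, recovering the familiar count \(q(q-1)/2\) of cuspidal classes for \(\operatorname{GL}_{2}(\mathbb{F}_{q})\):

```python
def frobenius_orbits(q, n):
    # characters of F_{q^n}^x, identified with Z/(q^n - 1); Frobenius: m -> q*m
    N, seen, orbits = q ** n - 1, set(), []
    for m in range(N):
        if m in seen:
            continue
        orbit, x = set(), m
        while x not in orbit:
            orbit.add(x)
            x = (x * q) % N
        seen |= orbit
        orbits.append(orbit)
    return orbits

# regular characters are those whose orbit {m, qm, ..., q^{n-1} m} has size n
q, n = 3, 2
regular = [o for o in frobenius_orbits(q, n) if len(o) == n]
print(len(regular))   # 3 = q(q - 1)/2, the number of cuspidal classes of GL_2(F_3)
```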
Let \(\mathcal{R}_{n}(\mathbb{F}_{q})\) denote the set of equivalence classes of regular characters of \(\mathbb{F}_{q^{n}}^{\times}\). Green's parametrization [37, 3.1; 48, 3.2] gives a bijection

\[\mathcal{A}_{0}(\operatorname{GL}_{n}(\mathbb{F}))\longleftrightarrow\mathcal{R}_{n}(\mathbb{F}_{q})\]
\[\pi\longleftrightarrow\alpha.\]

For each \(d\,|\,n\), we have a norm map \(\operatorname{Nr}_{n:d}:\mathbb{F}_{q^{n}}^{\times}\to\mathbb{F}_{q^{d}}^{\times}\), which induces a dual (embedding) map \(\widehat{\operatorname{Nr}}_{n:d}:\widehat{\mathbb{F}}_{q^{d}}^{\times}\to\widehat{\mathbb{F}}_{q^{n}}^{\times}\) by assigning to \(\beta\in\widehat{\mathbb{F}}_{q^{d}}^{\times}\) the character \(\beta\circ\operatorname{Nr}_{n:d}\in\widehat{\mathbb{F}}_{q^{n}}^{\times}\). In this way, \((\widehat{\mathbb{F}}_{q^{n}}^{\times})_{n\in\mathbb{N}}\) with the embedding maps \((\widehat{\operatorname{Nr}}_{n:d})_{d\,|\,n}\) forms a direct system. We denote by \(\Omega:=\varinjlim\widehat{\mathbb{F}}_{q^{n}}^{\times}\) its direct limit.

Let \(W_{F}\) be the Weil group of \(F\), \(I_{F}\) the inertia subgroup, and \(P_{F}\) the wild inertia subgroup. Then \(W_{F}\cong I_{F}\rtimes\langle\operatorname{Fr}\rangle\), where \(\operatorname{Fr}\in\operatorname{Gal}(\overline{\mathbb{F}}/\mathbb{F})\) is the geometric Frobenius automorphism given by \(\operatorname{Fr}(x^{q})=x\) for every \(x\in\overline{\mathbb{F}}\). The Frobenius map \(\operatorname{Fr}\) acts on \(\Omega\) via \(\operatorname{Fr}\cdot\beta=\beta^{q}\). We identify \(\widehat{\mathbb{F}}_{q^{n}}^{\times}\) with the subgroup \(\Omega_{n}:=\{\beta\in\Omega\,|\,\operatorname{Fr}^{n}\cdot\beta=\beta\}\) of \(\Omega\). A Galois orbit is a set of the form \(\mathcal{O}=\mathcal{O}(\beta):=\{\operatorname{Fr}^{i}\cdot\beta\,|\,i\in\mathbb{Z}\}\) for \(\beta\in\Omega\). Given a Galois orbit \(\mathcal{O}\), we define its _degree_ \(\deg(\mathcal{O})\) to be the cardinality of \(\mathcal{O}\). Then for \(\beta\in\mathcal{O}\), we have \(\beta\in\Omega_{\deg(\mathcal{O})}\). We denote by \(\operatorname{Fr}\backslash\Omega\) the set of Galois orbits.

Let \(\psi_{n}:=\psi\circ\operatorname{Tr}_{\mathbb{F}_{q^{n}}/\mathbb{F}}\). We define the _Gauss sum_ \(\tau(\alpha,\psi_{n})\) by

\[\tau(\alpha,\psi_{n}):=-\sum_{x\in\mathbb{F}_{q^{n}}^{\times}}\alpha^{-1}(x)\psi_{n}(x).\]

Let \(\varphi:W_{F}\to\operatorname{GL}(V)\) be an \(n\)-dimensional Frobenius semisimple representation of the Weil group \(W_{F}\). The representation \(\varphi\) is said to be _unramified_ (resp. _tamely ramified_) if \(\ker\varphi\) contains \(I_{F}\) (resp. \(P_{F}\)). We let \(r\) be an operation on Frobenius semisimple representations of \(W_{F}\) that preserves tame ramification. In particular, we take \(r\) to be the identity operation \(id\), the tensor product \(\otimes\), the twisted tensor product \(\operatorname{As}\) (known as an _Asai_ representation), or the exterior square \(\wedge^{2}\). In the spirit of Deligne [13, §4 and §5], \(\varepsilon(s,r(\varphi),\psi_{F})\) and \(\varepsilon_{0}(r(\varphi),\psi_{F})\) are related by

\[\varepsilon(s,r(\varphi),\psi_{F})=\varepsilon_{0}(r(\varphi),\psi_{F})\det\left(-\operatorname{Fr},r(V)^{I_{F}}\right)^{-1}q^{\left(\dim r(V)^{I_{F}}\right)s}. \tag{2.6}\]

Following the literature in [16, 48], we set \(\mathcal{G}^{t}(\operatorname{GL}_{n}(F))\) to be the set of isomorphism classes of tamely ramified representations of \(W_{F}\) of degree \(n\).
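Before relating these factors to the correspondence, here is a quick numerical check of the Gauss sum and of Theorems 2.4 and 2.5 in the case \(n=1\), where the Bessel function of a character is the character itself and (2.3) reads \(\gamma^{\star}(\alpha\times\beta,\psi)=q^{-1/2}\sum_{g\in\mathbb{F}^{\times}}\alpha(g)\beta(g)\psi(g^{-1})\). The sketch below is an illustration only, under the assumed choices \(q=7\), primitive root \(3\), and \(\psi(a)=e^{2\pi ia/7}\); characters of \(\mathbb{F}_{7}^{\times}\) are written \(\chi_{m}\), and \(\pi_{1}\cong\check{\pi}_{2}\) corresponds to \(m_{1}+m_{2}\equiv 0\pmod{q-1}\). It also confirms the Gauss sum expression of Theorem 2.9 below for \(n=1\).

```python
import cmath

p, g0 = 7, 3                       # F_7 with primitive root 3 (illustrative)
log = {pow(g0, k, p): k for k in range(p - 1)}          # discrete log table
psi = lambda a: cmath.exp(2j * cmath.pi * (a % p) / p)  # additive character
chi = lambda m: (lambda x: cmath.exp(2j * cmath.pi * log[x % p] * m / (p - 1)))

def gamma_star(m1, m2):
    # the n = 1 case of (2.3): q^{-1/2} sum_g chi_{m1}(g) chi_{m2}(g) psi(g^{-1})
    a, b = chi(m1), chi(m2)
    return p ** -0.5 * sum(a(x) * b(x) * psi(pow(x, -1, p)) for x in range(1, p))

tau = lambda m: -sum(chi(m)(x).conjugate() * psi(x) for x in range(1, p))

print(abs(gamma_star(1, 2)))                       # 1.0, Theorem 2.5 (1)
print(gamma_star(1, p - 2))                        # -q^{-1/2} = -0.3779..., Theorem 2.4
print(abs(gamma_star(1, 2) + p ** -0.5 * tau(3)))  # ~0: Theorem 2.9 with n = 1
```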
Since the local Langlands reciprocity map preserves the conductor and the depth of the representation [16, Theorem 7.3], the correspondence induces a natural bijective map \(\operatorname{LLC}:\mathcal{A}_{0}(\operatorname{GL}_{n}(F))\to\mathcal{G}^{t}(\operatorname{GL}_{n}(F))\) [46, Appendix A] (cf. [16, §7]). Two Weil representations are called \(I_{F}\)-_equivalent_ if their restrictions to \(I_{F}\) are equivalent, and we write \(\mathcal{G}^{t}_{I}(\operatorname{GL}_{n}(F))\) for the set of \(I_{F}\)-equivalence classes of tamely ramified \(n\)-dimensional representations of \(W_{F}\). Long before the local Langlands correspondence was established, Macdonald [51, Theorem 3.1] had already obtained a canonical bijection \(\mathcal{M}\) from \(\mathcal{A}_{0}(\operatorname{GL}_{n}(\mathbb{F}))\) to \(\mathcal{G}^{t}_{I}(\operatorname{GL}_{n}(F))\). Hence we get a diagram

\[\begin{array}{ccc}\mathcal{A}_{0}(\operatorname{GL}_{n}(F))&\stackrel{\operatorname{LLC}}{\longrightarrow}&\mathcal{G}^{t}(\operatorname{GL}_{n}(F))\\ \downarrow{\scriptstyle p_{1}}&&\downarrow{\scriptstyle p_{2}}\\ \mathcal{A}_{0}(\operatorname{GL}_{n}(\mathbb{F}))&\stackrel{\mathcal{M}}{\longrightarrow}&\mathcal{G}^{t}_{I}(\operatorname{GL}_{n}(F))\end{array}\]

where \(p_{1}\) is the projection map induced from (2.4), and \(p_{2}\) is the canonical projection map sending a representation to its \(I_{F}\)-equivalence class. In particular, it is a consequence of [46, Appendix A] that the above diagram is commutative. Composing the Macdonald correspondence \(\mathcal{M}\) with Green's parametrization then yields a bijection between \(\mathcal{G}^{t}_{I}(\operatorname{GL}_{n}(F))\) and \(\mathcal{R}_{n}(\mathbb{F}_{q})\), which by abuse of terminology we again refer to as Green's parametrization.

**Theorem 2.7**.: _[_51_, Theorem 4.2]_ _Let \(\varphi_{1}\) and \(\varphi_{2}\) be \(n\)-dimensional tamely ramified representations of \(W_{F}\). Then_

\[\varepsilon_{0}(\varphi_{1}\otimes\varphi_{2},\psi_{F})=(-1)^{n}q^{-\frac{n^{2}}{2}}\prod_{i=0}^{n-1}\tau(\alpha\beta^{q^{i}},\psi_{n}),\]

_where \(\alpha\) and \(\beta\) are regular characters of \(\mathbb{F}_{q^{n}}^{\times}\) corresponding to \(\varphi_{1}\) and \(\varphi_{2}\), respectively, via Green's parametrization._

Rankin-Selberg \(\gamma\)-factors and tensor product \(\varepsilon_{0}\)-factors over finite fields are compatible with the Macdonald correspondence.

**Proposition 2.8**.: _[_52_, Theorem 2.18]_ _Let \(\pi_{1}(\varphi_{1})\) and \(\pi_{2}(\varphi_{2})\) be irreducible cuspidal representations of \(\operatorname{GL}_{n}(\mathbb{F})\) associated to \(n\)-dimensional tamely ramified representations \(\varphi_{1}\) and \(\varphi_{2}\) of \(W_{F}\) via the Macdonald correspondence. Then we have_

\[\gamma^{\star}(\pi_{1}(\varphi_{1})\times\pi_{2}(\varphi_{2}),\psi)=\omega_{\pi_{2}}^{n-1}(-1)\varepsilon_{0}(\varphi_{1}\otimes\varphi_{2},\psi_{F}).\]

As a corollary of Theorem 2.7 and Proposition 2.8, we obtain a product formula for \(\gamma^{\star}(\pi_{1}\times\pi_{2},\psi)\) in terms of Gauss sums.

**Theorem 2.9** (Gauss sum).: _Let \(\pi_{1}\) and \(\pi_{2}\) be irreducible cuspidal representations of \(\operatorname{GL}_{n}(\mathbb{F})\). Let \(\alpha\) and \(\beta\) be regular characters of \(\mathbb{F}_{q^{n}}^{\times}\) corresponding to \(\pi_{1}\) and \(\pi_{2}\), respectively, via Green's parametrization. Then we have_

\[\gamma^{\star}(\pi_{1}\times\pi_{2},\psi)=\omega_{\pi_{2}}^{n-1}(-1)\cdot(-1)^{n}q^{-\frac{n^{2}}{2}}\prod_{i=0}^{n-1}\tau(\alpha\beta^{q^{i}},\psi_{n}).\]

### Deligne-Kazhdan close field theory

We turn our attention to the Deligne-Kazhdan theory of close local fields. Two non-archimedean local fields \(F\) and \(F^{\prime}\) are _\(m\)-close_ if \(\mathfrak{o}_{F}/\mathfrak{p}_{F}^{m}\cong\mathfrak{o}_{F^{\prime}}/\mathfrak{p}_{F^{\prime}}^{m}\).
For example, the fields \(\mathbb{F}_{p}((t))\) and \(\mathbb{Q}_{p}(p^{1/m})\) are \(m\)-close. We follow the elaboration of Deligne's theory in [16, §2.1] and [17, §6.3]. If \(F\) and \(F^{\prime}\) are \(1\)-close, Deligne (cf. [16, §2.1]) gave a bijection:

\[\{\text{Isomorphism classes of Frobenius semisimple representations }\varphi\text{ of }W_{F}\text{ trivial on }P_{F}\} \tag{2.7}\]
\[\stackrel{\mathrm{Del}}{\longleftrightarrow}\{\text{Isomorphism classes of Frobenius semisimple representations }\varphi^{\prime}\text{ of }W_{F^{\prime}}\text{ trivial on }P_{F^{\prime}}\}.\]

The elements \(\varphi\) and \(\varphi^{\prime}\) are nothing but tamely ramified representations. The triplet \((F,\varphi,\psi_{F})\) is said to be _\(\operatorname{Del}\)-associated to \((F^{\prime},\varphi^{\prime},\psi^{\prime}_{F^{\prime}})\)_ if

* \(F\) and \(F^{\prime}\) are \(1\)-close;
* \(\varphi^{\prime}=\operatorname{Del}(\varphi)\);
* a character \(\psi^{\prime}_{F^{\prime}}\) of \(F^{\prime}\) satisfies \(\operatorname{cond}(\psi^{\prime}_{F^{\prime}})=\mathfrak{p}_{F^{\prime}}\) and the character induced by \(\psi^{\prime}_{F^{\prime}}\) on \(\mathfrak{o}_{F^{\prime}}/\mathfrak{p}_{F^{\prime}}\) coincides with that induced by \(\psi_{F}\) on \(\mathfrak{o}_{F}/\mathfrak{p}_{F}\) under the isomorphism implicit in (a).

The analogous isomorphism of Deligne on the analytic side over close local fields has been studied by Kazhdan [16, §2.3]. We provide a revamped version of the Kazhdan isomorphism [17, §6.2], which can be directly verified from [50, Theorem 3.5]:

\[\left\{\begin{array}{c}\text{Level zero supercuspidal}\\ \text{representations }(\rho,V_{\rho})\text{ of }\operatorname{GL}_{n}(F)\end{array}\right\}\stackrel{\text{Kaz}}{\longleftrightarrow}\left\{\begin{array}{c}\text{Level zero supercuspidal}\\ \text{representations }(\rho^{\prime},V^{\prime}_{\rho^{\prime}})\text{ of }\operatorname{GL}_{n}(F^{\prime})\end{array}\right\}, \tag{2.8}\]

where \(\rho\cong\operatorname{c-Ind}_{F^{\times}\operatorname{GL}_{n}(\mathfrak{o}_{F})}^{\operatorname{GL}_{n}(F)}\widetilde{\mu}\) and \(\rho^{\prime}\cong\operatorname{c-Ind}_{F^{\prime\times}\operatorname{GL}_{n}(\mathfrak{o}_{F^{\prime}})}^{\operatorname{GL}_{n}(F^{\prime})}\widetilde{\mu}^{\prime}\) under the isomorphism "Kaz" satisfy

1. \(\omega_{\rho}(\varpi_{F})=\omega_{\rho^{\prime}}(\varpi_{F^{\prime}})\);
2. \(\mu:=\widetilde{\mu}\upharpoonright_{\operatorname{GL}_{n}(\mathfrak{o}_{F})}\) and \(\mu^{\prime}:=\widetilde{\mu}^{\prime}\upharpoonright_{\operatorname{GL}_{n}(\mathfrak{o}_{F^{\prime}})}\) are inflations of a common irreducible cuspidal representation \(\pi\) via the canonical projections:
\[\left(\operatorname{GL}_{n}(\mathfrak{o}_{F}),\mu\right)\xrightarrow{\text{mod }\mathfrak{p}_{F}}\left(\operatorname{GL}_{n}(\mathbb{F}_{q}),\pi\right)\xleftarrow{\text{mod }\mathfrak{p}_{F^{\prime}}}\left(\operatorname{GL}_{n}(\mathfrak{o}_{F^{\prime}}),\mu^{\prime}\right).\]

We say that the triplet \((F,\rho,\psi_{F})\) is _Kaz-associated to \((F^{\prime},\rho^{\prime},\psi^{\prime}_{F^{\prime}})\)_ if

1. \(F\) and \(F^{\prime}\) are \(1\)-close;
2. \(\rho^{\prime}=\operatorname{Kaz}(\rho)\);
3. a character \(\psi^{\prime}_{F^{\prime}}\) of \(F^{\prime}\) satisfies \(\operatorname{cond}(\psi^{\prime}_{F^{\prime}})=\mathfrak{p}_{F^{\prime}}\) and the character induced by \(\psi^{\prime}_{F^{\prime}}\) on \(\mathfrak{o}_{F^{\prime}}/\mathfrak{p}_{F^{\prime}}\) coincides with that induced by \(\psi_{F}\) on \(\mathfrak{o}_{F}/\mathfrak{p}_{F}\) under the isomorphism implicit in (a).
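The example at the start of this subsection is sharp: \(\mathbb{Q}_{p}\) and \(\mathbb{F}_{p}((t))\) are \(1\)-close but not \(2\)-close, since \(\mathbb{Z}/p^{2}\) and \(\mathbb{F}_{p}[t]/(t^{2})\) already differ as rings. A trivial check (illustrative values only):

```python
p, m = 5, 2
# o_F / p_F^m is Z/p^m for Q_p and F_p[t]/(t^m) for F_p((t)); for m = 1 both
# reduce to F_p. For m >= 2 the additive order of 1 already separates them:
print(p ** m)   # 25: order of 1 in Z/p^2
print(p)        # 5:  order of 1 in F_p[t]/(t^2)
```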
Let \(\pi_{1}\) be an irreducible cuspidal representation of \(\operatorname{GL}_{n}(\mathbb{F})\) and \(\pi_{2}\) an irreducible cuspidal representation of \(\operatorname{GL}_{r}(\mathbb{F})\) with \(n>r\). Then there exists a complex number \(\gamma(\pi_{1}\times\pi_{2},\psi)\in\mathbb{C}\) such that

\[\gamma(\pi_{1}\times\pi_{2},\psi)\sum_{g\in N_{r}(\mathbb{F})\setminus\operatorname{GL}_{r}(\mathbb{F})}W_{\pi_{1}}\begin{pmatrix}g&0\\ 0&1_{n-r}\end{pmatrix}W_{\pi_{2}}(g)=\sum_{g\in N_{r}(\mathbb{F})\setminus\operatorname{GL}_{r}(\mathbb{F})}W_{\pi_{1}}\begin{pmatrix}0&1_{n-r}\\ g&0\end{pmatrix}W_{\pi_{2}}(g),\]

for all \(W_{\pi_{1}}\in\mathcal{W}(\pi_{1},\psi)\) and \(W_{\pi_{2}}\in\mathcal{W}(\pi_{2},\psi^{-1})\) [40, Theorem 3.4]. Let \(\rho_{1}\) be a level zero supercuspidal representation of \(\operatorname{GL}_{n}(F)\) associated to \(\pi_{1}\) and \(\rho_{2}\) a level zero supercuspidal representation of \(\operatorname{GL}_{r}(F)\) associated to \(\pi_{2}\), with \(n>r\). Let \(\Gamma(s,\rho_{1}\times\rho_{2},\psi_{F})\) denote the Rankin-Selberg \(\gamma\)-factor defined by Jacquet, Piatetski-Shapiro, and Shalika (cf. [40, Theorem 3.8]).

**Lemma 2.10**.: _For \((F,\rho_{i},\psi_{F})\) that is \(\operatorname{Kaz}\)-associated to \((F^{\prime},\rho^{\prime}_{i},\psi^{\prime}_{F^{\prime}})\) with \(i=1,2\), we have_

\[\Gamma(s,\rho_{1}\times\rho_{2},\psi_{F})=\Gamma(s,\rho^{\prime}_{1}\times\rho^{\prime}_{2},\psi^{\prime}_{F^{\prime}}).\]

Proof.: With the aid of [40, Theorem 3.11], we can relate gamma factors for a pair of level zero supercuspidal representations to those for the corresponding cuspidal representations over finite fields:

\[\omega^{n-1}_{\rho_{2}}(-1)\Gamma(s,\rho_{1}\times\rho_{2},\psi_{F})=\operatorname{Vol}(\mathfrak{p}_{F})^{r(n-r-1)}\gamma(\pi_{1}\times\pi_{2},\psi).\]

Since we have normalized the Haar measure on \(F\) so that the volume of \(\mathfrak{o}_{F}\) is \(q^{1/2}\) (and similarly for \(\mathfrak{o}_{F^{\prime}}\)), we have \(\operatorname{Vol}(\mathfrak{p}_{F})=\operatorname{Vol}(\mathfrak{p}_{F^{\prime}})\) and the result follows. 

The assignment "LLC" is now reconciled with the Deligne-Kazhdan theory (2.7) and (2.8).

**Proposition 2.11**.: _We assume that the non-archimedean local fields \(F\) and \(F^{\prime}\) are \(1\)-close. Then the following diagram commutes:_

\[\begin{array}{ccc}\mathcal{A}_{0}(\operatorname{GL}_{n}(F))&\stackrel{\operatorname{LLC}}{\longrightarrow}&\mathcal{G}^{t}(\operatorname{GL}_{n}(F))\\ \downarrow{\scriptstyle\operatorname{Kaz}}&&\downarrow{\scriptstyle\operatorname{Del}}\\ \mathcal{A}_{0}(\operatorname{GL}_{n}(F^{\prime}))&\stackrel{\operatorname{LLC}}{\longrightarrow}&\mathcal{G}^{t}(\operatorname{GL}_{n}(F^{\prime}))\end{array}\]

Proof.: We prove this proposition by induction on \(n\). When \(n=1\), the Deligne-Kazhdan philosophy is compatible with local class field theory [16, Property (i) of §2.1]. Now we assume that Proposition 2.11 holds for \(1\leq d\leq n-1\). Let \(\rho_{1}\in\mathcal{A}_{0}(\operatorname{GL}_{n}(F))\) and \(\sigma\in\mathcal{A}_{0}(\operatorname{GL}_{d}(F))\) with \(1\leq d\leq n-1\). Let \(\varphi_{\rho_{1}}\) and \(\varphi_{\sigma}\) denote the local Langlands parameters attached to \(\rho_{1}\) and \(\sigma\), respectively. We put \(\rho_{1}^{\prime}=\operatorname{Kaz}(\rho_{1})\) and \(\sigma^{\prime}=\operatorname{Kaz}(\sigma)\). Writing \(\rho_{2}=\operatorname{LLC}^{-1}\circ\operatorname{Del}^{-1}(\varphi_{\rho_{1}^{\prime}})\), the corresponding local Langlands parameter \(\varphi_{\rho_{2}}\) is \(\operatorname{Del}\)-associated to \(\varphi_{\rho_{1}^{\prime}}\). In view of [16, (d) of Theorem 7.1] along with [16, Property (i) of §2.1] again, \(\rho_{1}\) and \(\rho_{2}\) share the same central character \(\omega_{\rho_{1}}=\omega_{\rho_{2}}\).
By the induction hypothesis, we have

\[\sigma=\operatorname{LLC}^{-1}\circ\operatorname{Del}^{-1}(\varphi_{\sigma^{\prime}}).\]

This leads us to a chain of identities:

\[\Gamma(s,\rho_{1}\times\sigma,\psi_{F})=\Gamma(s,\rho_{1}^{\prime}\times\sigma^{\prime},\psi_{F^{\prime}}^{\prime})=\Gamma(s,\varphi_{\rho_{1}^{\prime}}\otimes\varphi_{\sigma^{\prime}},\psi_{F^{\prime}}^{\prime})\]
\[\qquad\qquad=\Gamma(s,\varphi_{\rho_{2}}\otimes\operatorname{Del}^{-1}(\varphi_{\sigma^{\prime}}),\psi_{F})=\Gamma(s,\rho_{2}\times\operatorname{LLC}^{-1}\circ\operatorname{Del}^{-1}(\varphi_{\sigma^{\prime}}),\psi_{F})=\Gamma(s,\rho_{2}\times\sigma,\psi_{F})\]

for all \(\sigma\in\mathcal{A}_{0}(\operatorname{GL}_{d}(F))\) and \(1\leq d\leq n-1\). Here, the second and fourth equalities are part of the local Langlands correspondence [16, (b) of Theorem 7.1], the third equality follows from [16, Property (iii) of §2.1] due to Deligne, and the first equality is clear from Lemma 2.10. Then by the local converse theorem for level zero supercuspidal representations [49, Theorem 5.3], we conclude that \(\rho_{1}\cong\rho_{2}\), from which the desired commutative diagram follows. 

## 3. The Asai Gamma Factor

### The Flicker sum

Let \(\mathbb{E}=\mathbb{F}_{q^{2}}\). We fix a non-trivial additive character \(\psi_{\mathbb{E}/\mathbb{F}}\) of \(\mathbb{E}\) such that \(\psi_{\mathbb{E}/\mathbb{F}}\upharpoonright_{\mathbb{F}}=\mathbf{1}_{\mathbb{F}}\). It is worth pointing out that \(\psi_{\mathbb{E}/\mathbb{F}}\) can be constructed starting from a non-trivial additive character \(\psi\) (cf. [4, §2; 38, §1]): we define \(\psi_{\mathbb{E}/\mathbb{F}}\) by \(\psi_{\mathbb{E}/\mathbb{F}}(x)=\psi(\operatorname{Tr}_{\mathbb{E}/\mathbb{F}}(\Delta x))\), where \(\Delta\in\mathbb{E}^{\times}\) is of trace zero. Let \(c:x\mapsto\overline{x}\) be the nontrivial Galois element in \(\operatorname{Gal}(\mathbb{E}/\mathbb{F})\).

Let \(\pi\) be an irreducible cuspidal representation of \(\operatorname{GL}_{n}(\mathbb{E})\) with its associated Whittaker model \(\mathcal{W}(\pi,\psi_{\mathbb{E}/\mathbb{F}})\). For \(W_{\pi}\in\mathcal{W}(\pi,\psi_{\mathbb{E}/\mathbb{F}})\) and \(\phi\in\mathcal{S}(\mathbb{F}^{n})\), we define the _Flicker sum_

\[I(W_{\pi},\phi):=\sum_{g\in N_{n}(\mathbb{F})\setminus\operatorname{GL}_{n}(\mathbb{F})}W_{\pi}(g)\phi(e_{n}g). \tag{3.1}\]

Similarly, we define the _dual Flicker sum_

\[\check{I}(W_{\pi},\phi):=\sum_{g\in N_{n}(\mathbb{F})\setminus\operatorname{GL}_{n}(\mathbb{F})}\check{W}_{\pi}(g)\mathcal{F}_{\psi}(\phi)(e_{n}g).\]

**Lemma 3.1**.: _Let \(\pi\) be an irreducible cuspidal representation of \(\operatorname{GL}_{n}(\mathbb{E})\). Then we have_

\[\check{I}(W_{\pi},\phi)=\sum_{g\in N_{n}(\mathbb{F})\setminus\operatorname{GL}_{n}(\mathbb{F})}W_{\pi}(g)\mathcal{F}_{\psi}(\phi)(e_{1}{}^{t}g^{-1}).\]

Proof.: We insert the definition (2.1). Performing the change of variables \(g\mapsto w_{n}{}^{t}g^{-1}w_{n}\) and then \(g\mapsto gw_{n}\) yields the result. 

We now aim to prove the functional equation \(\gamma(\pi,\operatorname{As},\psi)I(W_{\pi},\phi)=\check{I}(W_{\pi},\phi)\) satisfied by the Flicker sum \(I(W_{\pi},\phi)\). This allows us to define the _Asai gamma factor_ \(\gamma(\pi,\operatorname{As},\psi)\) of an irreducible cuspidal representation \(\pi\) of \(\operatorname{GL}_{n}(\mathbb{E})\).

**Theorem 3.2**.: _Let \(\pi\) be an irreducible cuspidal representation of \(\operatorname{GL}_{n}(\mathbb{E})\).
For every \(W_{\pi}\in\mathcal{W}(\pi,\psi_{\mathbb{E}/\mathbb{F}})\) and for any \(\phi\in\mathcal{S}_{0}(\mathbb{F}^{n})\), there exists a complex number \(\gamma(\pi,\operatorname{As},\psi)\) satisfying_

\[\gamma(\pi,\operatorname{As},\psi)\sum_{g\in N_{n}(\mathbb{F})\setminus\operatorname{GL}_{n}(\mathbb{F})}W_{\pi}(g)\phi(e_{n}g)=\sum_{g\in N_{n}(\mathbb{F})\setminus\operatorname{GL}_{n}(\mathbb{F})}W_{\pi}(g)\mathcal{F}_{\psi}(\phi)(e_{1}{}^{t}g^{-1}).\]

Proof.: It can be verified from Lemma 3.1 that \(L_{1}:(W_{\pi},\phi)\mapsto I(W_{\pi},\phi)\) and \(L_{2}:(W_{\pi},\phi)\mapsto\check{I}(W_{\pi},\phi)\) correspond to elements of \(\operatorname{Hom}_{\operatorname{GL}_{n}(\mathbb{F})}(\pi\otimes\mathcal{S}_{0}(\mathbb{F}^{n}),\mathbf{1}_{\operatorname{GL}_{n}(\mathbb{F})})\). It is then enough to show that such forms are unique up to scalars, that is to say, \(\dim\operatorname{Hom}_{\operatorname{GL}_{n}(\mathbb{F})}(\pi\otimes\mathcal{S}_{0}(\mathbb{F}^{n}),\mathbf{1}_{\operatorname{GL}_{n}(\mathbb{F})})\leq 1\). We identify \(P_{n}(\mathbb{F})\backslash\operatorname{GL}_{n}(\mathbb{F})\) with \(\mathbb{F}^{n}-\{0\}\), and then employ the Frobenius reciprocity law to find isomorphisms

\[\operatorname{Hom}_{\operatorname{GL}_{n}(\mathbb{F})}(\pi\otimes\mathcal{S}_{0}(\mathbb{F}^{n}),\mathbf{1}_{\operatorname{GL}_{n}(\mathbb{F})})\cong\operatorname{Hom}_{\operatorname{GL}_{n}(\mathbb{F})}(\pi\restriction_{\operatorname{GL}_{n}(\mathbb{F})}\otimes\operatorname{Ind}_{P_{n}(\mathbb{F})}^{\operatorname{GL}_{n}(\mathbb{F})}(\mathbf{1}),\mathbf{1}_{\operatorname{GL}_{n}(\mathbb{F})})\]
\[\cong\operatorname{Hom}_{P_{n}(\mathbb{F})}(\pi\restriction_{P_{n}(\mathbb{F})},\mathbf{1}_{P_{n}(\mathbb{F})}).\]

The proof that the space \(\operatorname{Hom}_{P_{n}(\mathbb{F})}(\pi\restriction_{P_{n}(\mathbb{F})},\mathbf{1}_{P_{n}(\mathbb{F})})\) is at most one-dimensional is then parallel to that over nonarchimedean local fields [1, Theorem 1.1], relying on the theory of Bernstein-Zelevinsky derivatives for finite fields established in [15, §4]. 

In the course of the proof of the preceding theorem, we obtain the following multiplicity one result as a byproduct, which is used repeatedly in the proofs of Proposition 3.6 and Lemma 3.8.

**Corollary 3.3** (Multiplicity one result).: _Let \(\pi\) be an irreducible cuspidal representation of \(\operatorname{GL}_{n}(\mathbb{E})\). Then we have_

\[\dim\operatorname{Hom}_{P_{n}(\mathbb{F})}(\pi\restriction_{P_{n}(\mathbb{F})},\mathbf{1}_{P_{n}(\mathbb{F})})\leq 1.\]

We express \(\gamma(\pi,\operatorname{As},\psi)\) in terms of the Bessel function associated with \(\pi\).

**Theorem 3.4**.: _Let \(\pi\) be an irreducible cuspidal representation of \(\operatorname{GL}_{n}(\mathbb{E})\). Then we have_

\[\gamma(\pi,\operatorname{As},\psi)=q^{-\frac{n}{2}}\sum_{g\in N_{n}(\mathbb{F})\backslash\operatorname{GL}_{n}(\mathbb{F})}\mathcal{B}_{\pi,\psi_{\mathbb{E}/\mathbb{F}}}(g)\psi(e_{1}{}^{t}g^{-1}\ {}^{t}e_{n}). \tag{3.2}\]

_In particular, we have \(\gamma(\check{\pi},\operatorname{As},\psi^{-1})=\overline{\gamma(\pi,\operatorname{As},\psi)}\)._

Proof.: We take \(W_{\pi}=\mathcal{B}_{\pi,\psi_{\mathbb{E}/\mathbb{F}}}\) and \(\phi\) to be the indicator function \(\delta_{e_{n}}\) of \(e_{n}\). It can be seen from [38, Lemma 2.7] that \(I(\mathcal{B}_{\pi,\psi_{\mathbb{E}/\mathbb{F}}},\delta_{e_{n}})=1\) and \(\mathcal{F}_{\psi}(\delta_{e_{n}})(y)=q^{-\frac{n}{2}}\psi(e_{n}{}^{t}y)\), from which (3.2) follows.
We now take the complex conjugate to reach

\[\overline{\gamma(\pi,\operatorname{As},\psi)}=q^{-\frac{n}{2}}\sum_{g\in N_{n}(\mathbb{F})\backslash\operatorname{GL}_{n}(\mathbb{F})}\mathcal{B}_{\check{\pi},\psi_{\mathbb{E}/\mathbb{F}}^{-1}}(g)\psi^{-1}(e_{1}{}^{t}g^{-1}\ {}^{t}e_{n})=\gamma(\check{\pi},\operatorname{As},\psi^{-1}).\qed\]

The following general lemma plays a crucial role in evaluating sums of Bessel functions against additive characters later on.

**Lemma 3.5**.: _[_47_, Lemma A.1]_ _Let \(G\) be a finite group and \(L\) a subgroup of \(G\). Suppose that \(L\) is a semidirect product of the form \(L=Z\rtimes\operatorname{GL}_{n}(\mathbb{F})\). Let \(\Xi:L\to\mathbb{C}^{\times}\) be a character which is trivial on \(\operatorname{GL}_{n}(\mathbb{F})\). Let \(\Pi\) be an irreducible representation of \(G\) satisfying_

* \(\dim\operatorname{Hom}_{L}(\Pi\restriction_{L},\Xi)=1\)_._
* \(\dim\operatorname{Hom}_{Z\rtimes P_{n}(\mathbb{F})}(\Pi\restriction_{Z\rtimes P_{n}(\mathbb{F})},\Xi)=1\)_._
* _There exist a linear functional_ \(\Lambda\in\operatorname{Hom}_{N_{n}(\mathbb{F})}(\Pi\restriction_{N_{n}(\mathbb{F})},\mathbf{1}_{N_{n}(\mathbb{F})})\) _and a vector_ \(v_{0}\in V_{\Pi}\) _such that_
\[\sum_{p\in N_{n}(\mathbb{F})\backslash P_{n}(\mathbb{F})}\sum_{z\in Z}\Lambda(\Pi(zp)v_{0})\Xi^{-1}(z)=1.\]

_Then we have_

\[\sum_{g\in N_{n}(\mathbb{F})\backslash\operatorname{GL}_{n}(\mathbb{F})}\sum_{z\in Z}\Lambda(\Pi(zg)v_{0})\Xi^{-1}(z)\psi^{-1}(e_{n}g\ {}^{t}e_{1})=-1.\]

A non-zero vector \(v\in V_{\pi}\) is said to be a _Flicker-Rallis vector_ if \(\pi(g)v=v\) for all \(g\in\operatorname{GL}_{n}(\mathbb{F})\). Using [44, Proposition 5.1] in combination with [44, Corollary 2.4], it is noteworthy that \(\pi\) does not admit a Flicker-Rallis vector whenever \(n=2m\) is even. (Refer to [38, Theorem 3.9] for \(n=2\).)

**Proposition 3.6**.: _Let \(\pi\) be an irreducible cuspidal representation of \(\operatorname{GL}_{n}(\mathbb{E})\). Suppose that \(n=2m+1\) and \(\pi\) admits a Flicker-Rallis vector. Then we have_

\[\gamma(\pi,\operatorname{As},\psi)=\gamma(\check{\pi},\operatorname{As},\psi^{-1})=-q^{-\frac{n}{2}}.\]

Proof.: Thanks to Corollary 3.3, we can apply Lemma 3.5 to \(V_{\Pi}=\mathcal{W}(\pi,\psi_{\mathbb{E}/\mathbb{F}})\), \(G=\operatorname{GL}_{n}(\mathbb{E})\), \(L=\operatorname{GL}_{n}(\mathbb{F})\), \(Z=\{1_{n}\}\), and \(\Xi=\mathbf{1}_{L}\) the trivial character. We define a non-trivial linear functional \(\Lambda\in\operatorname{Hom}_{N_{n}(\mathbb{F})}(\pi\upharpoonright_{N_{n}(\mathbb{F})},\mathbf{1}_{N_{n}(\mathbb{F})})\) on \(\mathcal{W}(\pi,\psi_{\mathbb{E}/\mathbb{F}})\) by \(\Lambda(W_{\pi})=W_{\pi}(1_{n})\).
By choosing \(W_{\pi}=\mathcal{B}_{\pi,\psi_{\mathbb{E}/\mathbb{F}}}\), it is clear from [38, Lemma 2.7] that

\[\sum_{p\in N_{n}(\mathbb{F})\setminus P_{n}(\mathbb{F})}\Lambda(\pi(p)\mathcal{B}_{\pi,\psi_{\mathbb{E}/\mathbb{F}}})=\sum_{p\in N_{n}(\mathbb{F})\setminus P_{n}(\mathbb{F})}\mathcal{B}_{\pi,\psi_{\mathbb{E}/\mathbb{F}}}(p)=1.\]

With the aid of Theorem 3.4 coupled with Lemma 3.5 again, and then making the change of variables \(g\mapsto g^{-1}\), we find that

\[\gamma(\check{\pi},\operatorname{As},\psi^{-1})=q^{-\frac{n}{2}}\sum_{g\in N_{n}(\mathbb{F})\setminus\operatorname{GL}_{n}(\mathbb{F})}\mathcal{B}_{\check{\pi},\psi_{\mathbb{E}/\mathbb{F}}^{-1}}(g)\psi^{-1}(e_{1}{}^{t}g^{-1}\,{}^{t}e_{n})\\ =q^{-\frac{n}{2}}\sum_{g\in N_{n}(\mathbb{F})\setminus\operatorname{GL}_{n}(\mathbb{F})}\Lambda(\pi(g)\mathcal{B}_{\pi,\psi_{\mathbb{E}/\mathbb{F}}})\psi^{-1}(e_{n}g^{t}e_{1})=-q^{-\frac{n}{2}}.\]

All that remains is to take the complex conjugate to conclude from Theorem 3.4 that

\[\gamma(\pi,\operatorname{As},\psi)=\overline{\gamma(\check{\pi},\operatorname{As},\psi^{-1})}=\overline{-q^{-\frac{2m+1}{2}}}=-q^{-\frac{2m+1}{2}}.\qed\]

We end this section with functional equations for \(\gamma(\pi,\operatorname{As},\psi)\) of an irreducible cuspidal representation \(\pi\) of \(\operatorname{GL}_{n}(\mathbb{E})\).

**Theorem 3.7** (Functional equation).: _Let \(\pi\) be an irreducible cuspidal representation of \(\operatorname{GL}_{n}(\mathbb{E})\)._

1. _If_ \(\pi\) _does not admit a Flicker-Rallis vector, then we have_
\[\gamma(\pi,\operatorname{As},\psi)\gamma(\check{\pi},\operatorname{As},\psi^{-1})=1\quad\text{and}\quad|\gamma(\pi,\operatorname{As},\psi)|=1.\]
2. _If_ \(n=2m+1\) _and_ \(\pi\) _admits a Flicker-Rallis vector, then we have_
\[\gamma(\pi,\operatorname{As},\psi)\gamma(\check{\pi},\operatorname{As},\psi^{-1})=q^{-n}\quad\text{and}\quad|\gamma(\pi,\operatorname{As},\psi)|=q^{-\frac{n}{2}}.\]

Proof.: Appealing to Lemma 3.1, when \(\pi\) does not admit a Flicker-Rallis vector, the functional equation is a direct consequence of the double-duality

\[\check{I}(\check{W}_{\pi},\mathcal{F}_{\psi^{-1}}(\phi))=I(W_{\pi},\phi),\]

just as in [50, Proposition 2.12]. The rest of the assertions can be verified from Theorem 3.4 along with Proposition 3.6. 

### The Flicker-Rallis period and level zero supercuspidal representations

We set out to investigate the existence of Flicker-Rallis vectors, which characterizes the non-vanishing of the Flicker sum.

**Lemma 3.8**.: _Let \(\pi\) be an irreducible cuspidal representation of \(\operatorname{GL}_{n}(\mathbb{E})\) with \(n=2m+1\) odd. Then \(\pi\) admits a Flicker-Rallis vector if and only if there exists \(W_{\pi}\in\mathcal{W}(\pi,\psi_{\mathbb{E}/\mathbb{F}})\) such that_

\[\sum_{g\in N_{n}(\mathbb{F})\setminus\operatorname{GL}_{n}(\mathbb{F})}W_{\pi}(g)\neq 0.\]

Proof.: We assume that \(\pi\) has a Flicker-Rallis vector. We endow \(\mathcal{W}(\pi,\psi_{\mathbb{E}/\mathbb{F}})\) with an inner product \((\cdot,\cdot)\) with respect to which \(\pi\) is unitary. We define \(W_{\mathrm{FR}}\in\mathcal{W}(\pi,\psi_{\mathbb{E}/\mathbb{F}})\) by

\[W_{\mathrm{FR}}(g)=\frac{1}{|\operatorname{GL}_{n}(\mathbb{F})|}\sum_{p\in P_{n}(\mathbb{F})}\mathcal{B}_{\pi,\psi_{\mathbb{E}/\mathbb{F}}}(gp)\]

for \(g\in\operatorname{GL}_{n}(\mathbb{E})\). Benefiting from the average, we find that \(W_{\operatorname{FR}}(gh)=W_{\operatorname{FR}}(g)\) for all \(h\in P_{n}(\mathbb{F})\).
Using the containment \(\operatorname{Hom}_{\operatorname{GL}_{n}(\mathbb{F})}(\pi\upharpoonright_{\operatorname{GL}_{n}(\mathbb{F})},\mathbf{1})\subseteq\operatorname{Hom}_{P_{n}(\mathbb{F})}(\pi\upharpoonright_{P_{n}(\mathbb{F})},\mathbf{1})\), we deduce the equality \(\operatorname{Hom}_{\operatorname{GL}_{n}(\mathbb{F})}(\pi\upharpoonright_{\operatorname{GL}_{n}(\mathbb{F})},\mathbf{1})=\operatorname{Hom}_{P_{n}(\mathbb{F})}(\pi\upharpoonright_{P_{n}(\mathbb{F})},\mathbf{1})\) from the one-dimensionality of both spaces, Corollary 3.3. In the same fashion, \(W_{\operatorname{FR}}\) produces an element \(T_{W_{\operatorname{FR}}}\in\operatorname{Hom}_{\operatorname{GL}_{n}(\mathbb{F})}(\pi\upharpoonright_{\operatorname{GL}_{n}(\mathbb{F})},\mathbf{1})\) given by \(T_{W_{\operatorname{FR}}}(W^{\prime})=(W^{\prime},W_{\operatorname{FR}})\) for \(W^{\prime}\in\mathcal{W}(\pi,\psi_{\mathbb{E}/\mathbb{F}})\), from which it follows that \(W_{\operatorname{FR}}\) is a Flicker-Rallis vector. Furthermore, the non-vanishing of the summation in question can be verified, because [38, Lemma 2.7] yields

\[\sum_{g\in N_{n}(\mathbb{F})\setminus\operatorname{GL}_{n}(\mathbb{F})}W_{\operatorname{FR}}(g)=\frac{1}{|N_{n}(\mathbb{F})|}\sum_{p\in P_{n}(\mathbb{F})}\mathcal{B}_{\pi,\psi_{\mathbb{E}/\mathbb{F}}}(p)=1.\]

Conversely, we assume that there exists \(W_{\pi}\in\mathcal{W}(\pi,\psi_{\mathbb{E}/\mathbb{F}})\) such that

\[\sum_{g\in N_{n}(\mathbb{F})\setminus\operatorname{GL}_{n}(\mathbb{F})}W_{\pi}(g)\neq 0.\]

We define \(W_{\operatorname{FR}}^{\sharp}\in\mathcal{W}(\pi,\psi_{\mathbb{E}/\mathbb{F}})\) by

\[W_{\operatorname{FR}}^{\sharp}(h)=\frac{1}{|N_{n}(\mathbb{F})|}\sum_{g\in\operatorname{GL}_{n}(\mathbb{F})}W_{\pi}(hg)\]

for \(h\in\operatorname{GL}_{n}(\mathbb{E})\). Combining

\[W_{\operatorname{FR}}^{\sharp}(1_{n})=\sum_{g\in N_{n}(\mathbb{F})\setminus\operatorname{GL}_{n}(\mathbb{F})}W_{\pi}(g)\neq 0\]

with the quasi-invariance property that \(W_{\operatorname{FR}}^{\sharp}(hh^{\prime})=W_{\operatorname{FR}}^{\sharp}(h)\) for all \(h^{\prime}\in\operatorname{GL}_{n}(\mathbb{F})\), we see that \(W_{\operatorname{FR}}^{\sharp}\) is indeed the Flicker-Rallis vector that we seek. 

Let \(E\) be the quadratic unramified extension of a nonarchimedean local field \(F\). Let \(\psi_{E/F}\) be a fixed non-trivial character of \(E\) that is trivial on \(F\). Then \(\psi_{E/F}\) will be of the form \(\psi_{E/F}(x)=\psi_{F}(\operatorname{Tr}_{E/F}(\delta x))\), where \(\delta\in E^{\times}\) is an element of trace zero. According to [3, Remark 6], \(\delta\) is in fact a unit in \(\mathfrak{o}_{E}^{\times}\) of trace zero. For the purpose of relating \(\psi_{E/F}\) to \(\psi_{\mathbb{E}/\mathbb{F}}\), we take \(\delta\) to be \(\operatorname{pr}^{-1}(\Delta)\), so that

\[\psi_{\mathbb{E}/\mathbb{F}}(\operatorname{pr}(k))=\psi_{E/F}(k)\]

for \(k\in\mathfrak{o}_{E}\). Let \(\rho\) be a level zero supercuspidal representation of \(\operatorname{GL}_{n}(E)\) constructed from an irreducible cuspidal representation \(\pi\) of \(\operatorname{GL}_{n}(\mathbb{E})\), with its attached Whittaker model \(\mathcal{W}(\rho,\psi_{E/F})\).
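To make the trace-zero datum concrete: for odd \(q\) one may take \(\Delta=\sqrt{d}\) with \(d\) a non-square in \(\mathbb{F}^{\times}\), since \((\sqrt{d})^{q}=-\sqrt{d}\). A small sketch (the choices \(p=7\), \(d=3\) are illustrative assumptions; \(\mathbb{F}_{49}\) is realized as pairs \(a+b\sqrt{3}\)) verifies that \(\Delta\) is a unit of trace zero, so that \(\psi_{\mathbb{E}/\mathbb{F}}(x)=\psi(\operatorname{Tr}_{\mathbb{E}/\mathbb{F}}(\Delta x))\) is trivial on \(\mathbb{F}\):

```python
p, d = 7, 3                    # d = 3 is a non-square mod 7, so F_49 = F_7(sqrt(3))

def mul(x, y):                 # (a + b sqrt(d)) * (c + e sqrt(d)) in F_{p^2}
    (a, b), (c, e) = x, y
    return ((a * c + b * e * d) % p, (a * e + b * c) % p)

def power(x, k):               # x^k by repeated squaring
    r = (1, 0)
    while k:
        if k & 1:
            r = mul(r, x)
        x = mul(x, x)
        k >>= 1
    return r

def trace(x):                  # Tr_{E/F}(x) = x + x^q, computed componentwise
    y = power(x, p)
    return ((x[0] + y[0]) % p, (x[1] + y[1]) % p)

Delta = (0, 1)                                          # sqrt(d)
print(power(Delta, p * p - 1) == (1, 0))                # True: Delta is a unit
print(trace(Delta))                                     # (0, 0): Tr(Delta) = 0
# Tr(Delta * x) vanishes for every x in F_7, so psi_{E/F} is trivial on F
print({trace(mul(Delta, (a, 0))) for a in range(p)})    # {(0, 0)}
```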
We take a Whittaker function \(W_{\rho}\in\mathcal{W}(\rho,\psi_{E/F})\) and a Schwartz-Bruhat function \(\Phi\in\mathcal{S}(F^{n})\), and form the _Flicker integral_ defined by

\[I(s,W_{\rho},\Phi)=\int_{N_{n}(F)\setminus\operatorname{GL}_{n}(F)}W_{\rho}(g)\Phi(e_{n}g)|\det g|^{s}\,dg.\]

The integral converges absolutely for \(\operatorname{Re}(s)\) sufficiently large and extends meromorphically to the entire complex plane. Furthermore, there exists a rational function \(\Gamma(s,\rho,\operatorname{As},\psi_{F})\in\mathbb{C}(q^{-s})\) satisfying the functional equation [2, §8]:

\[I(1-s,\check{W}_{\rho},\mathcal{F}_{\psi_{F}}(\Phi))=\Gamma(s,\rho,\operatorname{As},\psi_{F})I(s,W_{\rho},\Phi). \tag{3.3}\]

As before, the interested reader may notice that the gamma factor \(\Gamma(s,\rho,\operatorname{As},\psi_{F})\) defined above differs by a sign from the conventional one defined by Flicker in [33, Theorem 1.1]. The _local Asai \(L\)-function_ \(L(s,\rho,\operatorname{As})\) is the generator of the \(\mathbb{C}[q^{\pm s}]\)-fractional ideal of \(\mathbb{C}(q^{-s})\) generated by the family of Flicker integrals \(I(s,W_{\rho},\Phi)\) with \(W_{\rho}\in\mathcal{W}(\rho,\psi_{E/F})\) and \(\Phi\in\mathcal{S}(F^{n})\), normalized to be of the form \(P(q^{-s})^{-1}\) for some \(P(X)\in\mathbb{C}[X]\) with \(P(0)=1\).

**Proposition 3.9** (The modified functional equation).: _Let \(\pi\) be an irreducible cuspidal representation of \(\operatorname{GL}_{n}(\mathbb{E})\). Then for every \(W_{\pi}\in\mathcal{W}(\pi,\psi_{\mathbb{E}/\mathbb{F}})\), \(\phi\in\mathcal{S}(\mathbb{F}^{n})\), and \(s\in\mathbb{C}\), there exists \(\Gamma(s,\rho,\operatorname{As},\psi_{F})\) such that_

\[\check{I}(W_{\pi},\phi)+q^{-n(1-s)}\omega_{\rho}^{-1}(\varpi)\mathcal{F}_{\psi}(\phi)(0)L(n(1-s),\omega_{\rho}^{-1}\upharpoonright_{F^{\times}})I(W_{\pi},\mathbf{1}_{\mathbb{F}^{n}})\\ =\Gamma(s,\rho,\operatorname{As},\psi_{F})(I(W_{\pi},\phi)+q^{-ns}\omega_{\rho}(\varpi)\phi(0)L(ns,\omega_{\rho}\upharpoonright_{F^{\times}})I(W_{\pi},\mathbf{1}_{\mathbb{F}^{n}})).\]

Proof.: Since \(\operatorname{Supp}(W_{\rho}^{\circ})\subseteq N_{n}(E)E^{\times}\operatorname{GL}_{n}(\mathfrak{o}_{E})=\coprod_{l\in\mathbb{Z}}\varpi_{E}^{l}N_{n}(E)\operatorname{GL}_{n}(\mathfrak{o}_{E})\), for \(\operatorname{Re}(s)\gg 0\), our integral can be decomposed as an infinite series

\[I(s,W_{\rho}^{\circ},\Phi_{\circ})=\sum_{l\in\mathbb{Z}}q^{-nls}\int_{\mathfrak{o}^{\times}}\omega_{\rho}(x\varpi^{l})\int_{N_{n}(F)\cap K_{n}\setminus K_{n}}W_{\rho}^{\circ}(k)\Phi_{\circ}(e_{n}kx\varpi^{l})\,dk\,d^{\times}x.\]

With \(\Phi_{\circ}\) being a lift of \(\phi\), we have \(\Phi_{\circ}(e_{n}kx\varpi^{l})=0\) for \(l<0\), whereas \(\Phi_{\circ}(e_{n}kx\varpi^{l})=\phi(0)\) for \(l>0\).
When \(l=0\), we make the change of variables \(k\mapsto kx^{-1}\) to obtain

\[I(s,W_{\rho}^{\circ},\Phi_{\circ})=\sum_{l=1}^{\infty}q^{-nls}\omega_{\rho}(\varpi^{l})\phi(0)\int_{\mathfrak{o}^{\times}}\omega_{\rho}(x)\,d^{\times}x\int_{N_{n}(F)\cap K_{n}\setminus K_{n}}W_{\rho}^{\circ}(k)\,dk\\ +\int_{N_{n}(F)\cap K_{n}\setminus K_{n}}W_{\rho}^{\circ}(k)\Phi_{\circ}(e_{n}k)\,dk.\]

Just as in the proof of [40, Theorem 3.1], we express the integrals as the sum

\[I(s,W_{\rho}^{\circ},\Phi_{\circ})=\operatorname{Vol}(N_{n}(\mathfrak{o})(1_{n}+\mathcal{M}_{n}(\mathfrak{p})))\\ \times\left(\sum_{l=1}^{\infty}q^{-nls}\omega_{\rho}(\varpi^{l})\phi(0)\cdot\int_{\mathfrak{o}^{\times}}\omega_{\rho}(x)\,d^{\times}x\cdot I(W_{\pi},\mathbf{1}_{\mathbb{F}^{n}})+I(W_{\pi},\phi)\right).\]

The first sum becomes \(q^{-ns}\omega_{\rho}(\varpi)\phi(0)L(ns,\omega_{\rho}\upharpoonright_{F^{\times}})I(W_{\pi},\mathbf{1}_{\mathbb{F}^{n}})\) if \(\omega_{\rho}\) is unramified, while the first term vanishes if \(\omega_{\rho}\) is ramified; but it is still equal to \(q^{-ns}\omega_{\rho}(\varpi)\phi(0)L(ns,\omega_{\rho}\upharpoonright_{F^{\times}})I(W_{\pi},\mathbf{1}_{\mathbb{F}^{n}})\), since \(\omega_{\pi}\) is then non-trivial, so that \(\pi\) does not possess a non-zero Flicker-Rallis vector. This is equivalent to saying that \(I(W_{\pi},\mathbf{1}_{\mathbb{F}^{n}})=0\) by virtue of Lemma 3.8. Combining all of this together, we find

\[I(s,W_{\rho}^{\circ},\Phi_{\circ})=\operatorname{Vol}(N_{n}(\mathfrak{o})(1_{n}+\mathcal{M}_{n}(\mathfrak{p})))(I(W_{\pi},\phi)+q^{-ns}\omega_{\rho}(\varpi)\phi(0)L(ns,\omega_{\rho}\upharpoonright_{F^{\times}})I(W_{\pi},\mathbf{1}_{\mathbb{F}^{n}})).\]

Regarding the dual side, we analogously follow the proof of Lemma 3.1 to write it as

\[I(1-s,\check{W}_{\rho}^{\circ},\mathcal{F}_{\psi_{F}}(\Phi_{\circ}))=\int_{N_{n}(F)\setminus\operatorname{GL}_{n}(F)}W_{\rho}^{\circ}(g)\mathcal{F}_{\psi_{F}}(\Phi_{\circ})(e_{1}{}^{t}g^{-1})\,|\det g|^{s-1}\,dg.\]

We iterate the process for \(I(1-s,\check{W}_{\rho}^{\circ},\mathcal{F}_{\psi_{F}}(\Phi_{\circ}))\) in order to produce

\[I(1-s,\check{W}_{\rho}^{\circ},\mathcal{F}_{\psi_{F}}(\Phi_{\circ}))=\operatorname{Vol}(N_{n}(\mathfrak{o})(1_{n}+\mathcal{M}_{n}(\mathfrak{p})))\\ \times(\check{I}(W_{\pi},\phi)+q^{-n(1-s)}\omega_{\rho}^{-1}(\varpi)\mathcal{F}_{\psi}(\phi)(0)L(n(1-s),\omega_{\rho}^{-1}\upharpoonright_{F^{\times}})I(W_{\pi},\mathbf{1}_{\mathbb{F}^{n}})).\]

It remains to use the functional equation (3.3) and to cross out the common volume term. 

Let \(G=\operatorname{GL}_{n}(E)\) and \(L=\operatorname{GL}_{n}(F)\) in (2.5). Analogously to the finite field case, we say that a representation \(\rho\) of \(\operatorname{GL}_{n}(E)\) admits a _Flicker-Rallis period_ if \(\rho\) is \(\operatorname{GL}_{n}(F)\)-distinguished. According to [30, Theorem 6.4], which is based on [20, Theorem 1.1], a level zero supercuspidal representation \(\rho\) does not have a Flicker-Rallis period as long as \(n=2m\) is even. The keen reader may notice that this property is completely analogous to the finite field case, and it is not so surprising to see in Theorem 6.1 that both conditions are equivalent.

**Theorem 3.10**.: _Let \(\rho\) be a level zero supercuspidal representation of \(\operatorname{GL}_{n}(E)\)._

1. _If_ \(\pi\) _does not admit a Flicker-Rallis vector, then we have_
\[\Gamma(s,\rho,\mathrm{As},\psi_{F})=\gamma(\pi,\mathrm{As},\psi).\]
2.
_If_ \(n=2m+1\) _and_ \(\pi\) _admits a Flicker-Rallis vector, then we have_

\[\Gamma(s,\rho,\mathrm{As},\psi_{F})=q^{n\left(s-\frac{1}{2}\right)}\omega_{\rho}^{-1}(\varpi)\frac{L(n(1-s),\omega_{\rho}^{-1}\upharpoonright_{F^{\times}})}{L(ns,\omega_{\rho}\upharpoonright_{F^{\times}})}.\]

Proof.: We begin with the case when \(\pi\) does not admit a Flicker-Rallis vector. We apply Lemma 3.8 to see that \(I(W_{\pi},\mathbf{1}_{\mathbb{F}^{n}})=0\) for any \(W_{\pi}\in\mathcal{W}(\pi,\psi_{\mathbb{E}/\mathbb{F}})\). Therefore, Proposition 3.9 boils down to the equality \(\check{I}(W_{\pi},\phi)=\Gamma(s,\rho,\mathrm{As},\psi_{F})I(W_{\pi},\phi)\) for any \(W_{\pi}\in\mathcal{W}(\pi,\psi_{\mathbb{E}/\mathbb{F}})\) and \(\phi\in\mathcal{S}_{0}(\mathbb{F}^{n})\). This relation tells us that \(\Gamma(s,\rho,\mathrm{As},\psi_{F})\) is exactly \(\gamma(\pi,\mathrm{As},\psi)\).

In what follows, we focus on the case when \(n=2m+1\) and \(\pi\) admits a Flicker-Rallis vector. We take \(\phi\) to be the indicator function \(\mathbf{1}_{\mathbb{F}^{n}}\) of \(\mathbb{F}^{n}\) (that is, the constant function \(1\)). The relation \(\mathcal{F}_{\psi}(\mathbf{1}_{\mathbb{F}^{n}})=q^{\frac{n}{2}}\delta_{0}\) implies that \(\check{I}(W_{\pi},\mathbf{1}_{\mathbb{F}^{n}})=0\). In addition, Lemma 3.8 allows us to choose \(W_{\pi}\in\mathcal{W}(\pi,\psi_{\mathbb{E}/\mathbb{F}})\) such that \(I(W_{\pi},\mathbf{1}_{\mathbb{F}^{n}})=1\), and consequently we obtain from Proposition 3.9 that

\[q^{-n(1-s)+\frac{n}{2}}\omega_{\rho}^{-1}(\varpi)L(n(1-s),\omega_{\rho}^{-1}\upharpoonright_{F^{\times}})=\Gamma(s,\rho,\mathrm{As},\psi_{F})(1+q^{-ns}\omega_{\rho}(\varpi)\phi(0)L(ns,\omega_{\rho}\upharpoonright_{F^{\times}}))\\ =\Gamma(s,\rho,\mathrm{As},\psi_{F})L(ns,\omega_{\rho}\upharpoonright_{F^{\times}}).\]

We are left with solving this for \(\Gamma(s,\rho,\mathrm{As},\psi_{F})\). 

When \(E=F\times F\) and \(\rho\cong\rho_{1}\times\rho_{2}\) is a representation of \(\mathrm{GL}_{n}(F)\times\mathrm{GL}_{n}(F)\), Theorem 3.10 coincides with Theorem 2.6.

### The Asai epsilon factor and the Gauss sum

Let \(V\) be an \(n\)-dimensional vector space over \(\mathbb{E}\). We consider the semi-direct product

\[(\mathrm{GL}_{n}(\mathbb{C})\times\mathrm{GL}_{n}(\mathbb{C}))\rtimes\mathrm{Gal}(\mathbb{E}/\mathbb{F}),\]

where the non-trivial Galois element \(c\) in \(\mathrm{Gal}(\mathbb{E}/\mathbb{F})\) acts on \(\mathrm{GL}_{n}(\mathbb{C})\times\mathrm{GL}_{n}(\mathbb{C})\) by \((g_{1},g_{2})\rtimes c:=(g_{2},g_{1})\). This is the Langlands dual group of \(\operatorname{Res}_{\mathbb{E}/\mathbb{F}}\operatorname{GL}(V)\). Let \(s\) be an element of \(W_{F}\) which generates the quotient group \(W_{F}/W_{E}\cong\mathrm{Gal}(E/F)\). Let \(\varphi:W_{E}\to\mathrm{GL}_{n}(\mathbb{C})\) be an \(n\)-dimensional representation of the Weil group \(W_{E}\).
We obtain the _Asai representation_, "\(\operatorname{As}\)",

\[\operatorname{As}(\varphi):W_{F}\to(\operatorname{GL}_{n}(\mathbb{C})\times\operatorname{GL}_{n}(\mathbb{C}))\rtimes\operatorname{Gal}(\mathbb{E}/\mathbb{F})\]

by setting

\[\operatorname{As}(\varphi)(\tau)=(\varphi(\tau),\varphi(s\tau s^{-1}))\in\operatorname{GL}_{n}(\mathbb{C})\times\operatorname{GL}_{n}(\mathbb{C})\]

for \(\tau\in W_{E}\), and

\[\operatorname{As}(\varphi)(s)=(1_{n},\varphi(s^{2}))\rtimes c\in(\operatorname{GL}_{n}(\mathbb{C})\times\operatorname{GL}_{n}(\mathbb{C}))\rtimes\operatorname{Gal}(\mathbb{E}/\mathbb{F})\setminus(\operatorname{GL}_{n}(\mathbb{C})\times\operatorname{GL}_{n}(\mathbb{C})).\]

For \(v_{1},v_{2}\in\mathbb{C}^{n}\), we have \(\operatorname{As}(\varphi)(\tau)(v_{1}\otimes v_{2})=\varphi(\tau)v_{1}\otimes\varphi(s\tau s^{-1})v_{2}\) and \(\operatorname{As}(\varphi)(s)(v_{1}\otimes v_{2})=\varphi(s^{2})v_{2}\otimes v_{1}\). For a real number \(x\), let \(\lfloor x\rfloor\) be the greatest integer less than or equal to \(x\).

**Theorem 3.11** (E. Zelingher).: _Let \(\varphi\) be an \(n\)-dimensional tamely ramified representation of \(W_{E}\). Let \(\alpha\) be a regular character of \(\mathbb{F}_{q^{2n}}^{\times}\) corresponding to \(\varphi\) via Green's parametrization and \(m=\lfloor\frac{n-1}{2}\rfloor\). Then we have_

\[\varepsilon_{0}(\operatorname{As}(\varphi),\psi_{F})=(-1)^{n}q^{-\frac{n^{2}}{2}}\tau(\alpha^{1+q^{2m+1}},\psi_{d})\prod_{i=0}^{m-1}\tau(\alpha^{1+q^{2i+1}},\psi_{2n}),\]

_where \(d=n\) if \(n\) is odd, and \(d=2n\) if \(n\) is even._

Proof.: It is worthwhile to point out that local class field theory gives

\[I_{E}/P_{E}\cong\varprojlim\mathbb{F}_{q^{n}}^{\times},\]

where the transition maps are given by the norm maps \((\operatorname{Nr}_{n:d})_{d\,|\,n}\) as seen before. Henceforth we may consider \(\alpha\) as a character of \(I_{E}/P_{E}\). With respect to a suitably chosen basis (cf. [51, Theorem 2.4]), we have

\[\varphi\upharpoonright_{I_{E}}(i_{E})=\operatorname{diag}(\alpha(i_{E}),\alpha^{q^{2}}(i_{E}),\cdots,\alpha^{q^{2n-2}}(i_{E}))\in\operatorname{GL}_{n}(\mathbb{C})\]

for \(i_{E}\in I_{E}\), which implies that

\[\operatorname{As}(\varphi)\upharpoonright_{I_{F}}(i_{E})=\left(\begin{pmatrix}\alpha(i_{E})&&&\\ &\alpha^{q^{2}}(i_{E})&&\\ &&\ddots&\\ &&&\alpha^{q^{2n-2}}(i_{E})\end{pmatrix},\begin{pmatrix}\alpha^{q}(i_{E})&&&\\ &\alpha^{q^{3}}(i_{E})&&\\ &&\ddots&\\ &&&\alpha^{q^{2n-1}}(i_{E})\end{pmatrix}\right).\]

This element belongs to \(\operatorname{GL}_{n}(\mathbb{C})\times\operatorname{GL}_{n}(\mathbb{C})\), because \(i_{E}\in I_{E}\) (so there is no Frobenius). The reason why the second matrix appears is that \(\operatorname{Fr}_{F}\cdot x\cdot\operatorname{Fr}_{F}^{-1}\equiv x^{q}\pmod{P_{F}}\) for \(x\in I_{F}\). This means that \(\alpha(\operatorname{Fr}_{F}\cdot i_{E}\cdot\operatorname{Fr}_{F}^{-1})=\alpha^{q}(i_{E})\). We index the standard basis of \(\mathbb{C}^{n}\) by \(e_{i}\), where \(i=0,1,\cdots,n-1\). Putting this all together, we obtain

\[\operatorname{As}(\varphi)\upharpoonright_{I_{F}}(i_{E})(e_{i}\otimes e_{j})=\alpha^{q^{2i}+q^{2j+1}}(i_{E})(e_{i}\otimes e_{j}).\]

The matrix representing \(\operatorname{As}(\varphi)\upharpoonright_{I_{F}}\) is indexed by \((i,j)\), where \(0\leq i,j\leq n-1\). Furthermore, the eigenvalue \(\alpha^{q^{2i}+q^{2j+1}}\) lies in the Galois orbit of \(\alpha^{1+q^{2(j-i)+1}}\), as \((\alpha^{1+q^{2(j-i)+1}})^{q^{2i}}=\alpha^{q^{2i}+q^{2j+1}}\).
Therefore, the Galois orbits indexed by integers \(0\leq d\leq\lfloor\frac{n-1}{2}\rfloor\) are given by

\[\mathcal{O}(\alpha^{1+q^{2d+1}})=\{(\alpha^{1+q^{2d+1}})^{q^{k}}\,|\,0\leq k\leq 2n-1\},\]

whereas \(\mathcal{O}(\alpha^{1+q^{2m+1}})\) looks a bit different for \(n=2m+1\) odd:

\[\mathcal{O}(\alpha^{1+q^{2m+1}})=\{(\alpha^{1+q^{2m+1}})^{q^{k}}\,|\,0\leq k\leq n-1\}.\]

We emphasize that the \(\mathcal{O}(\alpha^{1+q^{2d+1}})\)'s should be thought of as (multi-)sets, possibly with duplicated elements, so they are not exactly Galois orbits; each \(\mathcal{O}(\alpha^{1+q^{2d+1}})\) consists of multiple copies of the same Galois orbit. Then [51, Theorem 2.4] gives us the desired formula for the \(\varepsilon_{0}\)-factor. 

Let \(\lambda_{E/F}(\psi_{F})\) be the _Langlands constant_ defined in [9, (30.4)]. Appealing to [9, Proposition 34.3-(1)], the Langlands constant \(\lambda_{E/F}(\psi_{F})\) is given by \(\lambda_{E/F}(\psi_{F})=-1\). We apologize for the double usage of "\(\lambda\)", but we hope that the reader can separate the meaning from the context.

**Proposition 3.12**.: _Let \(\pi(\varphi)\) be an irreducible cuspidal representation of \(\operatorname{GL}_{n}(\mathbb{E})\) associated to an \(n\)-dimensional tamely ramified representation \(\varphi\) of \(W_{E}\) via the Macdonald correspondence. Then we have_

\[\gamma(\pi(\varphi),\operatorname{As},\psi)=\omega_{\pi}^{n-1}(\Delta)\lambda_{E/F}(\psi_{F})^{-\frac{n(n-1)}{2}}\varepsilon_{0}(\operatorname{As}(\varphi),\psi_{F}).\]

Proof.: We divide it into two cases. We first assume that \(\pi\) does not admit a Flicker-Rallis vector. We use Theorem 3.10 in conjunction with [2, Theorem 1.3] and [51, Corollary 2.7] in order to see that

\[\gamma(\pi(\varphi),\operatorname{As},\psi)=\Gamma(s,\rho(\varphi),\operatorname{As},\psi_{F})\\ =\omega_{\rho}^{n-1}(\delta)\lambda_{E/F}(\psi_{F})^{-\frac{n(n-1)}{2}}\varepsilon(s,\operatorname{As}(\varphi),\psi_{F})=\omega_{\pi}^{n-1}(\Delta)\lambda_{E/F}(\psi_{F})^{-\frac{n(n-1)}{2}}\varepsilon_{0}(\operatorname{As}(\varphi),\psi_{F}).\]

It remains to deal with the case when \(n=2m+1\) and \(\pi\) admits a Flicker-Rallis vector. Since \(\Delta\) is an element of trace zero, \(\Delta^{2}=-\Delta\overline{\Delta}\) belongs to \(\mathbb{F}^{\times}\). The central character restricted to \(\mathbb{F}^{\times}\), \(\omega_{\pi}\upharpoonright_{\mathbb{F}^{\times}}=\alpha\upharpoonright_{\mathbb{F}^{\times}}\), becomes trivial, so that \(\alpha^{1+q^{2m+1}}=\mathbf{1}\). Using \(\omega_{\pi}^{n-1}(\Delta)=\omega_{\pi}^{m}(\Delta^{2})=1\), this reduces the problem to confirming that

\[\gamma(\pi(\varphi),\operatorname{As},\psi)=(-1)^{m}\varepsilon_{0}(\operatorname{As}(\varphi),\psi_{F}).\]

Now, \(\left(\alpha^{1+q^{2i+1}}\right)^{1+q^{2m+1}}=\mathbf{1}\) and \(\alpha^{1+q^{2i+1}}\) is not trivial for \(0\leq i\leq m-1\). [51, Proposition 2.6] gives us that \(\tau(\alpha^{1+q^{2i+1}},\psi_{2(2m+1)})=-q^{2m+1}\alpha(x^{1+q^{2i+1}})\) for some \(x\in\mathbb{F}^{\times}\). Thus \(x^{1+q^{2i+1}}\in\mathbb{F}^{\times}\), so \(\alpha(x^{1+q^{2i+1}})=1\).
Collecting all these together, and then using the fact that \(\tau(\mathbf{1},\psi_{2m+1})=1\), Theorem 3.11 tells us that

\[\varepsilon_{0}(\operatorname{As}(\varphi),\psi_{F})=(-1)^{2m+1}q^{-\frac{(2m+1)^{2}}{2}}\tau(\alpha^{1+q^{2m+1}},\psi_{2m+1})\prod_{i=0}^{m-1}\tau(\alpha^{1+q^{2i+1}},\psi_{2(2m+1)})\]
\[=(-1)^{2m+1}q^{-\frac{(2m+1)^{2}}{2}}(-q^{2m+1})^{m}\tau(\mathbf{1},\psi_{2m+1})=(-1)^{m-1}q^{-\frac{2m+1}{2}},\]

which agrees with \((-1)^{m}\gamma(\pi(\varphi),\operatorname{As},\psi)\) in Proposition 3.6, as required. 

We are now in a position to state the main product formula for \(\gamma(\pi,\operatorname{As},\psi)\) in terms of Gauss sums.

**Theorem 3.13** (Gauss sum).: _Let \(\pi\) be an irreducible cuspidal representation of \(\operatorname{GL}_{n}(\mathbb{E})\). Let \(\alpha\in\widehat{\mathbb{F}}_{q^{2n}}^{\times}\) be a regular character corresponding to \(\pi\) via Green's parametrization and \(m=\lfloor\frac{n-1}{2}\rfloor\). Then we have_

\[\gamma(\pi,\operatorname{As},\psi)=\omega_{\pi}^{n-1}(\Delta)\cdot(-1)^{\frac{n(n+1)}{2}}q^{-\frac{n^{2}}{2}}\tau(\alpha^{1+q^{2m+1}},\psi_{d})\prod_{i=0}^{m-1}\tau(\alpha^{1+q^{2i+1}},\psi_{2n}),\]

_where \(d=n\) if \(n\) is odd, and \(d=2n\) if \(n\) is even._

## 4. The Exterior Square Gamma Factor

### Jacquet-Shalika sums and periods

We let \(\mathcal{M}_{n}\) denote the space of \(n\times n\) matrices and \(\mathcal{N}_{n}\) the subspace of upper triangular matrices of \(\mathcal{M}_{n}\). We let \(\sigma_{2m}\) be the permutation matrix given by

\[\sigma_{2m}=\begin{pmatrix}1&2&\cdots&m&|&m+1&m+2&\cdots&2m\\ 1&3&\cdots&2m-1&|&2&4&\cdots&2m\end{pmatrix}\]

and let \(\sigma_{2m+1}\) denote

\[\sigma_{2m+1}=\begin{pmatrix}1&2&\cdots&m&|&m+1&m+2&\cdots&2m&2m+1\\ 1&3&\cdots&2m-1&|&2&4&\cdots&2m&2m+1\end{pmatrix}.\]

Let \(\pi\) be an irreducible cuspidal representation of \(\operatorname{GL}_{n}(\mathbb{F})\) with its associated Whittaker model \(\mathcal{W}(\pi,\psi)\).
Let \(\pi\) be an irreducible cuspidal representation of \(\operatorname{GL}_{n}(\mathbb{F})\) with its associated Whittaker model \(\mathcal{W}(\pi,\psi)\). For all \(W_{\pi}\in\mathcal{W}(\pi,\psi)\) and \(\phi\in\mathcal{S}_{0}(\mathbb{F}^{m})\), there exists a complex number \(\gamma(\pi,\wedge^{2},\psi)\in\mathbb{C}^{\times}\) such that

\[\gamma(\pi,\wedge^{2},\psi)\sum_{g\in N_{m}(\mathbb{F})\setminus \operatorname{GL}_{m}(\mathbb{F})}\sum_{X\in\mathcal{N}_{m}(\mathbb{F}) \setminus\mathcal{M}_{m}(\mathbb{F})}W_{\pi}\left(\sigma_{2m}\begin{pmatrix} 1_{m}&X\\ &1_{m}\end{pmatrix}\begin{pmatrix}g&\\ &g\end{pmatrix}\right)\psi^{-1}(\operatorname{Tr}X)\phi(e_{m}g)\\ =\sum_{g\in N_{m}(\mathbb{F})\setminus\operatorname{GL}_{m}( \mathbb{F})}\sum_{X\in\mathcal{N}_{m}(\mathbb{F})\setminus\mathcal{M}_{m}( \mathbb{F})}W_{\pi}\left(\sigma_{2m}\begin{pmatrix}1_{m}&X\\ &1_{m}\end{pmatrix}\begin{pmatrix}g&\\ &g\end{pmatrix}\right)\psi^{-1}(\operatorname{Tr}X)\mathcal{F}_{\psi}(\phi)(e_ {1}\,^{t}g^{-1}) \tag{4.1}\]

in the even case \(n=2m\), and

\[\gamma(\pi,\wedge^{2},\psi)\sum_{g}\sum_{X}\sum_{Z}W_{\pi}\left( \sigma_{2m+1}\begin{pmatrix}1_{m}&X&\\ &1_{m}&\\ &&1\end{pmatrix}\begin{pmatrix}g&&\\ &g&\\ &&1\end{pmatrix}\begin{pmatrix}1_{m}&&\\ &1_{m}&\\ &Z&1\end{pmatrix}\right)\psi^{-1}(\operatorname{Tr}X)\phi(Z)\\ =\sum_{g}\sum_{X}\sum_{Z}W_{\pi}\left(\begin{pmatrix}&1\\ 1_{2m}&\end{pmatrix}\sigma_{2m+1}\begin{pmatrix}1_{m}&X&\\ &1_{m}&\\ &&1\end{pmatrix}\begin{pmatrix}g&&\\ &g&\\ &&1\end{pmatrix}\begin{pmatrix}1_{m}&&-{}^{t}Z\\ &1_{m}&\\ &&1\end{pmatrix}\right)\] \[\psi^{-1}(\operatorname{Tr}X)\mathcal{F}_{\psi}(\phi)(Z) \tag{4.2}\]

in the odd case \(n=2m+1\), where the summation domains of \(g\), \(X\), and \(Z\) are taken over \(N_{m}(\mathbb{F})\backslash\operatorname{GL}_{m}(\mathbb{F})\), \(\mathcal{N}_{m}(\mathbb{F})\backslash\mathcal{M}_{m}(\mathbb{F})\), and \(\mathbb{F}^{m}\), respectively. We express \(\gamma(\pi,\wedge^{2},\psi)\) in terms of the Bessel functions associated with \(\pi\).

**Proposition 4.1**.: _[50, Remark 2.20] Let \(\pi\) be an irreducible cuspidal representation of \(\operatorname{GL}_{2m}(\mathbb{F})\). Then we have_

\[\gamma(\pi,\wedge^{2},\psi)=q^{-\frac{m}{2}}\sum_{g\in N_{m}( \mathbb{F})\backslash\operatorname{GL}_{m}(\mathbb{F})}\sum_{X\in\mathcal{N} _{m}(\mathbb{F})\backslash\mathcal{M}_{m}(\mathbb{F})}\mathcal{B}_{\pi,\psi} \left(\sigma_{2m}\begin{pmatrix}1_{m}&X\\ &1_{m}\end{pmatrix}\begin{pmatrix}g&\\ &g\end{pmatrix}\sigma_{2m}^{-1}\right)\\ \times\psi^{-1}(\operatorname{Tr}X)\psi({e_{1}}^{t}g^{-1}\,{e_{m} }).\]

_In particular, we have \(\gamma(\check{\pi},\wedge^{2},\psi^{-1})=\overline{\gamma(\pi,\wedge^{2},\psi)}\)._

We define a Shalika subgroup \(S_{2m}\) of \(\operatorname{GL}_{2m}\) by

\[S_{2m}=\left\{\begin{pmatrix}1_{m}&X\\ &1_{m}\end{pmatrix}\begin{pmatrix}g&\\ &g\end{pmatrix}\bigg{|}\,X\in\mathcal{M}_{m},\;g\in\operatorname{GL}_{m}\right\}.\]

Let \(\Theta\) be a Shalika character of \(S_{2m}\) given by

\[\Theta\left(\begin{pmatrix}1_{m}&X\\ &1_{m}\end{pmatrix}\begin{pmatrix}g&\\ &g\end{pmatrix}\right)=\psi(\operatorname{Tr}X).\]

A non-zero vector \(v\in V_{\pi}\) is called a _Jacquet-Shalika vector_ if \(\pi(s)v=\Theta(s)v\) for every \(s\in S_{2m}(\mathbb{F})\). Over a nonarchimedean local field \(F\), we let \(G=\operatorname{GL}_{2m}(F)\) and \(H=S_{2m}(F)\) in (2.5). We say that a representation \(\rho\) of \(G\) admits a _Jacquet-Shalika period_ if \(\rho\) is \((S_{2m}(F),\Theta)\)-distinguished.
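For instance, in the base case \(m=1\) (a small worked example of ours), one computes
\[\begin{pmatrix}1&X\\ &1\end{pmatrix}\begin{pmatrix}g&\\ &g\end{pmatrix}=\begin{pmatrix}g&Xg\\ &g\end{pmatrix},\]
so \(S_{2}\) consists of the matrices \(\begin{pmatrix}a&b\\ &a\end{pmatrix}\) with \(a\) invertible, and the Shalika character reads \(\Theta\left(\begin{pmatrix}a&b\\ &a\end{pmatrix}\right)=\psi(b/a)\).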
**Proposition 4.2**.: _Let \(\pi\) be an irreducible cuspidal representation of \(\operatorname{GL}_{n}(\mathbb{F})\). Suppose that \(n=2m\) and \(\pi\) admits a Jacquet-Shalika vector. Then we have_

\[\gamma(\pi,\wedge^{2},\psi)=\gamma(\check{\pi},\wedge^{2},\psi^{-1})=-q^{- \frac{m}{2}}.\]

Proof.: We embed the Shalika subgroup \(S_{2m}(\mathbb{F})\) into \(G=\operatorname{GL}_{2m}(\mathbb{F})\) via the conjugation by \(\sigma_{2m}\). We define a linear functional \(\Lambda\in\operatorname{Hom}_{N_{m}(\mathbb{F})}(\pi\upharpoonright_{N_{m}( \mathbb{F})},\mathbf{1}_{N_{m}(\mathbb{F})})\) on \(\mathcal{W}(\pi,\psi)\) by

\[\Lambda(W_{\pi})=\frac{1}{|\mathcal{N}_{m}(\mathbb{F})|}W_{\pi}(1_{2m}).\]

We choose \(W_{\pi}\) to be \(\mathcal{B}_{\pi,\psi}\). Upon using Lemma 3.5 with \(G=\operatorname{GL}_{2m}(\mathbb{F})\), \(L=S_{2m}(\mathbb{F})\) a Shalika subgroup, and \(\Xi=\Theta\) a Shalika character, we find from [38, Lemma 2.7] that

\[\sum_{p\in N_{m}(\mathbb{F})\backslash P_{m}(\mathbb{F})}\sum_{X \in\mathcal{M}_{m}(\mathbb{F})}\Lambda\left(\pi\left(\sigma_{2m}\begin{pmatrix} p&\\ &p\end{pmatrix}\begin{pmatrix}1_{m}&X\\ &1_{m}\end{pmatrix}\sigma_{2m}^{-1}\right)\mathcal{B}_{\pi,\psi}\right)\psi^{- 1}(\operatorname{Tr}X)\\ =\sum_{p\in N_{m}(\mathbb{F})\backslash P_{m}(\mathbb{F})}\sum_{X \in\mathcal{N}_{m}(\mathbb{F})\backslash\mathcal{M}_{m}(\mathbb{F})} \mathcal{B}_{\pi,\psi}\left(\sigma_{2m}\begin{pmatrix}p&\\ &p\end{pmatrix}\begin{pmatrix}1_{m}&X\\ &1_{m}\end{pmatrix}\sigma_{2m}^{-1}\right)\psi^{-1}(\operatorname{Tr}X)=1,\]

which gives rise to

\[\sum_{g\in N_{m}(\mathbb{F})\backslash\operatorname{GL}_{m}( \mathbb{F})}\sum_{X\in\mathcal{M}_{m}(\mathbb{F})}\Lambda\left(\pi\left(\sigma_ {2m}\begin{pmatrix}g&\\ &g\end{pmatrix}\begin{pmatrix}1_{m}&X\\ &1_{m}\end{pmatrix}\sigma_{2m}^{-1}\right)\mathcal{B}_{\pi,\psi}\right)\psi^{- 1}(\operatorname{Tr}X)\psi^{-1}(e_{m}g^{\;t}e_{1})=-1.\]

After multiplying both sides by \(q^{-\frac{m}{2}}\), and then making the change of variables \(g\mapsto g^{-1}\) and \(X\mapsto-X\), we arrive at the identity

\[\gamma(\check{\pi},\wedge^{2},\psi^{-1})=q^{-\frac{m}{2}}\sum_{g \in N_{m}(\mathbb{F})\backslash\operatorname{GL}_{m}(\mathbb{F})}\sum_{X\in \mathcal{N}_{m}(\mathbb{F})\backslash\mathcal{M}_{m}(\mathbb{F})}\mathcal{B}_{ \check{\pi},\psi^{-1}}\left(\sigma_{2m}\begin{pmatrix}g&\\ &g\end{pmatrix}\begin{pmatrix}1_{m}&X\\ &1_{m}\end{pmatrix}\sigma_{2m}^{-1}\right)\\ \psi(\operatorname{Tr}X)\psi^{-1}({e_{1}}^{t}g^{-1}\,{e_{m}})=-q^{- \frac{m}{2}}.\]

All that remains is to take the complex conjugate. In this way, we conclude from Proposition 4.1 that \(\gamma(\pi,\wedge^{2},\psi)=\overline{\gamma(\check{\pi},\wedge^{2},\psi^{-1})}=-q ^{-\frac{m}{2}}\), as desired.

We now spell out functional equations for \(\gamma(\pi,\wedge^{2},\psi)\) of an irreducible cuspidal representation \(\pi\) of \(\operatorname{GL}_{n}(\mathbb{F})\).

**Theorem 4.3** (Functional equation).: _Let \(\pi\) be an irreducible cuspidal representation of \(\operatorname{GL}_{n}(\mathbb{F})\)._

1. _If_ \(\pi\) _does not admit a Jacquet-Shalika vector, then we have_ \[\gamma(\pi,\wedge^{2},\psi)\gamma(\check{\pi},\wedge^{2},\psi^{-1})=1\quad \text{and}\quad\big{|}\gamma(\pi,\wedge^{2},\psi)\big{|}=1.\]
2. _If_ \(n=2m\) _and_ \(\pi\) _admits a Jacquet-Shalika vector, then we have_ \[\gamma(\pi,\wedge^{2},\psi)\gamma(\check{\pi},\wedge^{2},\psi^{-1})=q^{-m} \quad\text{and}\quad\big{|}\gamma(\pi,\wedge^{2},\psi)\big{|}=q^{-\frac{m}{2}}.\]

Proof.: Part (1) has been done in [50, Remark 2.20], and Part (2) is straightforward from Proposition 4.2.
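As a consistency check (our remark, not part of the original argument), Part (2) agrees with Proposition 4.2: from \(\gamma(\pi,\wedge^{2},\psi)=\gamma(\check{\pi},\wedge^{2},\psi^{-1})=-q^{-\frac{m}{2}}\) we indeed find
\[\gamma(\pi,\wedge^{2},\psi)\gamma(\check{\pi},\wedge^{2},\psi^{-1})=\left(-q^{-\frac{m}{2}}\right)^{2}=q^{-m}\quad\text{and}\quad\big{|}\gamma(\pi,\wedge^{2},\psi)\big{|}=q^{-\frac{m}{2}}.\]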
### Jacquet-Shalika integrals and close field theory

Let \(\rho\) be a level zero supercuspidal representation of \(\operatorname{GL}_{n}(F)\) with its attached Whittaker model \(\mathcal{W}(\rho,\psi_{F})\). Let \(\psi_{F}^{\flat}\) be a non-trivial additive character of \(F\) of level zero, that is, trivial on \(\mathfrak{o}\) but not on \(\mathfrak{p}^{-1}\). For each \(W_{\rho}\in\mathcal{W}(\rho,\psi_{F})\) and \(\Phi\in\mathcal{S}(F^{m})\), we define Jacquet-Shalika integrals \(J(s,W_{\rho},\Phi)\) by

\[\int_{N_{m}(F)\backslash\operatorname{GL}_{m}(F)}\int_{\mathcal{N }_{m}(F)\backslash\mathcal{M}_{m}(F)}\int_{F^{m}}W_{\rho}\left(\sigma_{2m+1} \begin{pmatrix}1_{m}&X&\\ &1_{m}&\\ &&1\end{pmatrix}\begin{pmatrix}g&&\\ &g&\\ &&1\end{pmatrix}\begin{pmatrix}1_{m}&&\\ &1_{m}&\\ &z&1\end{pmatrix}\right)\] \[\psi_{F}^{-1}(\operatorname{Tr}X)\Phi(z)|\det g|^{s-1}\,dz\,dX\,dg\]

in the odd case \(n=2m+1\), and

\[\int_{N_{m}(F)\backslash\operatorname{GL}_{m}(F)}\int_{\mathcal{N }_{m}(F)\backslash\mathcal{M}_{m}(F)}W_{\rho}\left(\sigma_{2m}\begin{pmatrix} 1_{m}&X\\ &1_{m}\end{pmatrix}\begin{pmatrix}g&\\ &g\end{pmatrix}\right)\psi_{F}^{-1}(\operatorname{Tr}X)\Phi(e_{m}g)|\det g|^{ s}\,dX\,dg\]

in the even case \(n=2m\). These integrals converge absolutely for \(\operatorname{Re}(s)\) sufficiently large, and each defines a rational function in \(\mathbb{C}(q^{-s})\). The exterior square gamma factor \(\Gamma(s,\rho,\wedge^{2},\psi_{F})\) is defined as a proportionality factor: it is the rational function in \(\mathbb{C}(q^{-s})\) satisfying

\[J(1-s,\check{\rho}(\tau_{n})\check{W}_{\rho},\mathcal{F}_{\psi_{F}}(\Phi))= \Gamma(s,\rho,\wedge^{2},\psi_{F})J(s,W_{\rho},\Phi), \tag{4.3}\]

where \(\tau_{n}\) is the matrix \(\begin{pmatrix}&1_{m}\\ 1_{m}&\end{pmatrix}\) if \(n=2m\), and the matrix \(\begin{pmatrix}&1_{m}&\\ 1_{m}&&\\ &&1\end{pmatrix}\) if \(n=2m+1\). The _local exterior square \(L\)-function_ \(L(s,\rho,\wedge^{2})\) is the generator of the \(\mathbb{C}[q^{\pm s}]\)-fractional ideal of Jacquet-Shalika integrals \(J(s,W_{\rho},\Phi)\) with \(W_{\rho}\in\mathcal{W}(\rho,\psi_{F})\) and \(\Phi\in\mathcal{S}(F^{m})\), normalized to be of the form \(P(q^{-s})\) for some \(P(X)\in\mathbb{C}[X]\) satisfying \(P(0)=1\).

A principal series representation of the form \(\Sigma=\operatorname{Ind}_{B_{n}(F)}^{\operatorname{GL}_{n}(F)}(\mu_{1} \otimes\cdots\otimes\mu_{n})\) is said to be _spherical_ if it has a \(K_{n}\)-fixed vector. It is worthwhile to mention that \(\Sigma\) is a full induced representation from the Borel subgroup \(B_{n}(F)\) of unramified characters \(\mu_{i}\) of \(F^{\times}\). Here unramified means that each \(\mu_{i}\) is trivial on the maximal compact subgroup \(\mathfrak{o}^{\times}\) of \(F^{\times}\). Such a spherical representation must have a one-dimensional space of Whittaker functionals \(\Lambda\in\operatorname{Hom}_{N_{n}(F)}(\Sigma\upharpoonright_{N_{n}(F)},\psi_ {F}^{\flat})\). The map \(v\mapsto\Lambda(\Sigma(\cdot)\cdot v)\) a priori need not be injective, so the Whittaker model \(\mathcal{W}(\Sigma,\psi_{F}^{\flat})\), consisting of Whittaker functions on \(\operatorname{GL}_{n}(F)\) of the form \(W_{\Sigma}(g):=\Lambda(\Sigma(g)\cdot v)\), may only be a model of a quotient of \(\Sigma\). However, if \(\rho\) is an irreducible generic representation of \(\operatorname{GL}_{n}(F)\), then \(\rho\) is isomorphic to its unique Whittaker model \(\mathcal{W}(\rho,\psi_{F})\), which is the image of \(V_{\rho}\) under the map \(v\mapsto\Lambda(\rho(\cdot)\cdot v)\).
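Before proceeding, we record the lowest-rank instances of the matrices \(\tau_{n}\) introduced above (an illustration of ours):
\[\tau_{2}=\begin{pmatrix}&1\\ 1&\end{pmatrix},\qquad\tau_{3}=\begin{pmatrix}&1&\\ 1&&\\ &&1\end{pmatrix}.\]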
According to [27, Theorem 2.2], the local functional equation (4.3) can be extended to \(\rho\) and \(\Sigma\), which is sufficient for the applications therein.

**Lemma 4.4**.: _Let \(F\) be a local function field. Let \(\rho\) be an irreducible generic subquotient of a spherical representation \(\operatorname{Ind}_{B_{n}(F)}^{\operatorname{GL}_{n}(F)}(\mu_{1}\otimes\cdots \otimes\mu_{n})\). Then we have_

\[\Gamma(s,\rho,\wedge^{2},\psi_{F}^{\flat})=\prod_{1\leq j<k\leq n}\Gamma(s,\mu_ {j}\times\mu_{k},\psi_{F}^{\flat}).\]

Proof.: Let us set \(\Sigma=\operatorname{Ind}_{B_{n}(F)}^{\operatorname{GL}_{n}(F)}(\mu_{1} \otimes\cdots\otimes\mu_{n})\). Let \(V_{\rho}\) and \(V_{\Sigma}\) denote the underlying spaces of \(\rho\) and \(\Sigma\), respectively. By the uniqueness of Whittaker functionals, a non-zero Whittaker functional on \(V_{\rho}\) induces a non-zero Whittaker functional on \(V_{\Sigma}\); since \(\Sigma\) possesses a unique Whittaker functional up to scalars, the induced functional must agree with it, and we conclude that \(\Gamma(s,\rho,\wedge^{2},\psi_{F}^{\flat})=\Gamma(s,\Sigma,\wedge^{2},\psi_ {F}^{\flat})\). For such a spherical representation \(\Sigma\), the subspace of spherical vectors must be one-dimensional, and we normalize the spherical Whittaker function \(W_{\Sigma}^{\flat}\) in the Whittaker model \(\mathcal{W}(\Sigma,\psi_{F}^{\flat})\) so that \(W_{\Sigma}^{\flat}(1_{n})=1\). Upon taking \(W_{\Sigma}^{\flat}\in\mathcal{W}(\Sigma,\psi_{F}^{\flat})\) and \(\Phi^{\flat}\in\mathcal{S}(F^{m})\) to be the characteristic function of \(\mathfrak{o}^{m}\), and using [25, §2], we have the identity

\[\prod_{i=1}^{n}L(1-s,\mu_{i}^{-1},\wedge^{2})\prod_{1\leq j<k\leq n }L(1-s,\mu_{j}^{-1}\times\mu_{k}^{-1})=J(1-s,\check{\Sigma}(\tau_{n})\check{ W}_{\Sigma}^{\flat},\mathcal{F}_{\psi_{F}^{\flat}}(\Phi^{\flat}))\\ =\Gamma(s,\Sigma,\wedge^{2},\psi_{F}^{\flat})J(s,W_{\Sigma}^{ \flat},\Phi^{\flat})=\Gamma(s,\Sigma,\wedge^{2},\psi_{F}^{\flat})\prod_{i=1}^ {n}L(s,\mu_{i},\wedge^{2})\prod_{1\leq j<k\leq n}L(s,\mu_{j}\times\mu_{k})\]

from which the result we seek follows.
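For example (a quick sanity check of ours), when \(n=2\) the product on the right-hand side consists of the single factor, namely
\[\Gamma(s,\rho,\wedge^{2},\psi_{F}^{\flat})=\Gamma(s,\mu_{1}\times\mu_{2},\psi_{F}^{\flat}),\]
which matches the fact that the exterior square of the two-dimensional representation \(\mu_{1}\oplus\mu_{2}\) is the one-dimensional character \(\mu_{1}\mu_{2}\).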
We let \(k\) denote a global function field with field of constants \(\mathbb{F}_{q}\) and ring of adeles \(\mathbb{A}_{k}\). One of the most powerful tools for proving identities of local factors is the standard globalization due to Lomeli [32, Lemma 3.1 and Remark 3.2] (cf. [2, Lemma 9.28]).

**Theorem 4.5** (Lomeli).: _Let \(\rho\) be a level zero unitary supercuspidal representation of \(\operatorname{GL}_{n}(F)\) over a local function field \(F\). There is a global field \(k\) with a set of three places \(S=\{v_{0},v_{1},v_{\infty}\}\) such that \(k_{v_{0}}\cong F\). There exists an irreducible cuspidal automorphic representation \(\Pi=\otimes_{v}^{\prime}\Pi_{v}\) of \(\operatorname{GL}_{n}(\mathbb{A}_{k})\) satisfying the following properties:_

1. \(\Pi_{v_{0}}\cong\rho\)_;_
2. \(\Pi_{v}\) _is an irreducible unramified principal series representation at every_ \(v\notin S\)_;_
3. \(\Pi_{v_{1}}\) _and_ \(\Pi_{v_{\infty}}\) _are irreducible quotients of unramified principal series representations;_
4. _If_ \(\rho\) _is generic, then_ \(\Pi\) _is globally generic._

The aforementioned globalization is required to prove a purely local statement, namely, that the local exterior square \(\gamma\)-factor \(\Gamma(s,\rho,\wedge^{2},\psi_{F})\) defined via the Rankin-Selberg method of Jacquet and Shalika [25] agrees with the factor \(\Gamma_{\operatorname{LS}}(s,\rho,\wedge^{2},\psi_{F})\) defined via the Langlands-Shahidi method [21] in positive characteristic. We eventually generalize the equality to all characteristics, notably zero, in Theorem 4.9.

**Theorem 4.6**.: _Let \(\rho\) be a level zero supercuspidal representation of \(\operatorname{GL}_{n}(F)\) over a local function field \(F\). Then we have_

\[\Gamma(s,\rho,\wedge^{2},\psi_{F})=\Gamma_{\operatorname{LS}}(s,\rho,\wedge^{ 2},\psi_{F}).\]

Proof.: Twisting by an unramified character does not affect the conclusion, so there is no harm in assuming that \(\rho\) is unitary (cf. [31, §6.6-(vii)]). Applying Theorem 4.5 to the level zero supercuspidal representation, there exist a global field \(k\) with three places \(v_{0},v_{1}\), and \(v_{\infty}\) such that \(k_{v_{0}}\cong F\), and an irreducible unitary cuspidal automorphic representation \(\Pi\) of \(\operatorname{GL}_{n}(\mathbb{A}_{k})\) with the required properties in Theorem 4.5. We take a non-trivial additive character \(\Psi\) of \(\mathbb{A}_{k}/k\), and assume, as we may, that \(\Psi_{v_{0}}=\psi_{F}\). The global functional equation via the Langlands-Shahidi method can be read from [21, §4-(vi)] as

\[L^{S}(s,\Pi,\wedge^{2})=\Gamma_{\mathrm{LS}}(s,\Pi_{v_{0}},\wedge^{2},\Psi_{v_{0}}) \prod_{v\in S-\{v_{0}\}}\Gamma_{\mathrm{LS}}(s,\Pi_{v},\wedge^{2},\Psi_{v})L^{S }(1-s,\tilde{\Pi},\wedge^{2}). \tag{4.4}\]

Since \(\Pi_{v}\) and \(\Psi_{v}\) are unramified for \(v\notin S\), so that \(\varepsilon(s,\Pi_{v},\wedge^{2},\Psi_{v})\equiv 1\), [27, Theorem 3.3] can be rephrased as

\[L^{S}(s,\Pi,\wedge^{2})=\Gamma(s,\Pi_{v_{0}},\wedge^{2},\Psi_{v_{0}})\prod_{v \in S-\{v_{0}\}}\Gamma(s,\Pi_{v},\wedge^{2},\Psi_{v})L^{S}(1-s,\tilde{\Pi}, \wedge^{2}). \tag{4.5}\]

Applying Lemma 4.4 gives us \(\Gamma_{\mathrm{LS}}(s,\Pi_{v},\wedge^{2},\Psi_{v})=\Gamma(s,\Pi_{v},\wedge^ {2},\Psi_{v})\) for \(v\in S-\{v_{0}\}\). With this in hand, the result we seek is immediate once we divide (4.4) by (4.5).

Let \(\Gamma(s,\wedge^{2}(\varphi),\psi_{F})\) denote the Artin exterior square \(\gamma\)-factor and \(\Gamma(s,\varphi,\psi_{F})\) the Artin standard \(\gamma\)-factor. We then have the following result.

**Proposition 4.7**.: _[17, Proposition 6.2] For \((F,\varphi,\psi_{F})\) that is \(\mathrm{Del}\)-associated to \((F^{\prime},\varphi^{\prime},\psi^{\prime}_{F^{\prime}})\), we have_

\[\Gamma(s,\wedge^{2}(\varphi),\psi_{F})=\Gamma(s,\wedge^{2}(\varphi^{\prime}), \psi_{F^{\prime}}).\]

Let \(\rho(\varphi)\) be a level zero supercuspidal representation of \(\mathrm{GL}_{n}(F)\) obtained from a tamely ramified Weil representation \(\varphi\) of \(W_{F}\) of degree \(n\) via the local Langlands correspondence (LLC). The identity

\[\Gamma_{\mathrm{LS}}(s,\rho(\varphi),\wedge^{2},\psi_{F})=\Gamma(s,\wedge^{2} (\varphi),\psi_{F}) \tag{4.6}\]

relating the analytic \(\gamma\)-factor \(\Gamma_{\mathrm{LS}}(s,\rho,\wedge^{2},\psi_{F})\) with the corresponding Artin factor \(\Gamma(s,\wedge^{2}(\varphi),\psi_{F})\) has been established for non-archimedean local fields \(F\) of characteristic \(0\) in [12] and of positive characteristic in [21].
**Proposition 4.8**.: _For \((F,\rho,\psi_{F})\) that is \(\mathrm{Kaz}\)-associated to \((F^{\prime},\rho^{\prime},\psi_{F^{\prime}})\), we have_

\[\Gamma(s,\rho,\wedge^{2},\psi_{F})=\Gamma(s,\rho^{\prime},\wedge^{2},\psi_{F^ {\prime}})\quad\text{and}\quad\Gamma_{\mathrm{LS}}(s,\rho,\wedge^{2},\psi_{F} )=\Gamma_{\mathrm{LS}}(s,\rho^{\prime},\wedge^{2},\psi_{F^{\prime}}).\]

Proof.: We consider the first equality. Proposition 3.23 in [50] yields the equivalent condition that \(\mathrm{Hom}_{S_{2m}(F)}(\rho\otimes|\det(\cdot)|^{s/2},\Theta)\neq 0\) for some \(s\in\mathbb{C}\) if and only if \(\mathrm{Hom}_{S_{2m}(F^{\prime})}(\rho^{\prime}\otimes|\det(\cdot)|^{s^{ \prime}/2},\Theta)\neq 0\) for some \(s^{\prime}\in\mathbb{C}\). If this is the case, we use [50, Theorem 3.17] in conjunction with the fact that \(\omega_{\rho}(\varpi_{F})=\omega_{\rho^{\prime}}(\varpi_{F^{\prime}})\) to prove

\[\Gamma(s,\rho,\wedge^{2},\psi_{F})=\frac{q^{m(s-1/2)}}{\omega_{\rho}(\varpi_{ F})}\cdot\frac{L(m(1-s),\omega_{\rho}^{-1})}{L(ms,\omega_{\rho})}=\frac{q^{m(s-1/2)}}{ \omega_{\rho^{\prime}}(\varpi_{F^{\prime}})}\cdot\frac{L(m(1-s),\omega_{\rho^ {\prime}}^{-1})}{L(ms,\omega_{\rho^{\prime}})}=\Gamma(s,\rho^{\prime},\wedge^{2 },\psi_{F^{\prime}}). \tag{4.7}\]

Otherwise, owing to [50, Theorem 3.16], we are guided to

\[\Gamma(s,\rho,\wedge^{2},\psi_{F})=\gamma(\pi,\wedge^{2},\psi)=\Gamma(s,\rho^ {\prime},\wedge^{2},\psi_{F^{\prime}}). \tag{4.8}\]

Next, we deal with the second equality. For \((F,\varphi,\psi_{F})\) that is Del-associated to \((F^{\prime},\varphi^{\prime},\psi_{F^{\prime}})\), a similar notation \(\rho^{\prime}(\varphi^{\prime})\) applies to \(\varphi^{\prime}\). We know from Proposition 2.11 that \(\rho(\varphi)\) is \(\mathrm{Kaz}\)-associated to \(\rho^{\prime}(\varphi^{\prime})\), at which point Proposition 4.7 together with (4.6) completes the proof.

It is time to bring Deligne-Kazhdan close field theory and Theorem 4.6 back together to good use.

**Theorem 4.9**.: _Let \(\varphi\) be an \(n\)-dimensional tamely ramified Weil representation of \(W_{F}\) corresponding to the level zero supercuspidal representation \(\rho(\varphi)\) of \(\mathrm{GL}_{n}(F)\) via the Macdonald correspondence. Then we have_

\[\Gamma(s,\rho(\varphi),\wedge^{2},\psi_{F})=\Gamma(s,\wedge^{2}(\varphi),\psi_{ F}).\]

Proof.: Given a local field \(F^{\prime}\) of characteristic \(p\) and an integer \(m\geq 1\), there exists a local field \(F\) of characteristic \(0\) such that \(F^{\prime}\) is \(m\)-close to \(F\) [17, p.1123]. The converse also holds for \(m=1\). Specifically, for a field \(F\) of characteristic \(0\), its residue field \(\mathfrak{o}_{F}/\mathfrak{p}_{F}\) is isomorphic to \(\mathbb{F}_{q}\) with \(q=p^{k}\) for some prime \(p\) and integer \(k\geq 1\). Then we take \(F^{\prime}\) to be \(\mathbb{F}_{q}((t))\) of characteristic \(p\). Now for a \(p\)-adic field \(F\), Theorem 4.6 and Proposition 4.8 allow us to deduce \(\Gamma(s,\rho,\wedge^{2},\psi_{F})=\Gamma_{\mathrm{LS}}(s,\rho,\wedge^{2}, \psi_{F})\). The desired equality then simply follows from (4.6).
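For instance (a standard example, recorded here for the reader's convenience), the \(p\)-adic field \(\mathbb{Q}_{p}\) is \(1\)-close to \(\mathbb{F}_{p}((t))\): both have residue field \(\mathbb{F}_{p}\), since
\[\mathbb{Z}_{p}/p\mathbb{Z}_{p}\cong\mathbb{F}_{p}\cong\mathbb{F}_{p}[[t]]/(t).\]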
## 5. The Bump-Friedberg Gamma Factor

### The Bump-Friedberg sum

We define the embedding \(J:\operatorname{GL}_{m}\times\operatorname{GL}_{m}\to\operatorname{GL}_{n}\) by

\[J(g,g^{\prime})_{k,l}=\begin{cases}g_{i,j}&\text{if $k=2i-1$, $l=2j-1$},\\ g^{\prime}_{i,j}&\text{if $k=2i$, $l=2j$},\\ 0&\text{otherwise},\end{cases}\]

for \(n=2m\) even and \(J:\operatorname{GL}_{m+1}\times\operatorname{GL}_{m}\to\operatorname{GL}_{n}\) by

\[J(g,g^{\prime})_{k,l}=\begin{cases}g_{i,j}&\text{if $k=2i-1$, $l=2j-1$},\\ g^{\prime}_{i,j}&\text{if $k=2i$, $l=2j$},\\ 0&\text{otherwise},\end{cases}\]

for \(n=2m+1\) odd. We denote by \(M_{m,m}\) the standard Levi subgroup of \(\operatorname{GL}_{2m}\) associated with the partition \((m,m)\) of \(2m\). Let \(w_{m,m}=\sigma_{2m}\) and then we set \(H_{2m}=w_{m,m}M_{m,m}w_{m,m}^{-1}\). Let \(w_{m+1,m}=w_{m+1,m+1}|_{\operatorname{GL}_{2m+1}}\) so that

\[w_{m+1,m}=\begin{pmatrix}1&2&\cdots&m+1&\big{|}&m+2&m+3&\cdots&2m&2m+1\\ 1&3&\cdots&2m+1&\big{|}&2&4&\cdots&2m-2&2m\end{pmatrix}.\]

In the odd case, \(w_{m+1,m}\neq\sigma_{2m+1}\). We let \(M_{m+1,m}\) denote the standard Levi subgroup of \(\operatorname{GL}_{2m+1}\) associated with the partition \((m+1,m)\) of \(2m+1\). We set \(H_{2m+1}=w_{m+1,m}M_{m+1,m}w_{m+1,m}^{-1}\). The reason for introducing the auxiliary elements \(w_{m,m}\) and \(w_{m+1,m}\) is that \(J(g,g^{\prime})=w_{m,m}\mathrm{diag}(g,g^{\prime})w_{m,m}^{-1}\) for \(\mathrm{diag}(g,g^{\prime})\in M_{m,m}\), and \(J(g,g^{\prime})=w_{m+1,m}\mathrm{diag}(g,g^{\prime})w_{m+1,m}^{-1}\) for \(\mathrm{diag}(g,g^{\prime})\in M_{m+1,m}\). We emphasize that \(H_{n}\) is compatible with intersections in the sense that \(H_{n}\cap\operatorname{GL}_{n-1}=H_{n-1}\).

Let \(\pi\) be an irreducible cuspidal representation of \(\operatorname{GL}_{n}(\mathbb{F})\). For \(W_{\pi}\in\mathcal{W}(\pi,\psi)\) and \(\phi\in\mathcal{S}_{0}(\mathbb{F}^{\lfloor(n+1)/2\rfloor})\), we define the Bump-Friedberg sum as

\[Z(W_{\pi},\phi):=\sum_{g\in N_{m}(\mathbb{F})\setminus\operatorname{GL}_{m}( \mathbb{F})}\sum_{g^{\prime}\in N_{m}(\mathbb{F})\setminus\operatorname{GL}_{ m}(\mathbb{F})}W_{\pi}(J(g,g^{\prime}))\phi(e_{m}g^{\prime}), \tag{5.1}\]

in the even case \(n=2m\) and

\[Z(W_{\pi},\phi):=\sum_{g\in N_{m+1}(\mathbb{F})\setminus\operatorname{GL}_{m+ 1}(\mathbb{F})}\sum_{g^{\prime}\in N_{m}(\mathbb{F})\setminus\operatorname{ GL}_{m}(\mathbb{F})}W_{\pi}(J(g,g^{\prime}))\phi(e_{m+1}g), \tag{5.2}\]

in the odd case \(n=2m+1\). Similarly, we define the dual Bump-Friedberg sum as

\[\check{Z}(W_{\pi},\phi):=\sum_{g\in N_{m}(\mathbb{F})\setminus\operatorname{ GL}_{m}(\mathbb{F})}\sum_{g^{\prime}\in N_{m}(\mathbb{F})\setminus \operatorname{GL}_{m}(\mathbb{F})}\check{W}_{\pi}(J(g,g^{\prime}))\mathcal{F}_{ \psi}(\phi)(e_{m}g^{\prime}),\]

in the even case \(n=2m\), and

\[\check{Z}(W_{\pi},\phi):=\sum_{g\in N_{m+1}(\mathbb{F})\setminus\operatorname{ GL}_{m+1}(\mathbb{F})}\sum_{g^{\prime}\in N_{m}(\mathbb{F})\setminus \operatorname{GL}_{m}(\mathbb{F})}\check{W}_{\pi}(J(g,g^{\prime}))\mathcal{F}_{ \psi}(\phi)(e_{m+1}g),\]

in the odd case \(n=2m+1\).
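To illustrate the interleaving embedding \(J\) (a small worked example of ours), take \(n=3\), so that \(m=1\), \(g\in\operatorname{GL}_{2}\), and \(g^{\prime}\in\operatorname{GL}_{1}\); then
\[J(g,g^{\prime})=\begin{pmatrix}g_{1,1}&0&g_{1,2}\\ 0&g^{\prime}&0\\ g_{2,1}&0&g_{2,2}\end{pmatrix}.\]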
**Lemma 5.1**.: _Let \(\pi\) be an irreducible cuspidal representation of \(\operatorname{GL}_{n}(\mathbb{F})\). Then we have_

\[\check{Z}(W_{\pi},\phi)=\sum_{g\in N_{m}(\mathbb{F})\setminus\operatorname{GL}_{ m}(\mathbb{F})}\sum_{g^{\prime}\in N_{m}(\mathbb{F})\setminus\operatorname{GL}_{m}( \mathbb{F})}W_{\pi}\left(\sigma_{2m}\begin{pmatrix}&g^{\prime}\\ g&\end{pmatrix}\sigma_{2m}^{-1}\right)\mathcal{F}_{\psi}(\phi)(e_{1}{}^{t}g^{ \prime-1})\]

_for \(n=2m\) even and_

\[\check{Z}(W_{\pi},\phi)=\sum_{g\in N_{m+1}(\mathbb{F})\setminus\operatorname{ GL}_{m+1}(\mathbb{F})}\sum_{g^{\prime}\in N_{m}(\mathbb{F})\setminus\operatorname{GL}_{ m}(\mathbb{F})}W_{\pi}(J(g,g^{\prime}))\mathcal{F}_{\psi}(\phi)(e_{1}{}^{t}g^{-1})\]

_for \(n=2m+1\) odd._

Proof.: We begin with inserting the identity (2.1) for \(\check{W}_{\pi}\). Subsequently, we make the change of variables \(g\mapsto w_{n}{}^{t}g^{-1}w_{n},g^{\prime}\mapsto w_{n}{}^{t}g^{\prime-1}w_{n}\), and then \(g\mapsto gw_{n},g^{\prime}\mapsto g^{\prime}w_{n}\) to arrive at the result.

We aim to prove the functional equation \(\gamma(\pi,\operatorname{BF},\psi)Z(W_{\pi},\phi)=\check{Z}(W_{\pi},\phi)\) satisfied by the Bump-Friedberg sum \(Z(W_{\pi},\phi)\). This allows us to define the _Bump-Friedberg gamma factor_ \(\gamma(\pi,\operatorname{BF},\psi)\) of an irreducible cuspidal representation \(\pi\) of \(\operatorname{GL}_{n}(\mathbb{F})\).

**Theorem 5.2**.: _Let \(\pi\) be an irreducible cuspidal representation of \(\operatorname{GL}_{n}(\mathbb{F})\). For every \(W_{\pi}\in\mathcal{W}(\pi,\psi)\) and for any \(\phi\in\mathcal{S}_{0}(\mathbb{F}^{\lfloor(n+1)/2\rfloor})\), there exists a complex number \(\gamma(\pi,\operatorname{BF},\psi)\) such that_

\[\gamma(\pi,\operatorname{BF},\psi)\sum_{g\in N_{m}(\mathbb{F}) \setminus\operatorname{GL}_{m}(\mathbb{F})}\sum_{g^{\prime}\in N_{m}(\mathbb{F })\setminus\operatorname{GL}_{m}(\mathbb{F})}W_{\pi}\left(\sigma_{2m} \begin{pmatrix}&g\\ g^{\prime}&\end{pmatrix}\sigma_{2m}^{-1}\right)\phi(e_{m}g^{\prime})\\ =\sum_{g\in N_{m}(\mathbb{F})\setminus\operatorname{GL}_{m}( \mathbb{F})}\sum_{g^{\prime}\in N_{m}(\mathbb{F})\setminus\operatorname{GL}_{ m}(\mathbb{F})}W_{\pi}\left(\sigma_{2m}\begin{pmatrix}&g^{\prime}\\ g&\end{pmatrix}\sigma_{2m}^{-1}\right)\mathcal{F}_{\psi}(\phi)(e_{1}{}^{t}g^{ \prime-1}),\]

_in the even case \(n=2m\) and_

\[\gamma(\pi,\operatorname{BF},\psi)\sum_{g\in N_{m+1}(\mathbb{F}) \setminus\operatorname{GL}_{m+1}(\mathbb{F})}\sum_{g^{\prime}\in N_{m}( \mathbb{F})\setminus\operatorname{GL}_{m}(\mathbb{F})}W_{\pi}(J(g,g^{\prime}) )\phi(e_{m+1}g)\\ =\sum_{g\in N_{m+1}(\mathbb{F})\setminus\operatorname{GL}_{m+1}( \mathbb{F})}\sum_{g^{\prime}\in N_{m}(\mathbb{F})\setminus\operatorname{GL}_{ m}(\mathbb{F})}W_{\pi}(J(g,g^{\prime}))\mathcal{F}_{\psi}(\phi)(e_{1}{}^{t}g^{-1}),\]

_in the odd case \(n=2m+1\)._

Proof.: It can be checked from Lemma 5.1 that the bilinear forms \(B_{1}:(W_{\pi},\phi)\mapsto Z(W_{\pi},\phi)\) and \(B_{2}:(W_{\pi},\phi)\mapsto\check{Z}(W_{\pi},\phi)\) belong to the space \(\operatorname{Hom}_{H_{n}(\mathbb{F})}(\pi\upharpoonright_{H_{n}(\mathbb{F})} \otimes\mathcal{S}_{0}(\mathbb{F}^{\lfloor(n+1)/2\rfloor}),\mathbf{1}_{H_{n}( \mathbb{F})})\).
Hence it suffices to show that such bilinear forms \(B_{1}\) and \(B_{2}\) differ only by a scalar, that is to say,

\[\dim\operatorname{Hom}_{H_{n}(\mathbb{F})}(\pi\upharpoonright_{H_{n}(\mathbb{F })}\otimes\mathcal{S}_{0}(\mathbb{F}^{\lfloor(n+1)/2\rfloor}),\mathbf{1}_{H_{n }(\mathbb{F})})\leq 1.\]

We identify \(P_{n}(\mathbb{F})\cap H_{n}(\mathbb{F})\backslash H_{n}(\mathbb{F})\) with \(\mathbb{F}^{\lfloor(n+1)/2\rfloor}-\{0\}\), and then use the Frobenius reciprocity to deduce that

\[\operatorname{Hom}_{H_{n}(\mathbb{F})}(\pi\upharpoonright_{H_{n}( \mathbb{F})}\otimes\mathcal{S}_{0}(\mathbb{F}^{\lfloor(n+1)/2\rfloor}), \mathbf{1}_{H_{n}(\mathbb{F})}) \cong\operatorname{Hom}_{H_{n}(\mathbb{F})}(\pi\upharpoonright_{H _{n}(\mathbb{F})}\otimes\operatorname{Ind}_{P_{n}(\mathbb{F})\cap H_{n}( \mathbb{F})}^{H_{n}(\mathbb{F})}(\mathbf{1}),\mathbf{1}_{H_{n}(\mathbb{F})})\] \[\cong\operatorname{Hom}_{P_{n}(\mathbb{F})\cap H_{n}(\mathbb{F })}(\pi\upharpoonright_{P_{n}(\mathbb{F})\cap H_{n}(\mathbb{F})},\mathbf{1}_{P_{n }(\mathbb{F})\cap H_{n}(\mathbb{F})}).\]

After suitably conjugating \(P_{n}(\mathbb{F})\cap H_{n}(\mathbb{F})\) by Weyl elements, the last space is at most one-dimensional by [50, Theorem 2.22] in the even case \(n=2m\) and [50, Theorem 2.25] in the odd case \(n=2m+1\).

In the course of the proof of the preceding theorem, we get the following multiplicity one result as a byproduct, which is used in the proof of Lemma 5.7.

**Corollary 5.3** (Multiplicity one result).: _Let \(\pi\) be an irreducible cuspidal representation of \(\operatorname{GL}_{n}(\mathbb{F})\). Then we have_

\[\dim\operatorname{Hom}_{P_{n}(\mathbb{F})\cap H_{n}(\mathbb{F})}(\pi\upharpoonright_ {P_{n}(\mathbb{F})\cap H_{n}(\mathbb{F})},\mathbf{1}_{P_{n}(\mathbb{F})\cap H_{ n}(\mathbb{F})})\leq 1.\]

### The Friedberg-Jacquet period

Let \(\rho\) be an irreducible supercuspidal representation of \(\operatorname{GL}_{2m}(F)\). As in (2.5), the representation \(\rho\) is called \(H_{2m}(F)\)-_distinguished_ or _distinguished with respect to \(H_{2m}(F)\)_ if \(\operatorname{Hom}_{H_{2m}(F)}(\rho\upharpoonright_{H_{2m}(F)},\mathbf{1}_{H_{2 m}(F)})\neq 0\). We also say that \(\rho\) admits a _Friedberg-Jacquet period_ if \(\rho\) is \(H_{2m}(F)\)-distinguished. Let \(\ell\) and \(\ell^{\prime}\) be the linear forms on \(\mathcal{W}(\rho,\psi_{F})\) defined by

\[\ell:W_{\rho}\mapsto Z_{(0)}(1/2,W_{\rho}):=\int_{N_{m}(F)\setminus P_{m}(F)} \int_{N_{m}(F)\setminus\operatorname{GL}_{m}(F)}W_{\rho}(J(g,p^{\prime}))\, dgdp^{\prime}\]

and

\[\ell^{\prime}:W_{\rho}\mapsto Z_{(0)}(1/2,\tilde{W}_{\rho}):=\int_{N_{m}(F) \setminus P_{m}(F)}\int_{N_{m}(F)\setminus\operatorname{GL}_{m}(F)}\tilde{ W}_{\rho}(J(g,p^{\prime}))\,dgdp^{\prime}.\]

**Lemma 5.4**.: _Let \(\rho\) be an irreducible supercuspidal representation of \(\operatorname{GL}_{2m}(F)\) which is distinguished with respect to \(H_{2m}(F)\). Then there exists a non-zero constant \(c(\rho)\in\mathbb{C}^{\times}\), which is independent of \(\psi_{F}\), such that \(\ell^{\prime}=c(\rho)\ell\)._

Proof.: We know from [28, Proposition 5.14] that \(L(s,\rho,\operatorname{BF})\) is holomorphic at \(s=1/2\), since \(\rho\) is assumed to be distinguished with respect to \(H_{2m}(F)\). As a consequence, all the integrals \(Z_{(0)}(s,W_{\rho})\) are holomorphic at \(s=1/2\), from which it follows that the linear forms \(\ell\) and \(\ell^{\prime}\) are well-defined. Since \(\rho\) is \(H_{2m}(F)\)-distinguished, \(\check{\rho}\) is also \(H_{2m}(F)\)-distinguished.
Taking

\[\operatorname{Hom}_{H_{2m}(F)}(\rho\upharpoonright_{H_{2m}(F)},\mathbf{1}_{H_ {2m}(F)})=\operatorname{Hom}_{P_{2m}(F)\cap H_{2m}(F)}(\rho\upharpoonright_{P_{ 2m}(F)\cap H_{2m}(F)},\mathbf{1}_{P_{2m}(F)\cap H_{2m}(F)})\]

into account, \(\ell\) is an \(H_{2m}(F)\)-invariant functional on \(\mathcal{W}(\rho,\psi_{F})\) and the integral in the linear form \(\ell^{\prime}\) is an \(H_{2m}(F)\)-invariant functional on \(\mathcal{W}(\check{\rho},\psi_{F}^{-1})\). As \(\check{\rho}\cong\rho^{\iota}\) where \(\rho^{\iota}(g)=\rho(^{t}g^{-1})\), \(\ell^{\prime}\) gives an \(H_{2m}(F)\)-invariant functional on \(\rho\) as well. Using the multiplicity one result for \(H_{2m}(F)\)-invariant linear functionals accompanied by [41, Remark 3], this yields that the two linear forms \(\ell\) and \(\ell^{\prime}\) differ by a non-zero scalar \(c(\rho)\) which depends only on the representation \(\rho\).

Let \(\varepsilon(s,\rho,\psi_{F})\) be the standard \(\varepsilon\)-factor defined by Godement and Jacquet [19]. The local constant takes the form \(\varepsilon(s,\rho,\psi_{F})=\varepsilon(0,\rho,\psi_{F})q^{-f(\rho,\psi_{F})s}\), where \(f(\rho,\psi_{F})=-n+f(\rho)\) for a non-negative integer \(f(\rho)\) that does not depend on the choice of \(\psi_{F}\) [10]. We shall primarily be interested in the special value of \(\varepsilon(s,\rho,\psi_{F})\) at \(s=1/2\). Although we narrow it down to \(\{\pm 1\}\) for distinguished representations \(\rho\), it is challenging to determine the sign of the root number \(\varepsilon(1/2,\rho,\psi_{F})\). Such questions are known as _distinction problems_ [41]. It is our belief that \(\varepsilon(1/2,\rho,\psi_{F})=1\) for distinguished representations \(\rho\), but we leave this aside, as it would be a digression from the main theorem of this paper.

**Lemma 5.5**.: _Let \(\rho\) be an irreducible supercuspidal representation of \(\operatorname{GL}_{2m}(F)\) which is distinguished with respect to \(H_{2m}(F)\). Then we have \(\varepsilon(1/2,\rho,\psi_{F}^{\flat})=\varepsilon(1/2,\rho,\psi_{F})\in\{\pm 1\}\). In particular, if \(\rho\) is a level zero supercuspidal representation of \(\operatorname{GL}_{2m}(F)\), then we get_

\[\varepsilon(1/2,\rho,\psi_{F}^{\flat})=\varepsilon(1/2,\rho,\psi_{F})= \varepsilon(s,\rho,\psi_{F})\in\{\pm 1\}.\]

Proof.: Appealing to [22, Theorem 2.2-(iv)] with the observation that the central character \(\omega_{\rho}\) of the distinguished representation \(\rho\) is trivial, the central value \(\varepsilon(1/2,\rho,\psi_{F})\) does not depend on the choice of \(\psi_{F}\). With the choice of a level one additive character \(\psi_{F}\), we recall from [10, §6.1] that the level zero supercuspidal representations are of conductor zero, so that their corresponding epsilon factors \(\varepsilon(s,\rho,\psi_{F})\) are complex numbers instead of rational functions in \(q^{s}\). In this way, we see that \(\varepsilon(1/2,\rho,\psi_{F})=\varepsilon(s,\rho,\psi_{F})\).

Let us make a straightforward observation. The \(\varepsilon\)-factor satisfies the identity \(\varepsilon(s,\rho,\psi_{F})\varepsilon(1-s,\check{\rho},\psi_{F}^{-1})=1\). Since \(\rho\) is self-contragredient, that is to say \(\rho\cong\check{\rho}\) [27, Theorems 3.5 and 4.1], it is clear from the fact \(\varepsilon(s,\check{\rho},\psi_{F}^{-1})=\varepsilon(s,\rho,\psi_{F})\) that

\[\varepsilon(1/2,\rho,\psi_{F})^{2}=1.\]

Thereupon we conclude that \(\varepsilon(1/2,\rho,\psi_{F})\in\{\pm 1\}\), as claimed.
Let \(W^{\rm ess}_{\rho}\) be the essential Whittaker function defined by Jacquet-Piatetski-Shapiro-Shalika and \(\rho_{\rm ur}\) a certain unramified standard module attached to \(\rho\). We refer the reader to [28, §2] for precise definitions of these objects. By evaluating the essential Whittaker function \(W^{\rm ess}_{\rho}\) in \(\mathcal{W}(\rho,\psi^{\flat}_{F})\), we specify the constant \(c(\rho)\).

**Proposition 5.6**.: _Let \(\rho\) be an irreducible supercuspidal representation of \({\rm GL}_{2m}(F)\) which is distinguished with respect to \(H_{2m}(F)\). Then we have \(\ell^{\prime}=\varepsilon(s,\rho,\psi_{F})\ell\). In particular, if \(\rho\) is a level zero supercuspidal representation of \({\rm GL}_{2m}(F)\), then we get_

\[Z(\check{W}_{\pi},\mathbf{1}_{\mathbb{F}^{m}})=\varepsilon(s,\rho,\psi_{F})Z( W_{\pi},\mathbf{1}_{\mathbb{F}^{m}}).\]

Proof.: Since \(c(\rho)\) does not depend on the choice of \(\psi_{F}\), we may take \(\psi_{F}\) to be \(\psi^{\flat}_{F}\). Upon using [3, Proposition 4.4] in conjunction with Lemma 5.5, and then making the change of variables \(g\mapsto g\begin{pmatrix}\varpi^{-f(\rho)}1_{n-1}&\\ &1\end{pmatrix}\), we are led to

\[\ell^{\prime}(W^{\rm ess}_{\rho})=\ell(\check{W}^{\rm ess}_{\rho})= \varepsilon(1/2,\rho,\psi^{\flat}_{F})^{2m-1}\ell\left(\check{\rho}\begin{pmatrix} \varpi^{f(\rho)}1_{n-1}&\\ &1\end{pmatrix}W^{\rm ess}_{\check{\rho}}\right)=\varepsilon(s,\rho,\psi_{F}) \ell(W^{\rm ess}_{\check{\rho}}).\]

With the help of [28, Theorem 5.15], we deduce from the self-contragredience of \(\rho\) that

\[\ell(W^{\rm ess}_{\check{\rho}})=L(1/2,\check{\rho}_{\rm ur})L(1,\check{\rho}_ {\rm ur},\wedge^{2})=L(1/2,\rho_{\rm ur})L(1,\rho_{\rm ur},\wedge^{2})=\ell(W^ {\rm ess}_{\rho}).\]

To sum up, we obtain \(\ell^{\prime}(W^{\rm ess}_{\rho})=\varepsilon(s,\rho,\psi_{F})\ell(W^{\rm ess} _{\rho})\), from which we conclude that \(c(\rho)=\varepsilon(s,\rho,\psi_{F})\).

We assume that \(\rho\) is a level zero supercuspidal representation constructed from \(\pi\) and choose \(W_{\rho}\) to be \(W^{\circ}_{\rho}\). Since the support of \(W^{\circ}_{\rho}\) is contained in \(N_{2m}(F)F^{\times}K_{2m}\), we arrive at (cf. [40, Theorem 3.1])

\[\ell(W^{\circ}_{\rho})=\int_{N_{m}(F)\cap K_{m}\setminus K_{m}} \int_{N_{m}(F)\cap K_{m}\setminus K_{m}}W^{\circ}_{\rho}(J(k,k^{\prime}))\, dkdk^{\prime}\\ =\operatorname{Vol}(N_{m}(\mathfrak{o})(1_{m}+\mathcal{M}_{m}( \mathfrak{p})))^{2}\sum_{g\in N_{m}(\mathbb{F})\setminus\operatorname{GL}_{m} (\mathbb{F})}\sum_{g^{\prime}\in N_{m}(\mathbb{F})\setminus\operatorname{GL} _{m}(\mathbb{F})}W_{\pi}(J(g,g^{\prime}))\\ =\operatorname{Vol}(N_{m}(\mathfrak{o})(1_{m}+\mathcal{M}_{m}( \mathfrak{p})))^{2}Z(W_{\pi},\mathbf{1}_{\mathbb{F}^{m}}),\]

and similarly we have \(\ell^{\prime}(W^{\circ}_{\rho})=\operatorname{Vol}(N_{m}(\mathfrak{o})(1_{m}+ \mathcal{M}_{m}(\mathfrak{p})))^{2}Z(\check{W}_{\pi},\mathbf{1}_{ \mathbb{F}^{m}})\). The common volume term is cancelled out, and we are left with \(Z(\check{W}_{\pi},\mathbf{1}_{\mathbb{F}^{m}})=\varepsilon(s,\rho,\psi_{F})Z( W_{\pi},\mathbf{1}_{\mathbb{F}^{m}})\), as required.

A non-zero vector \(v\in V_{\pi}\) is called a _Friedberg-Jacquet vector_ if \(\pi(h)v=v\) for every \(h\in H_{2m}(\mathbb{F})\). We now characterize the existence of Friedberg-Jacquet vectors in terms of the non-vanishing of a certain sum.

**Lemma 5.7**.: _Let \(\pi\) be an irreducible cuspidal representation of \({\rm GL}_{n}(\mathbb{F})\) with \(n=2m\) even.
Then \(\pi\) admits a Friedberg-Jacquet vector if and only if there exists \(W_{\pi}\in\mathcal{W}(\pi,\psi)\) such that_

\[\sum_{g\in N_{m}(\mathbb{F})\setminus\operatorname{GL}_{m}(\mathbb{F})}\sum_{g ^{\prime}\in N_{m}(\mathbb{F})\setminus\operatorname{GL}_{m}(\mathbb{F})}W_{ \pi}(J(g,g^{\prime}))\neq 0.\]

Proof.: We assume that \(\pi\) has a non-zero Friedberg-Jacquet vector. We equip \(\mathcal{W}(\pi,\psi)\) with an inner product \((\cdot,\cdot)\) in which \(\pi\) is unitary. We define \(W_{\rm FJ}\in\mathcal{W}(\pi,\psi)\) by

\[W_{\rm FJ}(g)=\frac{1}{|H_{n}(\mathbb{F})|}\sum_{p\in P_{n}(\mathbb{F})\cap H_{ n}(\mathbb{F})}\mathcal{B}_{\pi,\psi}(gp)\]

for \(g\in\operatorname{GL}_{n}(\mathbb{F})\). Taking advantage of the average, we see that \(W_{\rm FJ}(gh)=W_{\rm FJ}(g)\) for all \(h\in P_{n}(\mathbb{F})\cap H_{n}(\mathbb{F})\). Using the inclusion \(\operatorname{Hom}_{H_{n}(\mathbb{F})}(\pi\upharpoonright_{H_{n}(\mathbb{F})},\mathbf{1})\subseteq\operatorname{Hom}_{P_{n}(\mathbb{F})\cap H_{n}(\mathbb{ F})}(\pi\upharpoonright_{P_{n}(\mathbb{F})\cap H_{n}(\mathbb{F})},\mathbf{1})\), we deduce the equality \(\operatorname{Hom}_{H_{n}(\mathbb{F})}(\pi\upharpoonright_{H_{n}(\mathbb{F})},\mathbf{1})=\operatorname{Hom}_{P_{n}(\mathbb{F})\cap H_{n}(\mathbb{F})}(\pi \upharpoonright_{P_{n}(\mathbb{F})\cap H_{n}(\mathbb{F})},\mathbf{1})\) from the one-dimensionality of both spaces (Corollary 5.3). In this way, \(W_{\rm FJ}\) produces an element \(T_{W_{\rm FJ}}\) of \(\operatorname{Hom}_{H_{n}(\mathbb{F})}(\pi\upharpoonright_{H_{n}(\mathbb{F})},\mathbf{1})\) given by \(T_{W_{\mathrm{FJ}}}(W^{\prime})=(W^{\prime},W_{\mathrm{FJ}})\) for \(W^{\prime}\in\mathcal{W}(\pi,\psi)\), from which it follows that \(W_{\mathrm{FJ}}\) is a Friedberg-Jacquet vector. Furthermore, the given summation does not vanish at \(W_{\mathrm{FJ}}\), by [38, Lemma 2.7].

Conversely, we assume that there exists \(W_{\pi}\in\mathcal{W}(\pi,\psi)\) such that

\[\sum_{g\in N_{m}(\mathbb{F})\setminus\operatorname{GL}_{m}(\mathbb{F})}\sum_ {g^{\prime}\in N_{m}(\mathbb{F})\setminus\operatorname{GL}_{m}(\mathbb{F})}W _{\pi}(J(g,g^{\prime}))\neq 0.\]

We define \(W_{\mathrm{FJ}}^{\sharp}\in\mathcal{W}(\pi,\psi)\) by

\[W_{\mathrm{FJ}}^{\sharp}(h)=\frac{1}{|N_{n}(\mathbb{F})\cap H_{n}(\mathbb{F}) |}\sum_{g\in H_{n}(\mathbb{F})}W_{\pi}(hg).\]

Combining

\[W_{\mathrm{FJ}}^{\sharp}(1_{n})=\sum_{g\in N_{m}(\mathbb{F})\setminus \operatorname{GL}_{m}(\mathbb{F})}\sum_{g^{\prime}\in N_{m}(\mathbb{F}) \setminus\operatorname{GL}_{m}(\mathbb{F})}W_{\pi}(J(g,g^{\prime}))\neq 0\]

along with the quasi-invariance property that \(W_{\mathrm{FJ}}^{\sharp}(hh^{\prime})=W_{\mathrm{FJ}}^{\sharp}(h)\) for all \(h^{\prime}\in H_{n}(\mathbb{F})\), \(W_{\mathrm{FJ}}^{\sharp}\) is the non-zero Friedberg-Jacquet vector that we seek.

### Bump-Friedberg integrals and close field theory

Let \(\rho\) be a level zero supercuspidal representation of \(\operatorname{GL}_{n}(F)\) constructed from an irreducible cuspidal representation \(\pi\) of \(\operatorname{GL}_{n}(\mathbb{F})\), with its attached Whittaker model \(\mathcal{W}(\rho,\psi_{F})\).
For \(W_{\rho}\in\mathcal{W}(\rho,\psi_{F})\) and \(\Phi\in\mathcal{S}(F^{\lfloor(n+1)/2\rfloor})\), we define the _Bump-Friedberg integral_ \(Z(s_{1},s_{2},W_{\rho},\Phi)\) by

\[\int_{N_{m}(F)\setminus\operatorname{GL}_{m}(F)}\int_{N_{m}(F)\setminus \operatorname{GL}_{m}(F)}W_{\rho}(J(g,g^{\prime}))\Phi(e_{m}g^{\prime})\left| \det g\right|^{s_{1}-1/2}\left|\det g^{\prime}\right|^{1/2+s_{2}-s_{1}}\,dg^ {\prime}dg\]

for \(n=2m\) even and

\[\int_{N_{m+1}(F)\setminus\operatorname{GL}_{m+1}(F)}\int_{N_{m}(F)\setminus \operatorname{GL}_{m}(F)}W_{\rho}(J(g,g^{\prime}))\Phi(e_{m+1}g)\left|\det g \right|^{s_{1}}\left|\det g^{\prime}\right|^{s_{2}-s_{1}}\,dg^{\prime}dg\]

for \(n=2m+1\) odd. For the sake of coherence with [35, 36], we introduce further notation. For a complex number \(t\in\mathbb{C}\), we denote by \(\delta_{t}\) the character defined by

\[\delta_{t}:J(g,g^{\prime})\mapsto\left|\frac{\det g}{\det g^{\prime}}\right|^{ t}.\]

We denote by \(\chi_{n}\) the character of \(H_{n}(F)\) given by

\[\chi_{n}:J(g,g^{\prime})\mapsto\begin{cases}\mathbf{1}_{H_{n}(F)}&\text{for }n=2m ;\\ \left|\frac{\det g}{\det g^{\prime}}\right|&\text{for }n=2m+1.\end{cases}\]

In particular, we are interested in the case \(s_{1}=s+t+1/2\) and \(s_{2}=2s\). In terms of \(\delta_{t}\), \(\chi_{n}\), \(s\), and \(t\), these Bump-Friedberg integrals, which depend on the parity of \(n\), can be put into a single integral as

\[Z(s,t,W_{\rho},\Phi)=\int_{(N_{n}(F)\cap H_{n}(F))\setminus H_{n}(F)}W_{\rho} (h)\Phi(e_{n}h)\chi_{n}^{1/2}(h)\delta_{t}(h)\left|\det h\right|^{s}dh.\]

The integral converges absolutely for \(\operatorname{Re}(s)\) and \(\operatorname{Re}(t)\) sufficiently large [36, Proposition 3.1], and it enjoys a meromorphic continuation to \(\mathbb{C}\times\mathbb{C}\) as an element of \(\mathbb{C}(q^{-s},q^{-t})\). There exists a rational function \(\Gamma(s,t,\rho,\mathrm{BF},\psi_{F})\) in \(\mathbb{C}(q^{-s},q^{-t})\) such that for every \(W_{\rho}\) in \(\mathcal{W}(\rho,\psi_{F})\) and \(\Phi\) in \(\mathcal{S}(F^{\lfloor(n+1)/2\rfloor})\), we have the following functional equation [36, Corollary 3.2]:

\[Z(1/2-s,-1/2-t,\check{W}_{\rho},\mathcal{F}_{\psi_{F}}(\Phi))=\Gamma(s,t,\rho, \mathrm{BF},\psi_{F})Z(s,t,W_{\rho},\Phi). \tag{5.3}\]

For our purpose, it will often be convenient to write \(Z(s,W_{\rho},\Phi)\) and \(\Gamma(s,\rho,\mathrm{BF},\psi_{F})\) in place of \(Z(s,0,W_{\rho},\Phi)\) and \(\Gamma(s,0,\rho,\mathrm{BF},\psi_{F})\), respectively. The _local Bump-Friedberg \(L\)-function_ \(L(s,\rho,\mathrm{BF})\) is the generator of the \(\mathbb{C}[q^{\pm s}]\)-fractional ideal of Bump-Friedberg integrals \(Z(s,W_{\rho},\Phi)\) with \(W_{\rho}\in\mathcal{W}(\rho,\psi_{F})\) and \(\Phi\in\mathcal{S}(F^{\lfloor(n+1)/2\rfloor})\), normalized to be of the form \(P(q^{-s})\) for some \(P(X)\in\mathbb{C}[X]\) satisfying \(P(0)=1\).
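Returning to the single integral above, we record a quick consistency check (ours): for \(n=2m\) even, \(\chi_{n}\) is trivial and, for \(h=J(g,g^{\prime})\), we have \(\det h=\det g\det g^{\prime}\), so that
\[\delta_{t}(h)\left|\det h\right|^{s}=\left|\det g\right|^{s+t}\left|\det g^{\prime}\right|^{s-t}=\left|\det g\right|^{s_{1}-1/2}\left|\det g^{\prime}\right|^{1/2+s_{2}-s_{1}},\]
in agreement with the exponents in the original definition, since \(s_{1}-1/2=s+t\) and \(1/2+s_{2}-s_{1}=s-t\).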
**Proposition 5.8** (The modified functional equation).: _Let \(\pi\) be an irreducible cuspidal representation of \(\mathrm{GL}_{2m}(\mathbb{F})\). Then for every \(W_{\pi}\in\mathcal{W}(\pi,\psi)\), \(\phi\in\mathcal{S}(\mathbb{F}^{m})\), and \(s\in\mathbb{C}\), there exists \(\Gamma(s,t,\rho,\mathrm{BF},\psi_{F})\) such that_

\[\check{Z}(W_{\pi},\phi)+q^{-m(1-2s)} \omega_{\rho}^{-1}(\varpi)\mathcal{F}_{\psi}(\phi)(0)L(m(1-2s), \omega_{\rho}^{-1})\varepsilon(s,\rho,\psi_{F})Z(W_{\pi},\mathbf{1}_{ \mathbb{F}^{m}})\] \[=\Gamma(s,t,\rho,\mathrm{BF},\psi_{F})(Z(W_{\pi},\phi)+q^{-2ms} \omega_{\rho}(\varpi)\phi(0)L(2ms,\omega_{\rho})Z(W_{\pi},\mathbf{1}_{ \mathbb{F}^{m}})).\]

Proof.: The computations for the two sides are quite similar. For this reason, we give a detailed proof only for the dual integral \(Z(1/2-s,-1/2-t,\check{W}_{\rho}^{\circ},\mathcal{F}_{\psi_{F}}(\Phi_{\circ}))\). The rationale for our choice is that the dual side of the modified functional equation is less documented in the literature (cf. [50, Theorem 3.11] and Proposition 3.9) and certain additional difficulties arise. Since the support of \(W_{\rho}^{\circ}\) lies in \(N_{n}(F)F^{\times}K_{n}=\bigsqcup_{l\in\mathbb{Z}}\varpi^{l}N_{n}(F)K_{n}\), for \(\mathrm{Re}(s)\ll 0\), \(Z(1/2-s,-1/2-t,\check{W}_{\rho}^{\circ},\mathcal{F}_{\psi_{F}}(\Phi_{\circ}))\) can be decomposed as

\[Z(1/2-s,-1/2-t,\check{W}_{\rho}^{\circ},\mathcal{F}_{\psi_{F}}( \Phi_{\circ}))=\sum_{l\in\mathbb{Z}}q^{-ml(1-2s)}\int_{\mathfrak{o}^{\times}} \omega_{\rho}^{-1}(x\varpi^{l})\\ \times\int_{N_{m}(F)\cap K_{m}\setminus K_{m}}\int_{N_{m}(F)\cap K _{m}\setminus K_{m}}\check{W}_{\rho}^{\circ}(J(k,k^{\prime}))\mathcal{F}_{ \psi_{F}}(\Phi_{\circ})(e_{m}k^{\prime}x\varpi^{l})\,dkdk^{\prime}d^{\times}x.\]

The Fourier transform \(\mathcal{F}_{\psi_{F}}(\Phi_{\circ})\) is a lift of \(\mathcal{F}_{\psi}(\phi)\), from which we deduce that \(\mathcal{F}_{\psi_{F}}(\Phi_{\circ})(e_{m}k^{\prime}x\varpi^{l})=0\) for \(l<0\), while \(\mathcal{F}_{\psi_{F}}(\Phi_{\circ})(e_{m}k^{\prime}x\varpi^{l})=\mathcal{F}_{ \psi}(\phi)(0)\) for \(l>0\). Upon making the change of variables \(k^{\prime}\mapsto k^{\prime}x^{-1}\) for \(l=0\), the infinite sum can be reduced to

\[Z(1/2-s,-1/2-t,\check{W}_{\rho}^{\circ},\mathcal{F}_{\psi_{F}}( \Phi_{\circ}))\\ =\int_{N_{m}(F)\cap K_{m}\setminus K_{m}}\int_{N_{m}(F)\cap K_{m} \setminus K_{m}}\check{W}_{\rho}^{\circ}(J(k,k^{\prime}))\mathcal{F}_{\psi_{F}} (\Phi_{\circ})(e_{m}k^{\prime})\,dkdk^{\prime}+\left(\sum_{l=1}^{\infty}q^{-ml(1 -2s)}\omega_{\rho}^{-1}(\varpi^{l})\right.\\ \left.\cdot\mathcal{F}_{\psi}(\phi)(0)\,\int_{\mathfrak{o}^{\times} }\omega_{\rho}^{-1}\,(x)d^{\times}x\int_{N_{m}(F)\cap K_{m}\setminus K_{m}} \int_{N_{m}(F)\cap K_{m}\setminus K_{m}}\check{W}_{\rho}^{\circ}(J(k,k^{\prime} ))\,dkdk^{\prime}\right).\]

We rewrite the integrals as sums, akin to the proof of [40, Theorem 3.1], obtaining

\[Z(1/2-s,-1/2-t,\check{W}_{\rho}^{\circ},\mathcal{F}_{\psi_{F}}( \Phi_{\circ}))=\mathrm{Vol}(N_{m}(\mathfrak{o})(1_{m}+\mathcal{M}_{m}(\mathfrak{p })))^{2}\\ \times\left(\check{Z}(W_{\pi},\phi)+\sum_{l=1}^{\infty}q^{-ml(1-2s)} \omega_{\rho}^{-1}(\varpi^{l})\mathcal{F}_{\psi}(\phi)(0)\int_{\mathfrak{o}^{ \times}}\omega_{\rho}^{-1}\,(x)d^{\times}x\cdot Z(\check{W}_{\pi},\mathbf{1}_{ \mathbb{F}^{m}})\right). \tag{5.4}\]

To deal with the second term, we assume that \(\omega_{\rho}\) is unramified. Combining [34, §5] with [50, Theorem 2.30 and Proposition 3.23], \(\rho\nu^{s_{0}}\) is \(S_{2m}(F)\)-distinguished for some \(s_{0}\in\mathbb{C}\), which amounts to saying that it is \(H_{2m}(F)\)-distinguished.
It follows from Proposition 5.6 that the second term is equal to

\[q^{-m(1-2s)}\omega_{\rho}^{-1}(\varpi)\mathcal{F}_{\psi}(\phi)(0)L(m(1-2s), \omega_{\rho}^{-1})Z(\check{W}_{\pi},\mathbf{1}_{\mathbb{F}^{m}})\]
\[=q^{-m(1-2s)}\omega_{\rho}^{-1}(\varpi)\mathcal{F}_{\psi}(\phi)(0)L(m(1-2s), \omega_{\rho}^{-1})\varepsilon(s,\rho,\psi_{F})Z(W_{\pi},\mathbf{1}_{\mathbb{F}^ {m}}).\]

On the other hand, if \(\omega_{\rho}\) is ramified, \(\pi\) does not admit a non-zero Friedberg-Jacquet vector, since \(\omega_{\pi}\) is non-trivial. This in turn implies that the second term vanishes, as

\[\int_{\mathfrak{o}^{\times}}\omega_{\rho}^{-1}\,(x)d^{\times}x=0=Z(W_{\pi}, \mathbf{1}_{\mathbb{F}^{m}}),\]

thanks to Lemma 5.7. The analogous argument for \(Z(s,t,W_{\rho}^{\circ},\Phi_{\circ})\) goes through, and it leads us to

\[Z(s,t,W_{\rho}^{\circ},\Phi_{\circ})=\operatorname{Vol}(N_{m}(\mathfrak{o})(1 _{m}+\mathcal{M}_{m}(\mathfrak{p})))^{2}(Z(W_{\pi},\phi)+q^{-2ms}\omega_{\rho} (\varpi)\phi(0)L(2ms,\omega_{\rho})Z(W_{\pi},\mathbf{1}_{\mathbb{F}^{m}})).\]

All that remains is to apply the functional equation (5.3) and then to cancel out the common volume term.

In contrast to the exterior square local factor \(\Gamma(s,\rho,\wedge^{2},\psi_{F})\), the Bump-Friedberg local factor \(\Gamma(s,t,\rho,\operatorname{BF},\psi_{F})\) possesses two parameters \(s\) and \(t\). For this reason, \(\Gamma(s,t,\rho,\operatorname{BF},\psi_{F})\) is not really defined as a proportionality factor; rather, the functional equation for the \(\varepsilon\)-factor \(\varepsilon(s,t,\rho,\operatorname{BF},\psi_{F})\) needs to be established beforehand. Thankfully, we will see in the following theorem, Theorem 5.9, that \(\Gamma(s,t,\rho,\operatorname{BF},\psi_{F})\) is independent of \(t\) in the class of level zero supercuspidal representations \(\rho\). Therefore, there is no harm in assigning \(t=0\) to define \(Z(s,W_{\rho},\Phi)\) and \(\Gamma(s,\rho,\operatorname{BF},\psi_{F})\).

**Theorem 5.9**.: _Let \(\rho\) be a level zero supercuspidal representation of \(\operatorname{GL}_{n}(F)\)._

1. _If_ \(\pi\) _does not admit a Friedberg-Jacquet vector, then we have_ \[\Gamma(s,t,\rho,\operatorname{BF},\psi_{F})=\gamma(\pi,\operatorname{BF},\psi).\]
2. _If_ \(n=2m\) _and_ \(\pi\) _admits a Friedberg-Jacquet vector, then we have_ (5.5) \[\Gamma(s,t,\rho,\operatorname{BF},\psi_{F})=\varepsilon(s,\rho, \psi_{F})q^{m\left(2s-\frac{1}{2}\right)}\omega_{\rho}^{-1}(\varpi)\frac{L(m(1 -2s),\omega_{\rho}^{-1})}{L(2ms,\omega_{\rho})}.\]

Proof.: We first look into the odd case \(n=2m+1\).
Just like (5.4), we decompose the domain of integration \(N_{n}(F)F^{\times}K_{n}\) into shells \(\varpi^{l}N_{n}(F)K_{n}\) to see that

\[Z(s,t,W_{\rho}^{\circ},\Phi_{\circ})=\operatorname{Vol}(N_{m}( \mathfrak{o})(1_{m}+\mathcal{M}_{m}(\mathfrak{p})))\operatorname{Vol}(N_{m+1} (\mathfrak{o})(1_{m+1}+\mathcal{M}_{m+1}(\mathfrak{p})))\\ \times\left(Z(W_{\pi},\phi)+\sum_{l=1}^{\infty}q^{-l((2m+1)s+t+1 /2)}\omega_{\rho}(\varpi^{l})\phi(0)\int_{\mathfrak{o}^{\times}}\omega_{\rho} \,(x)d^{\times}x\cdot Z(W_{\pi},\mathbf{1}_{\mathbb{F}^{m}})\right), \tag{5.6}\]

and

\[Z(1/2-s,-1/2-t,\check{W}_{\rho}^{\circ},\mathcal{F}_{\psi_{F}} (\Phi_{\circ}))\\ =\operatorname{Vol}(N_{m}(\mathfrak{o})(1_{m}+\mathcal{M}_{m}( \mathfrak{p})))\operatorname{Vol}(N_{m+1}(\mathfrak{o})(1_{m+1}+\mathcal{M}_ {m+1}(\mathfrak{p})))\left(\check{Z}(W_{\pi},\phi)\right.\\ \left.+\sum_{l=1}^{\infty}q^{-l((2m+1)(1/2-s)-t)}\omega_{\rho}^{-1 }(\varpi^{l})\mathcal{F}_{\psi}(\phi)(0)\int_{\mathfrak{o}^{\times}}\omega_{ \rho}^{-1}\,(x)d^{\times}x\cdot Z(\check{W}_{\pi},\mathbf{1}_{\mathbb{F}^{m}}) \right). \tag{5.7}\]

We take \(W_{\pi}=\mathcal{B}_{\pi,\psi}\) and \(\phi=\delta_{e_{m+1}}\). In light of (5.6) along with Lemma 3.5, we are left with

\[Z(s,t,W_{\rho}^{\circ},\Phi_{\circ})= \operatorname{Vol}(N_{m}(\mathfrak{o})(1_{m}+\mathcal{M}_{m}(\mathfrak{p}))) \operatorname{Vol}(N_{m+1}(\mathfrak{o})(1_{m+1}+\mathcal{M}_{m+1}(\mathfrak{p} )))Z(\mathcal{B}_{\pi,\psi},\delta_{e_{m+1}})\\ =\operatorname{Vol}(N_{m}(\mathfrak{o})(1_{m}+\mathcal{M}_{m}( \mathfrak{p})))\operatorname{Vol}(N_{m+1}(\mathfrak{o})(1_{m+1}+\mathcal{M}_{m+ 1}(\mathfrak{p}))),\]

which is a non-zero constant. Appealing to [36, Corollary 3.2] along with [26, Theorem 3.6-(ii)] and [27, Theorem 4.5], the local factor \(\Gamma(s,t,\rho,\operatorname{BF},\psi_{F})\in\mathbb{C}[q^{\pm s},q^{\pm t}]^{\times}\) is a unit in \(q^{-s}\) and \(q^{-t}\), that is, a monomial of the form \(\Gamma(s,t,\rho,\operatorname{BF},\psi_{F})=\alpha q^{-\beta s}q^{-\eta t}\), with \(\alpha\in\mathbb{C}^{\times}\) and \(\beta,\eta\in\mathbb{Z}\). This forces all but one of the summands in (5.7) to vanish. Among them, the only term which survives is

\[\check{Z}(\mathcal{B}_{\pi,\psi},\delta_{e_{m+1}})=\gamma(\pi,\operatorname{BF}, \psi)Z(\mathcal{B}_{\pi,\psi},\delta_{e_{m+1}})=\gamma(\pi,\operatorname{BF}, \psi).\]

Combining all these calculations, we find that \(\Gamma(s,t,\rho,\mathrm{BF},\psi_{F})=\gamma(\pi,\mathrm{BF},\psi)\), as needed.

We turn our attention to the case when \(n=2m\) and \(\pi\) does not have a Friedberg-Jacquet vector. By taking advantage of Lemma 5.7, \(Z(W_{\pi},\mathbf{1}_{\mathbb{F}^{m}})=0\) for all \(W_{\pi}\in\mathcal{W}(\pi,\psi)\). As before, Proposition 5.8 simply turns into

\[\gamma(\pi,\mathrm{BF},\psi)Z(W_{\pi},\phi)=\check{Z}(W_{\pi},\phi)=\Gamma(s, t,\rho,\mathrm{BF},\psi_{F})Z(W_{\pi},\phi). \tag{5.8}\]

All that remains is to choose \(W_{\pi}=\mathcal{B}_{\pi,\psi}\) and \(\phi=\delta_{e_{m}}\). In doing so, Lemma 3.5 guarantees that \(Z(\mathcal{B}_{\pi,\psi},\delta_{e_{m}})\) is precisely \(1\), from which the equality \(\gamma(\pi,\mathrm{BF},\psi)=\Gamma(s,t,\rho,\mathrm{BF},\psi_{F})\) follows.

Suppose that \(n=2m\) and \(\pi\) admits a Friedberg-Jacquet vector. Upon choosing \(\phi=\mathbf{1}_{\mathbb{F}^{m}}\), the relation \(\mathcal{F}_{\psi}(\mathbf{1}_{\mathbb{F}^{m}})=q^{\frac{m}{2}}\delta_{0}\) implies that \(\check{Z}(W_{\pi},\mathbf{1}_{\mathbb{F}^{m}})=0\).
With the aid of Lemma 5.7, we take \(W_{\pi}\in\mathcal{W}(\pi,\psi)\) satisfying \(Z(W_{\pi},\mathbf{1}_{\mathbb{F}^{m}})=1\). In this way, we reduce Proposition 5.8 to

\[q^{-m(1-2s)+\frac{m}{2}}\omega_{\rho}^{-1}(\varpi)L(m(1-2s), \omega_{\rho}^{-1})\varepsilon(s,\rho,\psi_{F})\\ =\Gamma(s,t,\rho,\mathrm{BF},\psi_{F})(1+q^{-2ms}\omega_{\rho}( \varpi)L(2ms,\omega_{\rho}))=\Gamma(s,t,\rho,\mathrm{BF},\psi_{F})L(2ms, \omega_{\rho}),\]

from which the required equality follows.

We obtain the following pleasant expression of the Bump-Friedberg gamma factor \(\gamma(\pi,\mathrm{BF},\psi)\) in terms of Bessel functions.

**Proposition 5.10**.: _Let \(\pi\) be an irreducible cuspidal representation of \(\mathrm{GL}_{n}(\mathbb{F})\). Then we have_

\[\gamma(\pi,\mathrm{BF},\psi)=q^{-\frac{m}{2}}\sum_{g\in N_{m}(\mathbb{F}) \setminus\mathrm{GL}_{m}(\mathbb{F})}\sum_{g^{\prime}\in N_{m}(\mathbb{F}) \setminus\mathrm{GL}_{m}(\mathbb{F})}\mathcal{B}_{\pi,\psi}\left(\sigma_{2m} \begin{pmatrix}&g^{\prime}\\ g&\end{pmatrix}\sigma_{2m}^{-1}\right)\psi(e_{1}{}^{t}g^{\prime-1}{}^{t}e_{m}) \tag{5.9}\]

_in the even case \(n=2m\), and_

\[\gamma(\pi,\mathrm{BF},\psi)=q^{-\frac{m+1}{2}}\sum_{g\in N_{m+1}(\mathbb{F}) \setminus\mathrm{GL}_{m+1}(\mathbb{F})}\sum_{g^{\prime}\in N_{m}(\mathbb{F}) \setminus\mathrm{GL}_{m}(\mathbb{F})}\mathcal{B}_{\pi,\psi}(J(g,g^{\prime})) \psi(e_{1}{}^{t}g^{-1}{}^{t}e_{m+1})\]

_in the odd case \(n=2m+1\). In particular, we have \(\gamma(\check{\pi},\mathrm{BF},\psi^{-1})=\overline{\gamma(\pi,\mathrm{BF}, \psi)}\)._

Proof.: Just as in the proof of Theorem 3.4, we take \(W_{\pi}=\mathcal{B}_{\pi,\psi}\) and \(\phi\) to be an indicator function \(\delta_{e_{m}}\) in the even case \(n=2m\) and \(\delta_{e_{m+1}}\) in the odd case \(n=2m+1\), at which point [38, Lemma 2.7] ensures that \(Z(\mathcal{B}_{\pi,\psi},\delta_{e_{m}})=Z(\mathcal{B}_{\pi,\psi},\delta_{e_{m +1}})=1\). It remains to note that \(\mathcal{F}_{\psi}(\delta_{e_{m}})(y)=q^{-\frac{m}{2}}\psi(e_{m}{}^{t}y)\) and \(\mathcal{F}_{\psi}(\delta_{e_{m+1}})(y)=q^{-\frac{m+1}{2}}\psi(e_{m+1}{}^{t}y)\).
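For the reader's convenience, we spell out the first of these identities; here we assume the Fourier normalization \(\mathcal{F}_{\psi}(\phi)(y)=q^{-\frac{m}{2}}\sum_{x\in\mathbb{F}^{m}}\phi(x)\psi(x\,{}^{t}y)\), which is compatible with both \(\mathcal{F}_{\psi}(\delta_{e_{m}})(y)=q^{-\frac{m}{2}}\psi(e_{m}\,{}^{t}y)\) and the relation \(\mathcal{F}_{\psi}(\mathbf{1}_{\mathbb{F}^{m}})=q^{\frac{m}{2}}\delta_{0}\) used above:
\[\mathcal{F}_{\psi}(\delta_{e_{m}})(y)=q^{-\frac{m}{2}}\sum_{x\in\mathbb{F}^{m}}\delta_{e_{m}}(x)\psi(x\,{}^{t}y)=q^{-\frac{m}{2}}\psi(e_{m}\,{}^{t}y).\]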
We precisely evaluate the sum (5.9) when \(\pi\) has a Friedberg-Jacquet vector.

**Proposition 5.11**.: _Let \(\pi\) be an irreducible cuspidal representation of \(\mathrm{GL}_{n}(\mathbb{F})\). Suppose that \(n=2m\) and \(\pi\) admits a Friedberg-Jacquet vector. Then we have_

\[\gamma(\pi,\mathrm{BF},\psi)=\gamma(\check{\pi},\mathrm{BF},\psi^{-1})=- \varepsilon(s,\rho,\psi_{F})q^{-\frac{m}{2}}.\]

Proof.: We insert the identity (5.5) for \(\Gamma(s,t,\rho,\mathrm{BF},\psi_{F})\). Next, we select \(W_{\pi}=\mathcal{B}_{\pi,\psi}\) and \(\phi=\delta_{e_{m}}\), an indicator function on \(e_{m}\), so that its Fourier transform is given by \(\mathcal{F}_{\psi}(\delta_{e_{m}})(y)=q^{-\frac{m}{2}}\psi(e_{m}{}^{t}y)\). In this way, Proposition 5.8 becomes

\[\gamma(\pi,\mathrm{BF},\psi)+q^{-m(1-2s)-\frac{m}{2}}\omega_{\rho }^{-1}(\varpi)L(m(1-2s),\omega_{\rho}^{-1})\varepsilon(s,\rho,\psi_{F})Z( \mathcal{B}_{\pi,\psi},\mathbf{1}_{\mathbb{F}^{m}})\\ =\varepsilon(s,\rho,\psi_{F})q^{m\left(2s-\frac{1}{2}\right)}\omega_ {\rho}^{-1}(\varpi)\frac{L(m(1-2s),\omega_{\rho}^{-1})}{L(2ms,\omega_{\rho})}.\]

We clear the denominator to express it as

\[\gamma(\pi,\mathrm{BF},\psi)-\omega_{\rho}^{-1}(\varpi)q^{-m(1-2 s)}\gamma(\pi,\mathrm{BF},\psi)+q^{-m(1-2s)-\frac{m}{2}}\omega_{\rho}^{-1}(\varpi) \varepsilon(s,\rho,\psi_{F})Z(\mathcal{B}_{\pi,\psi},\mathbf{1}_{\mathbb{F}^{m}})\\ =\varepsilon(s,\rho,\psi_{F})q^{m\left(2s-\frac{1}{2}\right)}\omega_ {\rho}^{-1}(\varpi)-\varepsilon(s,\rho,\psi_{F})q^{-\frac{m}{2}}.\]

We compare the coefficients of the constant terms and the \(q^{2ms}\) terms individually. In doing so, we arrive at a system of linear equations

\[\begin{cases}\gamma(\pi,\mathrm{BF},\psi)=-\varepsilon(s,\rho,\psi_{F})q^{-\frac {m}{2}}\\ -\omega_{\rho}^{-1}(\varpi)q^{-m}\gamma(\pi,\mathrm{BF},\psi)+q^{-m-\frac{m}{2} }\omega_{\rho}^{-1}(\varpi)\varepsilon(s,\rho,\psi_{F})Z(\mathcal{B}_{\pi, \psi},\mathbf{1}_{\mathbb{F}^{m}})=\varepsilon(s,\rho,\psi_{F})q^{-\frac{m}{2 }}\omega_{\rho}^{-1}(\varpi),\end{cases}\]

treating \(\gamma(\pi,\mathrm{BF},\psi)\) and \(Z(\mathcal{B}_{\pi,\psi},\mathbf{1}_{\mathbb{F}^{m}})\) as unknown variables, from which the equality \(\gamma(\pi,\mathrm{BF},\psi)=-\varepsilon(s,\rho,\psi_{F})q^{-\frac{m}{2}}\) follows. Having Proposition 5.10 in mind, all we have to do is to take the complex conjugate. We end up with

\[\gamma(\check{\pi},\mathrm{BF},\psi^{-1})=\overline{\gamma(\pi,\mathrm{BF}, \psi)}=-\varepsilon(s,\rho,\psi_{F})q^{-\frac{m}{2}}.\qed\]

We will shift our focus to the coincidence of arithmetic and analytic local factors, but beforehand we state functional equations for \(\gamma(\pi,\mathrm{BF},\psi)\) over finite fields.

**Theorem 5.12** (Functional equation).: _Let \(\pi\) be an irreducible cuspidal representation of \(\mathrm{GL}_{n}(\mathbb{F})\)._

1. _If_ \(\pi\) _does not admit a Friedberg-Jacquet vector, then we have_ \[\gamma(\pi,\mathrm{BF},\psi)\gamma(\check{\pi},\mathrm{BF},\psi^{-1})=1\quad \text{and}\quad|\gamma(\pi,\mathrm{BF},\psi)|=1.\]
2. _If_ \(n=2m\) _and_ \(\pi\) _admits a Friedberg-Jacquet vector, then we have_ \[\gamma(\pi,\mathrm{BF},\psi)\gamma(\check{\pi},\mathrm{BF},\psi^{-1})=q^{-m} \quad\text{and}\quad|\gamma(\pi,\mathrm{BF},\psi)|=q^{-\frac{m}{2}}.\]

Proof.: Upon invoking Lemma 5.1, the functional equation in (1) can be seen from the double-duality

\[\check{Z}(\check{W}_{\pi},\mathcal{F}_{\psi^{-1}}(\phi))=Z(W_{\pi},\phi),\]

just as in [50, Proposition 2.12]. In view of Lemma 5.5, we combine Proposition 5.10 and Proposition 5.11 to see the rest of the results, and this ends the proof.
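As with the exterior square factor, Part (2) is consistent with Proposition 5.11 (our remark): since \(\varepsilon(s,\rho,\psi_{F})\in\{\pm 1\}\) by Lemma 5.5, we have
\[\gamma(\pi,\mathrm{BF},\psi)\gamma(\check{\pi},\mathrm{BF},\psi^{-1})=\left(-\varepsilon(s,\rho,\psi_{F})q^{-\frac{m}{2}}\right)^{2}=q^{-m}.\]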
Let \(\rho\) be an irreducible subquotient of a spherical representation \(\mathrm{Ind}_{B_{n}(F)}^{\mathrm{GL}_{n}(F)}(\mu_{1}\otimes\cdots\otimes\mu_{n})\). Then we have_ \[\Gamma(s,\rho,\mathrm{BF},\psi_{F}^{\flat})=\Gamma(s+1/2,\rho,\psi_{F}^{\flat})\Gamma(2s,\rho,\wedge^{2},\psi_{F}^{\flat})\\ =\prod_{1\leq i\leq n}\Gamma(s+1/2,\mu_{i},\psi_{F}^{\flat})\prod_{1\leq j<k\leq n}\Gamma(2s,\mu_{j}\times\mu_{k},\psi_{F}^{\flat}).\] Proof.: The proof of Lemma 4.4 goes through word for word, except that we use the unramified computation of Bump and Friedberg [6, Theorem 3] in place of [25, §2] (cf. [24, Proposition 1.2] for an alternative proof of the unramified computations). As an intermediate step, we establish that Bump-Friedberg \(\gamma\)-factors agree with the counterpart Langlands-Shahidi gamma factors in positive characteristic. We extend the coincidence of the two local factors to all characteristics, notably zero, in Theorem 5.16. **Theorem 5.14**.: _Let \(\rho\) be a level zero supercuspidal representation of \(\mathrm{GL}_{n}(F)\) over a local function field \(F\). Then we have_ \[\Gamma(s,t,\rho,\mathrm{BF},\psi_{F})=\Gamma(s+t+1/2,\rho,\psi_{F})\Gamma_{\mathrm{LS}}(2s,\rho,\wedge^{2},\psi_{F})=\Gamma(s+t+1/2,\rho,\psi_{F})\Gamma(2s,\rho,\wedge^{2},\psi_{F}).\] Proof.: Putting together Theorem 5.9 and [10, §6.1], \(\Gamma(s,t,\rho,\mathrm{BF},\psi_{F})\) and \(\Gamma(s+t+1/2,\rho,\psi_{F})\) are independent of \(t\), so that we take \(t\) to be \(0\). With the help of [31, §6.6-(vii)], twists by unramified characters do not affect the first equality. For this reason, we may assume that \(\rho\) is unitary without loss of generality. Applying Theorem 4.5 to the level zero supercuspidal representation, there exist a global field \(k\) with three places \(v_{0},v_{1}\), and \(v_{\infty}\) such that \(k_{v_{0}}\cong F\), and an irreducible unitary cuspidal automorphic representation \(\Pi\) of \(\mathrm{GL}_{n}(\mathbb{A}_{k})\) with the required properties in Theorem 4.5. We choose a non-trivial additive character \(\Psi\) of \(\mathbb{A}_{k}/k\), and assume, as we may, that \(\Psi_{v_{0}}=\psi_{F}\). The global functional equation for exterior square \(L\)-functions via the Langlands-Shahidi method can be read from [21, §4-(vi)] as (4.4), while that for standard \(L\)-functions due to Godement and Jacquet is extracted from [19] as \[L^{S}(s,\Pi)=\Gamma(s,\Pi_{v_{0}},\Psi_{v_{0}})\prod_{v\in S-\{v_{0}\}}\Gamma(s,\Pi_{v},\Psi_{v})L^{S}(1-s,\check{\Pi}). \tag{5.10}\] In the meantime, taking into account the local functional equations (5.3), the global functional equation for Bump-Friedberg \(L\)-functions in [6, Theorem 6] (cf. [24, (1.1)]) takes the following explicit form: \[L^{S}(s+1/2,\Pi,\psi)L^{S}(2s,\Pi,\wedge^{2})\\ =\Gamma(s,\Pi_{v_{0}},\mathrm{BF},\Psi_{v_{0}})\prod_{v\in S-\{v_{0}\}}\Gamma(s,\Pi_{v},\mathrm{BF},\Psi_{v})L^{S}(1/2-s,\check{\Pi})L^{S}(1-2s,\check{\Pi},\wedge^{2}). \tag{5.11}\] In accordance with Lemma 5.13, each place \(v\) in \(S-\{v_{0}\}\) can be controlled in such a way that \[\Gamma(s,\Pi_{v},\mathrm{BF},\Psi_{v})=\Gamma(s+1/2,\Pi_{v},\Psi_{v})\Gamma_{\mathrm{LS}}(2s,\Pi_{v},\wedge^{2},\Psi_{v}).\] After plugging \(s+1/2\) in for \(s\) in (5.10), the positive characteristic case is settled by dividing (5.11) by the product of (4.4) and (5.10). The Bump-Friedberg gamma factor \(\Gamma(s,t,\rho,\mathrm{BF},\psi_{F})\) is compatible with the Kazhdan theory of close fields.
The proof is nearly identical to that of Proposition 4.8, and so we shall be brief. **Proposition 5.15**.: _For \((F,\rho,\psi)\) that is \(\mathrm{Kaz}\)-associated to \((F^{\prime},\rho^{\prime},\psi^{\prime})\), we have_ \[\Gamma(s,t,\rho,\mathrm{BF},\psi_{F})=\Gamma(s,t,\rho^{\prime},\mathrm{BF},\psi_{F^{\prime}}).\] Proof.: As a consequence of Lemma 2.10, we know that \(\varepsilon(s,\rho,\psi_{F})=\varepsilon(s,\rho^{\prime},\psi_{F^{\prime}})\). With \(\mathrm{Vol}(\mathfrak{p}_{F})=\mathrm{Vol}(\mathfrak{p}_{F^{\prime}})=q^{-1/2}\) and \(\omega_{\rho}(\varpi_{F})=\omega_{\rho^{\prime}}(\varpi_{F^{\prime}})\) in hand, Theorem 5.9 readily implies our assertion. We are now in a position to formulate the main factorization formula conjectured by Bump and Friedberg [6, Conjecture 4]. **Theorem 5.16**.: _Let \(\varphi\) be an \(n\)-dimensional tamely ramified representation of \(W_{F}\) corresponding to the level zero supercuspidal representation \(\rho(\varphi)\) of \(\mathrm{GL}_{n}(F)\) via the Macdonald correspondence. Then we have_ \[\Gamma(s,t,\rho(\varphi),\mathrm{BF},\psi_{F})=\varepsilon(s+t+1/2,\varphi,\psi_{F})\Gamma(2s,\wedge^{2}(\varphi),\psi_{F}).\] Proof.: Let \(F\) be a non-archimedean local field of characteristic \(0\) with residue field \(\mathfrak{o}_{F}/\mathfrak{p}_{F}\) isomorphic to \(\mathbb{F}_{q}\). Let \(F^{\prime}=\mathbb{F}_{q}((t))\), so that \(F\) and \(F^{\prime}\) are \(1\)-close. We have already given the detailed argument in the proof of Theorem 4.9; one may simply mimic it, resting on Theorem 4.9, Theorem 5.14, Proposition 5.15, and a part of the local Langlands correspondence [16, (b) of Theorem 7.1]. ### The Bump-Friedberg epsilon factor and the Gauss sum The following elementary lemma illustrates how the standard \(\varepsilon\)-factors and \(\varepsilon_{0}\)-factors are related; this does not seem to be recorded elsewhere. We take the occasion to provide a proof for completeness. **Lemma 5.17**.: _Let \(\varphi\) be an \(n\)-dimensional tamely ramified representation of \(W_{F}\). Then we have_ \[\varepsilon(s,\varphi,\psi_{F})=\varepsilon_{0}(\varphi,\psi_{F}).\] Proof.: As mentioned in the proof of Lemma 5.5, [10, §6.1] asserts that \(\varepsilon(s,\varphi,\psi_{F})=\varepsilon(s,\rho(\varphi),\psi_{F})\) is a complex number. The identity (2.6) forces \(V^{I_{F}}=\{0\}\). The result then follows from [51, Corollary 2.7]. Our next task is to deduce the decomposition of Bump-Friedberg \(\gamma\)-factors, which we may think of as the finite field analogue of Theorem 5.16. **Proposition 5.18**.: _Let \(\pi(\varphi)\) be an irreducible cuspidal representation of \(\operatorname{GL}_{n}(\mathbb{F})\) associated to a tamely ramified representation \(\varphi\) of \(W_{F}\) of degree \(n\) via the Macdonald correspondence. Then we have_ \[\gamma(\pi(\varphi),\operatorname{BF},\psi)=\varepsilon_{0}(\varphi,\psi_{F})\gamma(\pi(\varphi),\wedge^{2},\psi).\] Proof.: We break it down into two cases. It is worth noting the equivalent statement that \(\pi\) admits a Jacquet-Shalika vector if and only if it admits a Friedberg-Jacquet vector, which we defer to the next section (see Corollary 6.2). Suppose that \(\pi\) does not admit a Friedberg-Jacquet vector.
Owing to Theorem 5.9, Theorem 5.16, and Lemma 5.17, together with [50, Theorem 3.16], we are guided to \[\gamma(\pi(\varphi),\operatorname{BF},\psi)=\Gamma(s,\rho(\varphi),\operatorname{BF},\psi_{F})\\ =\varepsilon(s+t+1/2,\varphi,\psi_{F})\Gamma(2s,\wedge^{2}(\varphi),\psi_{F})=\varepsilon_{0}(\varphi,\psi_{F})\gamma(\pi(\varphi),\wedge^{2},\psi).\] It remains to deal with the case when \(n=2m\) is even and \(\pi\) admits a Friedberg-Jacquet vector. This case requires only a purely local approach avoiding globalization, as the result is immediate from combining Proposition 4.2 and Proposition 5.11 with Lemma 5.17. The following expression for \(\varepsilon_{0}(\varphi,\psi_{F})\varepsilon_{0}(\wedge^{2}(\varphi),\psi_{F})\) in terms of Gauss sums is thought of as the Bump-Friedberg analogue of Theorem 2.7 and Theorem 3.11. **Theorem 5.19**.: _Let \(\pi\) be an irreducible cuspidal representation of \(\operatorname{GL}_{n}(\mathbb{F})\). We let \(\alpha\in\widehat{\mathbb{F}}_{q^{n}}^{\times}\) be a regular character corresponding to \(\pi\) via Green's parametrization and \(m=\lfloor\frac{n}{2}\rfloor\). Then we have_ \[\varepsilon_{0}(\varphi,\psi_{F})\varepsilon_{0}(\wedge^{2}(\varphi),\psi_{F})=(-1)^{n+\binom{n}{2}}q^{-\frac{1}{2}\left(n+\binom{n}{2}\right)}\tau(\alpha,\psi_{n})\tau(\alpha^{1+q^{m}},\psi_{d})\prod_{i=1}^{m-1}\tau(\alpha^{1+q^{i}},\psi_{n}),\] _where \(d=n\) if \(n\) is odd, and \(d=m\) if \(n\) is even._ Proof.: When the second representation is the trivial one of \(\operatorname{GL}_{1}(F)\), \(\mathbf{1}_{F^{\times}}\), the Rankin-Selberg \(\gamma\)-factor \(\gamma(s,\pi(\varphi)\times\mathbf{1}_{F^{\times}},\psi_{F})\) degenerates into the Godement-Jacquet \(\varepsilon\)-factor \(\varepsilon(s,\pi(\varphi),\psi_{F})\). Now, [51, Proposition 2.6] along with [51, Theorems 4.3, 4.4] and Lemma 5.17 ensures that \[\varepsilon_{0}(\varphi\otimes\mathbf{1}_{W_{F}},\psi_{F})=\gamma(s,\pi(\varphi)\times\mathbf{1}_{F^{\times}},\psi_{F})=\varepsilon(s,\varphi,\psi_{F})=\varepsilon_{0}(\varphi,\psi_{F}).\] Our claim is a direct consequence of [51, Corollary 4.5] and [51, Theorem 5.2]. The proof of Proposition 3.12 should be compared with that of Proposition 5.20 below. Unlike the equality \(\Gamma(s,\rho(\varphi),\operatorname{As},\psi_{F})=\omega_{\rho}^{n-1}(\delta)\lambda_{E/F}(\psi_{F})^{-\frac{n(n-1)}{2}}\varepsilon(s,\operatorname{As}(\varphi),\psi_{F})\), which is independently settled in [2] and [5] for irreducible supercuspidal representations \(\rho\), the corresponding equality for exterior square \(\gamma\)-factors has been less developed. We use Deligne-Kazhdan close field theory to deduce the required identity for level zero supercuspidal representations \(\rho\), which is enough for our applications. **Proposition 5.20**.: _Let \(\pi(\varphi)\) be an irreducible cuspidal representation of \(\operatorname{GL}_{n}(\mathbb{F})\) associated to a tamely ramified representation \(\varphi\) of \(W_{F}\) of degree \(n\) via the Macdonald correspondence. Then we have_ \[\gamma(\pi(\varphi),\wedge^{2},\psi)=\varepsilon_{0}(\wedge^{2}(\varphi),\psi_{F})\quad\text{and}\quad\gamma(\pi(\varphi),\operatorname{BF},\psi)=\varepsilon_{0}(\varphi,\psi_{F})\varepsilon_{0}(\wedge^{2}(\varphi),\psi_{F}).\] Proof.: We separate it into two cases. Suppose that \(\pi\) does not admit a Jacquet-Shalika vector.
We use Theorem 4.9 in conjunction with [50, Theorem 3.16] and [51, Corollary 2.7] in order to see that \[\gamma(\pi(\varphi),\wedge^{2},\psi)=\Gamma(s,\rho(\varphi),\wedge^{2},\psi_{F})=\varepsilon(s,\wedge^{2}(\varphi),\psi_{F})=\varepsilon_{0}(\wedge^{2}(\varphi),\psi_{F}).\] We now handle the remaining case when \(n=2m\) is even and \(\pi\) admits a Jacquet-Shalika vector. The central character \(\omega_{\pi}=\alpha\upharpoonright_{\mathbb{F}^{\times}}\) becomes trivial, so that \(\alpha^{1+q^{m}}=\mathbf{1}\). Since \(\left(\alpha^{1+q^{i}}\right)^{1+q^{m}}=\mathbf{1}\) and \(\alpha^{1+q^{i}}\) is not trivial for \(0\leq i\leq m-1\), we invoke [52, Proposition A.2] to get \(\tau(\alpha^{1+q^{i}},\psi_{n})=-q^{m}\). In light of [51, Proposition 2.6], we conclude that \[\varepsilon_{0}(\wedge^{2}(\varphi),\psi_{F})=(-1)^{\binom{2m}{2}}q^{-\frac{1}{2}\binom{2m}{2}}\tau(\alpha^{1+q^{m}},\psi_{m})\prod_{i=1}^{m-1}\tau(\alpha^{1+q^{i}},\psi_{2m})=-q^{-\frac{1}{2}\binom{2m}{2}}\cdot q^{m(m-1)}\tau(\mathbf{1},\psi_{m}),\] which agrees with \(\gamma(\pi(\varphi),\wedge^{2},\psi)=-q^{-\frac{m}{2}}\) in Proposition 4.2, having used the fact that \(\tau(\mathbf{1},\psi_{m})=1\). The second equality can then be justified by Proposition 5.18. We are now prepared to give product formulas for \(\gamma(\pi,\wedge^{2},\psi)\) and \(\gamma(\pi,\operatorname{BF},\psi)\) in terms of Gauss sums. **Theorem 5.21** (Gauss sum).: _Let \(\pi\) be an irreducible cuspidal representation of \(\operatorname{GL}_{n}(\mathbb{F})\). We let \(\alpha\in\widehat{\mathbb{F}}_{q^{n}}^{\times}\) be a regular character corresponding to \(\pi\) via Green's parametrization and \(m=\lfloor\frac{n}{2}\rfloor\). Then we have_ \[\gamma(\pi,\wedge^{2},\psi)=(-1)^{\binom{n}{2}}q^{-\frac{1}{2}\binom{n}{2}}\tau(\alpha^{1+q^{m}},\psi_{d})\prod_{i=1}^{m-1}\tau(\alpha^{1+q^{i}},\psi_{n})\] _and_ \[\gamma(\pi,\operatorname{BF},\psi)=(-1)^{n+\binom{n}{2}}q^{-\frac{1}{2}\left(n+\binom{n}{2}\right)}\tau(\alpha,\psi_{n})\tau(\alpha^{1+q^{m}},\psi_{d})\prod_{i=1}^{m-1}\tau(\alpha^{1+q^{i}},\psi_{n}),\] _where \(d=n\) if \(n\) is odd, and \(d=m\) if \(n\) is even._
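As a purely illustrative aside (not part of the original argument), the magnitude statements above ultimately rest on the classical fact that a Gauss sum attached to a nontrivial multiplicative character has absolute value \(\sqrt{q}\). The short Python sketch below checks this numerically over a prime field \(\mathbb{F}_p\), using the normalization \(\tau(\alpha,\psi)=\sum_{x\in\mathbb{F}_p^{\times}}\alpha(x)\psi(x)\), which may differ from the convention in the text by a sign or a power of \(q\); all function names are ours.

```python
import cmath

def gauss_sum(p, j):
    """tau(alpha, psi) = sum_{x in F_p^*} alpha(x) psi(x), where
    psi(x) = exp(2*pi*i*x/p) and alpha maps a fixed generator g of F_p^*
    to exp(2*pi*i*j/(p-1)); alpha is nontrivial for 1 <= j <= p-2."""
    def is_generator(g):
        x, seen = 1, set()
        for _ in range(p - 1):
            x = x * g % p
            seen.add(x)
        return len(seen) == p - 1
    g = next(a for a in range(2, p) if is_generator(a))
    tau, x = 0.0, 1
    for k in range(p - 1):                 # x runs over g^0, ..., g^(p-2)
        alpha = cmath.exp(2j * cmath.pi * j * k / (p - 1))
        psi = cmath.exp(2j * cmath.pi * x / p)
        tau += alpha * psi
        x = x * g % p
    return tau

p = 11
for j in range(1, p - 1):                  # all nontrivial characters
    assert abs(abs(gauss_sum(p, j)) - p ** 0.5) < 1e-9
print("|tau(alpha, psi)| = sqrt(p) for every nontrivial alpha mod", p)
```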
## 6. Period Vectors and Distinctions In this section, we study the period vectors and integrals for four pairs of groups \((G,L)\): the Jacquet-Piatetski-Shapiro-Shalika period, the Flicker-Rallis period, the Friedberg-Jacquet period, and the Jacquet-Shalika period. Let \(\sigma\) be a level zero supercuspidal representation of \(G(E)\) coming from an irreducible cuspidal representation \(\Pi\) of \(G(\mathbb{E})\). The group \(G\) and its closed subgroups \(L\) and \(U\), as well as its representations \(\Pi\) and \(\sigma\), are given by the following table: \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Period Vectors & \(G(\mathbb{E})\) & \(G(E)\) & \(L\) & \(U\) & \(\Pi\) & \(\sigma\) & \(r\) \\ \hline \hline Jacquet-Piatetski-Shapiro-Shalika & \(\operatorname{GL}_{n}(\mathbb{F})\times\operatorname{GL}_{n}(\mathbb{F})\) & \(\operatorname{GL}_{n}(F)\times\operatorname{GL}_{n}(F)\) & \(\operatorname{GL}_{n}\) & \(N_{n}\) & \(\pi_{1}\times\pi_{2}\) & \(\rho_{1}\times\rho_{2}\) & \(\times\) \\ \hline Flicker-Rallis & \(\operatorname{GL}_{2m+1}(\mathbb{E})\) & \(\operatorname{GL}_{2m+1}(E)\) & \(\operatorname{GL}_{2m+1}\) & \(N_{2m+1}\) & \(\pi\) & \(\rho\) & \(\mathrm{As}\) \\ \hline Friedberg-Jacquet & \(\operatorname{GL}_{2m}(\mathbb{F})\) & \(\operatorname{GL}_{2m}(F)\) & \(H_{2m}\) & \(N_{m}\times N_{m}\) & \(\pi\) & \(\rho\) & \(\mathrm{BF}\) \\ \hline Jacquet-Shalika & \(\operatorname{GL}_{2m}(\mathbb{F})\) & \(\operatorname{GL}_{2m}(F)\) & \(S_{2m}\) & \(N_{m}\times\mathcal{N}_{m}\) & \(\pi\) & \(\rho\) & \(\wedge^{2}\) \\ \hline \end{tabular} Given a pair of representations \(\sigma\) and \(\Pi\), their corresponding central characters \(\omega_{\Pi}\) and \(\omega_{\sigma}\), their associated Whittaker models \(\mathcal{W}(\Pi,\psi_{\mathbb{E}/\mathbb{F}})\) and \(\mathcal{W}(\sigma,\psi_{E/F})\), and the character \(\Xi\) of \(L\) are highlighted in the following table. For each of the four period vectors, we prove a relation between the period integrals and sums, and the \(L\)-factors \(L(s,\sigma,r)\) and \(\gamma\)-factors \(\gamma(\Pi,r,\psi)\). The local factors that show up in this section include the Rankin-Selberg factors \(L(s,\rho_{1}\times\rho_{2})\) and \(\gamma^{\star}(\pi_{1}\times\pi_{2},\psi)\), the Asai factors \(L(s,\rho,\mathrm{As})\) and \(\gamma(\pi,\mathrm{As},\psi)\), the Bump-Friedberg factors \(L(s,\rho,\mathrm{BF})\) and \(\gamma(\pi,\mathrm{BF},\psi)\), and the exterior square factors \(L(s,\rho,\wedge^{2})\) and \(\gamma(\pi,\wedge^{2},\psi)\). **Theorem 6.1** (Finite period vectors and integrals).: _When \(\Pi=\pi_{1}\times\pi_{2}\), we let \(\gamma(\Pi,r,\psi)\) denote \(\gamma^{\star}(\pi_{1}\times\pi_{2},\psi)\). With the above datum, the following statements are equivalent:_ 1. \(\Pi\) _admits a non-zero period vector, i.e., there exists a non-zero vector_ \(v\in V_{\Pi}\) _such that_ \(\Pi(g)v=v\) _for all_ \(g\in L\)_._ 2. \(\omega_{\Pi}\restriction_{\mathbb{F}^{\times}}=\mathbf{1}_{\mathbb{F}^{\times}}\)_._ 3. _There exists_ \(W_{\Pi}\in\mathcal{W}(\Pi,\psi_{\mathbb{E}/\mathbb{F}})\) _such that_ \[\sum_{g\in U(\mathbb{F})\setminus L(\mathbb{F})}W_{\Pi}(g)\neq 0.\] 4. \(|\gamma(\Pi,r,\psi)|=q^{-\beta}\)_._ 5. \(\sigma\) _admits a nontrivial twisted period, i.e.,_ \(\mathrm{Hom}_{L(F)}(\sigma\nu^{\alpha s_{0}}\restriction_{L(F)},\Xi)\neq 0\) _for some_ \(s_{0}\in\mathbb{C}\)_._ 6. \(\omega_{\sigma}\restriction_{F^{\times}}=\nu^{-\alpha s_{0}}\restriction_{F^{\times}}\)_._ 7. _There exists_ \(W_{\sigma}\in\mathcal{W}(\sigma,\psi_{E/F})\) _such that the integral_ \[\int_{F^{\times}U(F)\setminus L(F)}W_{\sigma}(g)\nu^{\alpha s_{0}}(g)\,dg \tag{6.1}\] _is well-defined and non-vanishing for some_ \(s_{0}\in\mathbb{C}\)_._ 8. \(L(s,\sigma,r)\) _has a pole at_ \(s=s_{0}\)_._ In order to keep our exposition of the proof neat and concise, we separate it into two parts. We first deal with the equivalent assertions in the finite field case.
Proof of the equivalence of (1), (3), (4), and (8).: The equivalence of (1) & (3) has been explained in Lemma 3.8, Lemma 5.7, [49, Lemma 4.2], and [50, Theorem 2.30]. The equivalence of (1) & (4) is a direct consequence of the functional equations: Theorem 2.5, Theorem 3.7, Theorem 4.3, and Theorem 5.12. The equivalence of (1) & (8) is just a summary of Theorem 2.6, Theorem 3.10, Theorem 5.9, and [50, Theorems 3.16 and 3.17]. We now treat the parallel statements for the non-archimedean local field case. Proof of the equivalence of (2), (5), (6), (7), and (8).: The equivalent statement about central characters (2) & (6) is clear from the fact that \(\omega_{\sigma}\restriction_{\mathfrak{o}^{\times}}=\omega_{\Pi}\circ\mathrm{pr}\restriction_{\mathfrak{o}^{\times}}\). The equivalence of (5) & (8) is simply a restatement of [26, Proposition 3.4], [33, Theorem 2.1], and [35, Proposition 4.6, Corollary 4.3]. Since \(L(s,\sigma,r)\) and \(L(1-s,\check{\sigma},r)\) do not share any common poles, the equivalence of (6) & (8) can be seen from Theorem 2.6, Theorem 3.10, Theorem 5.9, and [50, Theorems 3.16 and 3.17]. The proof of the equivalent statements (7) & (8) needs to be managed more carefully. The absolute convergence of the integral (6.1) can be justified from [35, Proposition 2.4]. We omit the details, since the proof is standard [29, Lemmata 4.1 and 4.2]. Taking care of the characterization of poles in terms of the residual integrals (6.1), the proof of [29, Theorem 4.3], which originates from [11, §2.6.1], carries over verbatim to the setting of the Flicker-Rallis and Friedberg-Jacquet periods. The main point is to think of (6.1) as a constant multiple of the leading coefficient in the Laurent expansion of the Jacquet-Piatetski-Shapiro-Shalika integral \(\Psi(s,W_{\rho_{1}},W_{\rho_{2}},\Phi)\), the Flicker integral \(I(s,W_{\rho},\Phi)\), the Friedberg-Jacquet integral \(Z(s,W_{\rho},\Phi)\), and the Jacquet-Shalika integral \(J(s,W_{\rho},\Phi)\) at \(s=s_{0}\), respectively. When \(\Pi=\pi_{1}\times\pi_{2}\), our result can be regarded as a special case of [47, Theorem 1.3] for \(d_{\pi}(\sigma)=1\). **Corollary 6.2** (Equivalent Periods).: _Let \(\rho\) be a level zero supercuspidal representation of \(\operatorname{GL}_{2m}(F)\) constructed from an irreducible cuspidal representation \(\pi\) of \(\operatorname{GL}_{2m}(\mathbb{F})\). The following statements are equivalent:_ 1. \(\pi\) _admits a Friedberg-Jacquet vector._ 2. \(\pi\) _admits a Jacquet-Shalika vector._ 3. \(\rho\nu^{s_{0}}\) _is_ \(H_{2m}(F)\)_-distinguished for some_ \(s_{0}\in\mathbb{C}\)_._ 4. \(\rho\nu^{s_{0}}\) _is_ \((S_{2m}(F),\Theta)\)_-distinguished for some_ \(s_{0}\in\mathbb{C}\)_._ Proof.: The following equivalent statements can be read from the proof of [27, Theorem 4.1], which has its origin in the work of Matringe [36, Proposition 2.1]: 1. \(L(2s,\rho,\wedge^{2})\) has a pole at \(s=s_{0}\); 2. \(L(s,\rho,\operatorname{BF})\) has a pole at \(s=s_{0}\); 3. \(\rho\nu^{s_{0}}\) is \((S_{2m}(F),\Theta)\)-distinguished; 4. \(\rho\nu^{s_{0}}\) is \(H_{2m}(F)\)-distinguished. All we need to do at this point is to look back on Theorem 6.1 for the Friedberg-Jacquet and Jacquet-Shalika periods. _Acknowledgments._ The author is deeply indebted to Elad Zelingher for sending a proof of Theorem 3.11 to us and kindly allowing us to reproduce it here. We would like to thank Rongqing Ye for drawing our attention to the equality of exterior square gamma factors in the author's thesis, and Andrew Knightly and Gilbert Moss for many fruitful discussions.
Thanks are also owed to David Schwein for elaborating on close field theory in the Harish-Chandra learning seminar, where the author first learned the topic. This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (No. RS-2023-00209992). **Data availability.** This manuscript has no associated data. **Conflicts of interest.** The author states that there is no conflict of interest.
2303.10176
Single-polarization Hybrid Hollow-core Anti-resonant Fiber Designs at 2 $μ$m
In this letter, to the best of our knowledge, a new type of hollow-core anti-resonant fiber (HC-ARF) design using hybrid silica/chalcogenide cladding is presented for single-polarization, high-birefringence, and endlessly single-mode operation at 2 $\mu$m wavelength. We show that the inclusion of a chalcogenide layer in the cladding allows strong suppression of $x$-polarization, while maintaining low propagation loss and single-mode propagation for $y$-polarization. The optimized HC-ARF design includes a combination of low propagation loss, high-birefringence, and polarization-extinction ratio (PER) or loss ratio of 0.02 dB/m, 1.2$\times$10$^{-4}$, >550 respectively, while the loss of the $x$-polarization is >20 dB/m. The proposed fiber may also be coiled to small bend radii while maintaining low bend-loss of $\approx$ 0.01--0.1 dB/m, and can potentially be used as polarization filter based on the different gap separations and bend conditions.
Herschel Herring, Md Selim Habib
2023-03-16T19:26:09Z
http://arxiv.org/abs/2303.10176v1
# Single-polarization Hybrid Hollow-core Anti-resonant Fiber Designs at 2 \(\mu\)m ###### Abstract In this letter, to the best of our knowledge, a new type of hollow-core anti-resonant fiber (HC-ARF) design using a hybrid silica/chalcogenide cladding is presented for single-polarization, high-birefringence, and endlessly single-mode operation at the 2 \(\mu\)m wavelength. We show that the inclusion of a chalcogenide layer in the cladding allows strong suppression of the \(x\)-polarization, while maintaining low propagation loss and single-mode propagation for the \(y\)-polarization. The optimized HC-ARF design includes a combination of low propagation loss, high-birefringence, and polarization-extinction ratio (PER) or loss ratio of 0.02 dB/m, 1.2\(\times\)10\({}^{-4}\), and \(>\)550, respectively, while the loss of the \(x\)-polarization is \(>\)20 dB/m. The proposed fiber may also be coiled to small bend radii while maintaining a low bend-loss of \(\approx\) 0.01-0.1 dB/m, and can potentially be used as a polarization filter based on the different gap separations and bend conditions. Hollow-core anti-resonant fiber, high-birefringence, single-polarization, single-mode fiber. ## I Introduction The 2 \(\mu\)m wavelength is of particular interest in the field of optics and photonics due to its unique property of high absorption in water. This characteristic allows for an enormous range of optical fiber system applications such as biomarker detection, environmental observation, and molecular spectroscopy [1, 2]. High light transmission, excellent polarization control, and high-birefringence are much needed for many optical devices such as fiber sensors [3], fiber lasers [4], optical amplifiers [5], and fiber-based gyroscopes [6]. Although several designs in the past have achieved high-birefringence and single-polarization in solid-core [7] and photonic band gap [8] fibers, hollow-core anti-resonant fibers (HC-ARFs) have proven to be the foremost choice because of their unique and exceptional optical guidance in air [9]. HC-ARFs guide light through inhibited coupling between the core and cladding modes (CMs) and the anti-resonant effect [10]. This mechanism provides wide transmission bandwidth [11, 12, 13, 14], extremely low power overlap with cladding tubes [15], low anomalous dispersion [16], and extremely low loss [17]. Achieving high-birefringence and single-polarization using the HC-ARF structure is a relatively new concept, which was first proposed in 2016 [18]. Recently, a few high-birefringence and single-polarization HC-ARF designs have been proposed in the near-IR regime, mostly at 1550 nm or 1064 nm, with different cladding arrangements and numbers of layers in the cladding structure [18, 19, 20, 21]. High-birefringence and single-polarization were achieved either by using multiple nested resonators [18, 21] or high index materials in the cladding [19, 20]. Most recently, four-fold [21] and six-fold [22] symmetry bi-thickness HC-ARFs were experimentally reported with outstanding optical performances. For example, in [21], phase birefringences of \(2.35\times 10^{-5}\) and \(9.1\times 10^{-5}\) at 1550 and 1589 nm were achieved, respectively. However, to the best of our knowledge, single-polarization and single-mode HC-ARFs have not yet been investigated at 2 \(\mu\)m. In this paper, we propose a semi-nested HC-ARF design that utilizes a hybrid silica/chalcogenide cladding to attain low propagation loss, single-polarization, high-birefringence, and endlessly single-mode operation simultaneously at 2 \(\mu\)m.
## II Fiber Geometry The HC-ARF geometries investigated in this work are displayed in Fig. 1(a-b). Fig. 1(a) shows a semi-nested 6-tube non-touching HC-ARF architecture in which chalcogenide (As\({}_{2}\)Se\({}_{3}\)) tubes (orange color) are inserted into two horizontally placed silica tubes, and two of the nested tubes are removed from those silica tubes to ensure low loss in one polarization state (\(y\)-polarization) and high loss for the other polarization state (\(x\)-polarization). Chalcogenide is used due to its high transmission at 2 \(\mu\)m. The HC-ARF has a core diameter of \(D_{\rm c}\), tube diameter \(D\), wall thickness of the silica/As\({}_{2}\)Se\({}_{3}\) tubes, \(t_{1}/t_{2}\), nested tube ratio, \(d/D\), and a gap separation between the outer tubes, \(g\). Fig. 1(b) shows a regular HC-ARF design without nested tubes. In all our simulations, we choose a relatively large core diameter of \(D_{\rm c}\) = 56 \(\mu\)m to ensure low loss in the \(y\)-polarization. However, we optimize the silica/chalcogenide tube thickness (\(t_{1}/t_{2}\)) and gap separation, \(g\), to ensure single-polarization, single-mode, and high-birefringence operation. The outer diameter \(D\) is related to the core diameter \(D_{\rm c}\), wall thickness \(t_{1}\), gap separation \(g\), and number of tubes \(N\), which can be written as [23]: \[\frac{D}{2}=\frac{\frac{D_{\rm c}}{2}\sin(\frac{\pi}{N})-\frac{g}{2}-t_{1}(1-\sin(\frac{\pi}{N}))}{1-\sin(\frac{\pi}{N})}. \tag{1}\] A short numerical check of this relation is given at the end of Sec. III-C. ## III Numerical Results and Discussion The simulations were performed using fully-vectorial finite-element (FE) modeling. A perfectly-matched layer (PML) boundary was placed outside the fiber domain to accurately calculate the confinement loss. The mesh size and PML boundary conditions were chosen similarly to [11, 20, 24]. The propagation loss was calculated by considering confinement loss, effective material loss [24], and surface scattering loss (SSL) [25, 11]. ### _Optimization of gap separation and As\({}_{2}\)Se\({}_{3}\) wall thickness_ We started our investigations towards achieving high-birefringence, single-mode, and single-polarization operation by optimizing the gap separation, \(g\), and the chalcogenide (As\({}_{2}\)Se\({}_{3}\)) wall thickness, \(t_{2}\), for a fixed core diameter, \(D_{\rm c}=56\ \mu\)m, silica wall thickness, \(t_{1}=560\) nm, and normalized nested tube ratio, \(d/D=0.5\). The desired properties of low loss, single-polarization, and high-birefringence are highly dependent on these two parameters. In particular, the effect of \(g\) is critical to examine since it heavily influences how the fundamental-mode (FM) of one polarization is guided in the core while that of the other polarization spreads to the cladding. Fig. 1(d) shows the mode-field profiles of both nested and regular HC-ARFs, in which it can be seen that the light of the \(y\)-polarization is almost entirely guided in the core, while the light of the \(x\)-polarization spreads more towards the As\({}_{2}\)Se\({}_{3}\) cladding. It is also clear that there is enhanced performance with the nested HC-ARF compared to the regular HC-ARF, as more of the light is well guided in the core and only weakly leaks towards the cladding. Fig. 1: HC-ARF geometries: 6-tube (a) semi-nested HC-ARF with six nested tubes and (b) regular HC-ARF in which the nested tubes are not present in the outer tubes.
(c) Both fibers have a core diameter of \(D_{\rm c}=56\ \mu\)m; \(t_{1}/t_{2}\) = wall thickness of the silica/As\({}_{2}\)Se\({}_{3}\) tubes, \(d\) = nested tube diameter, \(D\) = outer tube diameter, \(g\) = separation between the outer tubes. (d) Normalized mode-field profiles of both polarizations for the nested and regular HC-ARF at 2 \(\mu\)m. The color bar shows the intensity distributions on a linear scale. The mode-field profile (\(y\)-pol.) of the nested HC-ARF is more confined to the core compared to the regular HC-ARF. (e) The FE-simulated results of birefringence, \(|n_{x}-n_{y}|\), propagation loss in dB/m of the \(y\)- and \(x\)-polarizations, and PER or loss ratio of both polarizations. Top panel: nested HC-ARF; bottom panel: regular HC-ARF. The simulations were performed at 2 \(\mu\)m. To plot the 2D surface plots, the gap separation, \(g\), and As\({}_{2}\)Se\({}_{3}\) wall thickness, \(t_{2}\), were scanned with 30 and 40 data points, respectively, and values between the data points are interpolated. Fig. 2: The FE-simulated results of birefringence, \(|n_{x}-n_{y}|\), propagation loss of the \(y\)- and \(x\)-polarizations, and loss ratio of both polarizations as a function of the As\({}_{2}\)Se\({}_{3}\) thickness, \(t_{2}\), for different values of the silica thickness, \(t_{1}\). The HC-ARF has core diameter, \(D_{\rm c}=56\ \mu\)m, normalized tube ratio, \(d/D=0.5\), and gap separation, \(g=2\ \mu\)m. The simulations were performed at 2 \(\mu\)m. The silica wall thickness, \(t_{1}\), and As\({}_{2}\)Se\({}_{3}\) wall thickness, \(t_{2}\), were scanned with 25 and 30 data points, respectively, and values between the data points are interpolated. The FE-simulated results for the semi-nested and regular HC-ARF are shown in Fig. 1(e). From the 2D surface plots, we can see that there is low loss for the \(y\)-polarization, high loss for the \(x\)-polarization, high-birefringence, and a high loss ratio around the region where \(g\) = 2 \(\mu\)m and \(t_{2}\) = 559 nm. The graphical patterns for the semi-nested and regular HC-ARF are similar with regard to birefringence and the loss of both polarizations; however, the semi-nested structure displays a much higher loss ratio. Numerically, the semi-nested structure also shows significantly lower loss of the \(y\)-polarization, with a minimum FM loss of \(\approx\) 0.0011 dB/m compared to that of the regular structure at \(\approx\) 0.05 dB/m. It is also evident that the ideal range is more strongly impacted by the value of the As\({}_{2}\)Se\({}_{3}\) wall thickness, since there is a visible boundary near \(t_{2}\) = 559 nm where the birefringence drops greatly from \(>\)1.23\(\times 10^{-4}\) to \(<\)0.27\(\times 10^{-4}\). For both semi-nested and regular HC-ARFs, the maximum possible loss of the \(x\)-polarization, birefringence, and loss ratio, as well as the minimum loss of the \(y\)-polarization, do not all occur at the same values of \(g\) and \(t_{2}\). For example, the semi-nested structure's best values are 290 dB/m, \(>\)1.23\(\times 10^{-4}\), \(>\)1.7\(\times 10^{5}\), and \(\approx\) 1 dB/km, respectively, all occurring at different values of \(g\) and \(t_{2}\). The optimal region, where a birefringence of \(>\)1.0 \(\times 10^{-4}\) is achieved while maintaining an FM loss of the \(y\)-polarization of \(<\)0.05 dB/m, is highly sensitive, and occurs where \(g\) = 2 \(\mu\)m and 559 nm \(<t_{2}<\) 559.5 nm. A polarization-extinction ratio (PER) or loss ratio of \(>\)550 is obtained in this region as well.
Values near this \(t_{2}\) range were chosen for analysis in further sections with the anticipation that high-birefringence, low loss, and single-mode operation could be enhanced with additional parameter tuning. ### _Optimization of silica/As\({}_{2}\)Se\({}_{3}\) wall thickness (\(t_{1}\)/\(t_{2}\))_ In this section, we optimized the wall thicknesses of silica/As\({}_{2}\)Se\({}_{3}\) while fixing our gap separation, \(g\) = 2 \(\mu\)m, as found in the previous section. Here, we maintained the fixed core diameter and normalized nested tube ratio of \(D_{\rm c}\) = 56 \(\mu\)m and \(d\) / \(D\) = 0.5, respectively. From the results of the 2D contour plots seen in Fig. 2, we can see that the wall thicknesses are crucial to achieving low loss, high-birefringence, and single-polarization. As mentioned previously, we chose the value of \(g\) = 2 \(\mu\)m with the anticipation that further parameter tuning could lead to enhanced birefringence. From the simulated results of the silica/As\({}_{2}\)Se\({}_{3}\) sweep, a high birefringence of \(>\)1.20 \(\times 10^{-4}\) was achieved in the region where \(t_{1}\) = 564 nm and \(t_{2}\) = 559 nm. At these values, the FM loss of the \(y\)-polarization can be made as low as 0.02 dB/m while the loss of the \(x\)-polarization is \(>\)20 dB/m. The loss ratio is in the range of 578-970 here as well. Again, the birefringence is highly sensitive to changes in the wall thickness of As\({}_{2}\)Se\({}_{3}\), and a sharp drop can be seen once it exceeds 559 nm. These results demonstrate that by sufficient tuning of the silica thickness \(t_{1}\), we are able to enhance the properties of low loss, high-birefringence, and single-polarization. By doing so, we were able to increase the birefringence from \(>\)1.0 \(\times 10^{-4}\) to \(>\)1.2\(\times 10^{-4}\) and the loss ratio from \(>\)400 to \(>\)550, while reducing the FM loss of the \(y\)-polarization from \(<\)0.05 dB/m to \(<\)0.03 dB/m. ### _Single-mode operation and higher-order mode suppression_ Following the examination of the low loss, high-birefringence, and single-polarization properties, effectively single-mode operation and the suppression of the higher-order modes (HOMs) are demonstrated by optimization of the gap separation, \(g\), and the nested tube ratio, \(d\) / \(D\). The FE-simulated results for this are shown in Fig. 3, where we can easily see that there is a large region where the loss of the FM is very low (\(<\)0.02 dB/m) while the loss of the HOMs is high (\(>\)20 dB/m). It is also important to note that the FM loss is more heavily influenced by changes in \(g\) compared to changes in \(d\) / \(D\). However, \(d\) / \(D\) has a significant impact on the loss of the HOMs. The region of highest loss of the HOMs becomes narrow for the range 1 \(\mu\)m \(<g<\) 2.5 \(\mu\)m, where the value of \(d\) / \(D\) must be precise in order to obtain effectively single-mode operation. The coupling between the core-guided HOMs and CMs is shown in Fig. 3(c-d). The effective mode index of the FM-like mode (red dotted line) remains unchanged as a function of \(d\) / \(D\), avoiding any phase matching with CMs. This mechanism confirms low loss guidance. However, the effective indices of the HOMs strongly couple with CMs and tube modes at various values of \(d\) / \(D\) (also see Fig. 3(d)), ensuring high loss for the HOMs and thus confirming effectively single-mode operation.
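As promised in Sec. II, the geometric relation in Eq. (1) is straightforward to evaluate numerically. The following minimal Python sketch (ours; the function name and printout are not from the paper) computes the outer tube diameter for the design values used above:

```python
import math

def tube_diameter(D_c, t1, g, N):
    """Outer tube diameter D from Eq. (1): N non-touching cladding tubes of
    wall thickness t1 around a core of diameter D_c with gap separation g."""
    s = math.sin(math.pi / N)
    return 2.0 * (0.5 * D_c * s - 0.5 * g - t1 * (1.0 - s)) / (1.0 - s)

# Design values from this work (all lengths in micrometers).
D = tube_diameter(D_c=56.0, t1=0.564, g=2.0, N=6)
print(f"outer tube diameter D ~ {D:.2f} um")                   # ~50.9 um
print(f"nested tube diameter d ~ {0.5 * D:.2f} um")            # for d/D = 0.5
```

As a sanity check, setting \(g=0\) and \(t_{1}=0\) for \(N=6\) returns \(D=D_{\rm c}\), the familiar touching-tube limit.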
### _Bend loss analysis_ Lastly, we examined the bend loss of the semi-nested HC-ARF using [11] for a range of bend radii \(R_{\rm b}\) of \(5-10\) cm for different \(g\) = {2, 3, 4} \(\mu\)m, a fixed core diameter \(D_{\rm c}\) = 56 \(\mu\)m, \(t_{1}\)/\(t_{2}\) = 564/559 nm, and normalized nested tube ratio, \(d/D=0.65\). Fig. 3: The FE-simulated propagation loss of the \(y\)-polarization for (a) the \(LP_{01}\)-mode and (b) the HOMs for varying values of \(d\) / \(D\) and gap separation, \(g\). Results were scanned with 50 and 30 data points, respectively, and values between the data points are interpolated. Effect of changing the normalized nested tube ratio, \(d\) / \(D\), on (c) the effective mode index with a core diameter, \(D_{\rm c}\) = 56 \(\mu\)m, \(t_{1}\)/\(t_{2}\) = 564/559 nm, and gap separation, \(g\) = 2 \(\mu\)m. The normalized tube ratio, \(d\) / \(D\), was scanned from 0.3 to 0.8 by simulating 50 modes at 2 \(\mu\)m. (d) The electric field intensities of the first four core-guided modes and CMs are shown for \(d\) / \(D\)\(\approx\)0.42 and \(g\) = 2 \(\mu\)m on a linear color scale. The bend loss in the \(x\)-direction is much higher than in the \(y\)-direction and shows strong light coupling to the tube modes, as expected due to the missing nested tubes in the \(x\)-direction. The strong coupling between the core modes and tube modes occurs at different bend radii, and the coupling shifts for different \(g\) values. This phenomenon is interesting because it indicates that the fiber can be used as a polarization filter at those gap separations and bend conditions, as displayed in Fig. 4. ## Acknowledgment The authors would like to thank Dr. Rodrigo Amezcua-Correa for providing high performance computational support. ## IV Conclusion In conclusion, we present a novel HC-ARF architecture that has been developed and validated to have single-polarization, high-birefringence, and single-mode operation characteristics at the 2 \(\mu\)m wavelength. Through extensive fiber parameter optimizations based on regular and semi-nested hybrid silica/chalcogenide HC-ARF structures, we demonstrated that the semi-nested geometry has better performance compared to the regular HC-ARF. We also found a favorable range of the gap separation and chalcogenide wall thickness. From there, we were able to tune the silica thickness to obtain an enhanced combination of low loss, high-birefringence, and polarization-extinction ratio of 0.02 dB/m, 1.2\(\times\)10\({}^{-4}\), and \(>\)550, respectively. Furthermore, effectively single-mode operation was achieved by suitably choosing the nested tube ratio, \(d/D\). We found that the loss of the HOMs could be made as high as 20 dB/m while the loss of the FM was \(<\)0.02 dB/m for a large range of \(g\) and \(d/D\) values. Finally, we investigated the effect of changing the bend radius in both the \(x\)- and \(y\)-directions. The strong coupling between the core-guided modes and CMs leads to an interesting phenomenon through which the proposed fiber can be used as a polarization filter. The exceptional polarization, low propagation loss, and single-mode properties presented in this work will pave the way toward practical applications at the highly appealing 2 \(\mu\)m wavelength.
2308.12413
Non-Linear Relay Optimization using Deep-Learning tools
Widespread deployment of relays can yield a significant boost in the throughput of forthcoming wireless networks. However, the optimal operation of large relay networks is still infeasible. This paper presents two approaches for the optimization of large relay networks. In the traditional approach, we formulate and solve an optimization problem where the relays are considered linear. In the second approach, we take an entirely new direction and consider the true non-linear nature of the relays. Using the similarity to neural networks, we leverage deep-learning methodology. Unlike previous applications of neural networks in wireless communications, where neural networks are added to the network to perform computational tasks, our deep relay optimization treats the relay network itself as a neural network. By exploiting the non-linear transfer function exhibited by each relay, we achieve over 15dB gain compared to traditional optimization methods. Moreover, we are able to implement part of the network functionality over the relay network. Our findings shed light on the potential of deep relay optimization, promising significant advancements in future wireless communication systems.
Itsik Bergel
2023-08-23T20:23:48Z
http://arxiv.org/abs/2308.12413v1
# Non-Linear Relay Optimization using Deep-Learning tools ###### Abstract Widespread deployment of relays can yield a significant boost in the throughput of forthcoming wireless networks. However, the optimal operation of large relay networks is still infeasible. This paper presents two approaches for the optimization of large relay networks. In the traditional approach, we formulate and solve an optimization problem where the relays are considered linear. In the second approach, we take an entirely new direction and consider the true non-linear nature of the relays. Using the similarity to neural networks, we leverage deep-learning methodology. Unlike previous applications of neural networks in wireless communications, where neural networks are added to the network to perform computational tasks, our deep relay optimization treats the relay network itself as a neural network. By exploiting the non-linear transfer function exhibited by each relay, we achieve over \(15\)dB gain compared to traditional optimization methods. Moreover, we are able to implement part of the network functionality over the relay network. Our findings shed light on the potential of deep relay optimization, promising significant advancements in future wireless communication systems. ## I Introduction The importance of relaying has been known since the early days of wireless communication (e.g., [1, 2, 3]). In recent years, relays have become even more accessible due to progress in energy harvesting and full duplex communication. Energy harvesting (e.g., [4, 5]) enables devices to gather energy from their surroundings without relying on a traditional power source. The limited energy output is generally sufficient to activate a low-power relay. Full duplex communication (e.g., [6, 7]) allows devices to transmit and receive signals over the same frequency simultaneously. Thus, current technology enables the deployment of many relays without worrying about power sources or wasting valuable spectral resources. However, despite the potential for substantial throughput gains, current communication networks underutilize relays. This is primarily due to the complexity involved in managing and optimizing a large number of relays. We commonly distinguish between two types: decode-and-forward relays (e.g., [8, 9, 10]) and amplify-and-forward relays (e.g., [8, 11]). Decode-and-forward relays can achieve superior performance, but at a cost of significantly higher relay complexity and more challenging network design. In this research, we focus on amplify-and-forward relays, which boast simpler manufacturing and operation. Optimization of amplify-and-forward relay networks is far from trivial, as their performance usually presents a non-convex behavior. Recent works (e.g., [12, 13]) have made progress and solved the optimization of various relay network topologies. Yet, there is no known solution for general relay networks. Our work addresses this problem from two distinct perspectives. The first part extends the state-of-the-art relay network optimization, by solving the optimization of cascade relay networks (encompassing also all topologies with known optimal solutions). This novel optimization approach offers a versatile solution applicable to any relay network without loops. This solution stands as an independent novel contribution, presenting a substantial extension beyond existing results. Moreover, it provides a benchmark for evaluating the novel scheme of the second part. 
The second part of our work steps out of traditional network optimization and presents a completely novel approach to relay design. One key limitation of traditional relay network optimization lies in considering the relays as linear amplifiers (with a power constraint). Thus, they must limit the relay operation to the regime where it can be approximated as linear. As a result, these methods are forced to set a power constraint lower than the actual power achievable by the relays. In our research, we recognize and leverage the non-linear transfer function exhibited by each relay. Recognizing the striking similarity between relay networks and neural networks, we embrace neural network algorithms to effectively optimize the relay network. By doing so, we unleash the relay network's performance beyond the confines of linearity, enhancing communication throughput and efficiency. Neural networks have gained much popularity in recent years due to their ability to solve tough computational challenges, particularly when a good problem model is unavailable. The use of neural networks has been suggested in many communication applications (e.g., [14, 15, 16, 17]) and even in relay applications (e.g., [18, 19, 20]). In particular, [18] used neural networks for relay selection, [20] combined them with power allocation, and [19] used neural networks for predicting outage probabilities. However, all applications of neural networks in wireless communications focus on inserting neural networks into nodes in the network to perform computational tasks. In this work, we use neural network technology in a completely different way. We observe that the power limit at the relay exhibits a non-linear transfer function. This non-linear behavior is very similar to the hyperbolic tangent (tanh) commonly used in neural networks. Thus, instead of adding a neural network to our system, we treat our system as a neural network. Thus we analyze and optimize the relays using neural network algorithms. Furthermore, we extend the benefits of the similarity to neural networks by implementing various functionalities over the relay network. This opens up exciting possibilities for data processing over the relay network. We demonstrate these capabilities by training the network to non-linearly separate the transmitted signal into the desired components of each receiver. Our results show gains of over 15 dB compared to the state of the art in a cellular network with 100 relays. The main advantages of the _deep relay (DR) optimization_ are: * Robust optimization, using well-established deep-learning techniques. * Improved communication over relay networks. * New computational capabilities "over the air". The main differences between the DR and other implementations of neural networks are: * Relays are treated as neurons but are actual parts of the network. * Most of the network topology is determined by the channel gains, resulting from physical phenomena. The network optimization can only control the gain (and possibly bias) at each relay. * The input of each relay is affected by additive noise. Thus, we need to cope with many noise sources within the network. While the proposed approach is relevant for all applications of relay networks, we prefer to focus here on low-complexity receivers. Such receivers are important, for example, for Internet-of-Things (IoT) devices [21, 22, 23], where each receiver requires a low data rate, but the network must support a large number of receivers. 
It is important to note that the relay network can learn several scenarios simultaneously. Using just low-rate control signaling, the network can switch between a set of scenarios, where each relay keeps in memory its gain and bias for each scenario. Hence, the same network can serve a large number of users, keeping a specific setup in memory for each set of users. In conclusion, this research delves into the untapped potential of non-linear relay networks, leveraging deep-learning tools to optimize their performance. By treating the relay system as a neural network, we unlock greater power utilization, leading to improved communication efficiency. In the following, we first present the system model in Section II. Section III solves the optimization problem in the traditional approach, while Section IV presents our novel deep-learning approach. Section V presents numerical studies that demonstrate the advantages of our approach, and Section VI gives our concluding remarks. ## II System model We consider a single sector of a cellular network, with a single transmitter (base station), \(M\) receivers and \(N\) relays, as demonstrated in Fig. 1. All transmissions in the network are performed over the same frequency. The transmitter simultaneously transmits independent data to each of the receivers. For the clarity of this basic study on deep relay networks, we make two simplifying assumptions. We assume that all signals are real (i.e., transmitted signals, channel gains and relay gains are all real) and we assume perfect full duplex and directional antennas. These assumptions simplify the mathematical presentation while retaining the essence of the relay network. Both assumptions should be relaxed in future studies. We note that traditional optimization is performed by maximizing the signal-to-noise ratio (SNR). Thus, it does not depend on the specific modulation used. On the other hand, deep-learning is based on the specific signal values, and hence must focus on a specific modulation. In the following, we describe the complete signal structure, recalling that the definitions of the modulation are only needed for Section IV. The bits intended for receiver \(m\) at time \(k\) are denoted by \(\mathbf{u}_{m}[k]=\left[u_{m,1}[k],\ldots,u_{m,B}[k]\right]\), where \(B\) denotes the number of bits simultaneously transmitted to each receiver. We consider a single antenna transmitter (and hence, signal separation cannot be done by spatial multiplexing). Thus, all transmitted data is jointly modulated into a single symbol. This is done by stacking the bits intended for all users into a single vector, \(\mathbf{u}[k]=[\mathbf{u}_{1}[k],\ldots,\mathbf{u}_{M}[k]]\), using Gray coding followed by pulse amplitude modulation (PAM). Without loss of generality, the maximal absolute value at the transmitter output is set to \(1\). More specifically, let \(a=0,\ldots,2^{MB}-1\) represent the symbol index; then the transmitted value for symbol \(a\) is \((2a-2^{MB}+1)/(2^{MB}-1)\). This value represents a value of \(\left\lfloor\frac{a+2^{c-1}}{2^{c}}\right\rfloor\mod 2\) for the \(c\)-th transmitted bit (\(1\leq c\leq MB\)), which is bit \((B-((c-1)\bmod B))\) of user \((M+1-\lceil c/B\rceil)\). For example, for \(B=1\) and \(M=2\), the four PAM points delivering one bit per channel use for each user are shown in Table I.
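Since Table I is not reproduced here, the following short Python sketch (ours) enumerates the constellation directly from the formulas above for \(B=1\) and \(M=2\), recovering the Gray-coded 4-PAM mapping:

```python
M, B = 2, 1                    # two users, one bit per user per channel use
MB = M * B

def symbol_value(a):
    """Transmitted PAM value of symbol index a: (2a - 2^MB + 1)/(2^MB - 1)."""
    return (2 * a - 2 ** MB + 1) / (2 ** MB - 1)

def bit_value(a, c):
    """Value of the c-th transmitted bit: floor((a + 2^(c-1)) / 2^c) mod 2."""
    return ((a + 2 ** (c - 1)) // 2 ** c) % 2

for a in range(2 ** MB):
    bits = {}
    for c in range(1, MB + 1):
        user = M + 1 - -(-c // B)          # user (M + 1 - ceil(c/B))
        bit_idx = B - ((c - 1) % B)        # bit (B - ((c-1) mod B)) of that user
        bits[f"u{user},{bit_idx}"] = bit_value(a, c)
    print(f"s = {symbol_value(a):+.3f}  {bits}")

# Prints the Gray-coded mapping: -1 -> (u1=0, u2=0), -1/3 -> (u1=0, u2=1),
# +1/3 -> (u1=1, u2=1), +1 -> (u1=1, u2=0); adjacent symbols differ in one bit.
```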
Each relay amplifies its received signal, applies its (non-linear) transfer function, and transmits it forward. The system is tuned by setting the gain of each relay and (possibly) adding bias signals at the relays (to better utilize the relay non-linearity). Each relay is equipped with four directional (sector) antennas, each of \(90^{\circ}\) width. Out of these antennas, only two are active. The receive antenna points backward to the transmitter (and will also receive any relay in its beam width). The transmit antenna points in the opposite direction (forward). The assumption of ideal directional antennas guarantees that the network will not contain loops. Thus, we can look at it as a cascade network, in which the relays are divided into layers, and each relay can receive signals only from the BS and from relays at previous layers. Let \(y_{i,a}[k]\) be the signal received by relay \(a\) of layer \(i\) at time \(k\), and \(\mathbf{y}_{i}[k]=\left[y_{i,1}[k],\ldots,y_{i,N_{i}}[k]\right]^{T}\) be the vector of inputs for all relays at layer \(i\) (with \(N_{i}\) being the number of relays at layer \(i\)). The received signal vector at layer 1 is given by: \[\mathbf{y_{1}}[k]=\mathbf{h_{1}}s[k]+\mathbf{n_{1}}[k] \tag{1}\] where \(s[k]\) is the signal transmitted by the BS, \(\mathbf{h}_{i}\) is the vector of channel gains from the BS to the relays of layer \(i\), and \(\mathbf{n}_{i}[k]\) is the vector of additive white noise at each relay, which is assumed to have an independent Gaussian distribution with zero mean and variance \(\sigma^{2}\). Fig. 1: Network model example: One BS (red circle), 100 relays and 2 receivers (green triangle). As shown in the figure, the receive antennas of all relays are pointing toward the BS. Fig. 2: Transfer function of a relay with a gain of \(w\) and a bias of \(b\). The slope at the linear regime is determined by the gain such that \(w=\tan(\theta)\). The amplify-and-forward relay has limited power. The traditional analysis considers the relays as linear amplifiers with a power constraint. To improve performance, we consider the actual transfer function of the relays. Each amplifier has its own unique transfer function. Typically, the transfer function is almost linear for low input values and reaches saturation for large input values. An example of a transfer function is depicted in Fig. 2. This transfer function has two controlled parameters: the gain, \(w\), and an added bias, \(b\). (The traditional analysis does not consider the bias, as it cannot improve the performance in a linear model.) Again, without loss of generality, we set the maximal absolute value at the output of each transmitter to be \(1\). The resemblance of Fig. 2 to the common transfer function of a neuron in a neural network leads to the concept of using deep-learning tools for the optimization of the network. Thus, the network is tuned by setting the gain and bias of each relay, and we use backpropagation to optimize the network. This approach will be presented in Section IV. The signal at relay \(j\) of layer \(i\) is amplified by a gain of \(w_{i,j}\), added to a bias term \(b_{i,j}\) and then subjected to the amplifier non-linearity. For simplicity, we assume throughout that all relays are characterized by the hyperbolic tangent function. Thus, the signal at the output of layer \(i\) is given by: \[\mathbf{o}_{i}[k]=\tanh\left(\text{diag}(\mathbf{w}_{i})\mathbf{y}_{i}[k]+\mathbf{b}_{i}\right). \tag{2}\] The input for layer \(i>1\) is given by: \[\mathbf{y}_{i}[k]=\mathbf{h}_{i}s[k]+\sum_{\ell=1}^{i-1}\mathbf{F}_{i,\ell}\mathbf{o}_{\ell}[k]+\mathbf{n}_{i}[k] \tag{3}\] where \(\mathbf{F}_{i,\ell}\) is the matrix of channel gains from layer \(\ell\) to layer \(i\). Finally, we assume no direct link from the BS to the receivers, and the received signal at user \(m\) is: \[r_{m}[k]=\sum_{i=1}^{d}\mathbf{g}_{i,m}^{T}\mathbf{o}_{i}[k]+\tilde{n}_{m}[k] \tag{4}\] where \(d\) is the number of layers, \(\mathbf{g}_{i,m}\) is the vector of channel gains from layer \(i\) to receiver \(m\), and \(\tilde{n}_{m}[k]\) is the additive Gaussian noise, again with variance \(\sigma^{2}\). We assume Gaussian fading over all channels, so that each channel gain is given by \(r^{-\alpha}\cdot v\), where \(r\) is the link length, \(\alpha=4\) is the path loss exponent and \(v\) is independent Gaussian fading with zero mean and unit variance. The receiver applies a set of detection functions to produce bit estimates: \[\hat{u}_{m,b}[k]=q_{m,b}(r_{m}[k]),\ m=1,...,M,\ b=1,...,B \tag{5}\] and the performance is measured by the bit error rate (BER), given by \[\epsilon_{m,b}=\Pr(\hat{u}_{m,b}\neq u_{m,b}). \tag{6}\] We will focus on max-min BER optimization. Thus, our network performance metric will be \[\epsilon=\max_{m,b}\epsilon_{m,b}. \tag{7}\] If the relays were linear, each network output would be a scaled and noisy version of the transmitted signal. In such a case, we expect the receiver to employ a PAM receiver that matches the transmitted modulation. Note that the structure of the modulation is such that user \(m\)'s data can be decoded with an \(mB\)-PAM receiver. Thus, only user \(M\) truly needs an \(MB\)-PAM receiver. With non-linear relays, we can train the relays to perform some of the signal separation and hence allow simpler receivers. In particular, we may wish that all users employ only \(B\)-PAM receivers. While the implementation of PAM receivers is standard, we give here an exact description of our implementation. This description serves to better define the types of receivers we consider, and also as a preparation for the deep-learning scheme, which will use some of these functions. To accommodate all types of considered receivers, we divide the detection functions into three parts: scaling, output processing and decision. The scaling part linearly adjusts the received signal to match the receiver decision zones. Thus, the scaled output is given by: \[\bar{r}_{m}[k]=\bar{w}_{m}r_{m}[k]+\bar{b}_{m}. \tag{8}\] The values of \(\bar{w}_{m}\) and \(\bar{b}_{m}\) for each output are determined by the optimization algorithm. The output processing part takes the network output and extracts the bits of the specific user. Let \(f(x)=2\sqrt{x^{2}+\epsilon^{2}}-1\), with \(\epsilon=0.01\). Also, let \(f^{(0)}(x)=x\) and \(f^{(z)}(x)=f\big{(}f^{(z-1)}(x)\big{)}\). We consider two types of receivers. For standard receivers, user \(m\) uses an \(mB\)-PAM receiver, and the output processing for its \(b\)-th bit is \[\bar{q}_{m,b}[k]=f^{(mB-b)}\left(-\frac{2^{MB}-1}{2^{MB}}\cdot\bar{r}_{m}[k]\right). \tag{9}\] For low-complexity receivers, all users use \(B\)-PAM receivers, and the output processing for their \(b\)-th bit is \(\bar{q}_{m,b}[k]=f^{(B-b)}\left(-\frac{2^{MB}-1}{2^{MB}}\cdot\bar{r}_{m}[k]\right)\). The decision function in all cases is \(\hat{u}_{m,b}[k]=1\) if \(\bar{q}_{m,b}[k]<0\) and \(0\) otherwise. Note in particular that in the specific case of low-complexity receivers and \(B=1\), the decision functions for all receivers simplify to scaled binary-phase-shift-keying (BPSK) receivers, that is: \(\hat{u}_{m,1}[k]=1\) if \(-\bar{w}_{m}r_{m}[k]-\bar{b}_{m}<0\) and \(0\) otherwise.
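The full model in Eqs. (1)-(7) is compactly summarized by the following NumPy sketch (our illustration only; the array shapes and random channel draws are assumptions, not the paper's simulation setup). It makes explicit why the cascade behaves like a feed-forward neural network whose topology is fixed by the channels and whose only trainable parameters are the relay gains and biases:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(s, h, F, G, w, b, sigma):
    """One use of the relay network, Eqs. (1)-(4).
    s: BS symbol (scalar); h[i]: BS -> layer-i gain vector (Eqs. (1)/(3));
    F[i][l]: layer-l -> layer-i gain matrix, l < i (Eq. (3));
    G[i]: (N_i x M) gains from layer i to the M receivers (Eq. (4));
    w[i], b[i]: relay gains and biases of layer i (Eq. (2))."""
    outputs = []
    for i in range(len(h)):
        y = h[i] * s + sigma * rng.standard_normal(h[i].shape)  # noisy input
        for l in range(i):
            y = y + F[i][l] @ outputs[l]                        # earlier layers
        outputs.append(np.tanh(w[i] * y + b[i]))                # Eq. (2)
    r = sum(G[i].T @ outputs[i] for i in range(len(h)))         # Eq. (4)
    return r + sigma * rng.standard_normal(r.shape)

# Toy instance: d = 2 layers of 3 relays each, M = 2 receivers.
d, N, M, sigma = 2, 3, 2, 0.01
h = [rng.standard_normal(N) for _ in range(d)]
F = [[rng.standard_normal((N, N)) for _ in range(i)] for i in range(d)]
G = [rng.standard_normal((N, M)) for _ in range(d)]
w = [np.ones(N) for _ in range(d)]      # trainable relay gains
b = [np.zeros(N) for _ in range(d)]     # trainable relay biases
print(forward(1.0, h, F, G, w, b, sigma))   # received samples r_1, ..., r_M
```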
## III Traditional (linear model) optimization The traditional approach treats the relays as linear amplifiers. To that end, we constrain the relay output power to a low enough level such that the hyperbolic tangent can be reasonably approximated as linear. In mathematical terms, the linear model is obtained by replacing (2) with \[\mathbf{o}_{i}[k]\approx\text{diag}(\mathbf{w}_{i})\mathbf{y}_{i}[k]+\mathbf{b}_{i} \tag{10}\] and adding a constraint \(E\big{[}o_{i,j}^{2}[k]\big{]}\leq P_{\max}\). Using (10) instead of (2), the complete network is linear. Thus, each receiver will receive a scaled version of the transmitted signal plus additive Gaussian noise. In such a scenario, BER minimization is obtained by weighted SNR maximization subject to the power constraint. In the linear model, the bias term consumes output power but has no benefit for the network. Thus, we set \(\mathbf{b}_{i}=0\) for all \(i\). We need to solve the optimization problem: \[\max_{\{\mathbf{w}_{i}\}}\min_{m}\ \zeta_{m}\cdot\text{SNR}_{m} \tag{11}\] \[\text{Subject to:}\] \[E[o_{i,j}^{2}]\leq P_{\max}\quad i=1,\ldots,d,\ j=1,\ldots,N_{i}\] where \[\text{SNR}_{m}=\frac{E\left[\left|E[r_{m}[k]|\mathbf{u}_{m}[k]]\right|^{2}\right]}{\text{Var}(r_{m}[k]|\mathbf{u}_{m}[k])}. \tag{12}\] The SNR weights, \(\zeta_{m}\), are chosen to balance the BER of the different users. The optimization problem in (11) is not convex, and its solution has not been derived so far. The closest solution is the one derived by Phan et al. [12]. In the following, we extend this solution to the problem at hand. This extension includes: i) extending the solution to multiple relay layers by alternating minimization over the layers; ii) extending the solution to a cascade network; and iii) changing the optimization variables to allow a solution. We start by rephrasing the problem as a power minimization problem and constructing the alternating minimization. Let \(\underline{\mathbf{w}}_{v}=\{\mathbf{w}_{1},\ldots,\mathbf{w}_{v-1},\mathbf{w}_{v+1},\ldots,\mathbf{w}_{d}\}\); we define the \(v\)-th layer min-max power for a weighted SNR \(\eta\) as: \[P_{v}(\underline{\mathbf{w}}_{v},\eta)= \min_{\mathbf{w}_{v}} \max_{j}E\big{[}o_{v,j}^{2}[k]\big{]}\] (13) Subject to: \[\zeta_{m}\cdot\text{SNR}_{m}\geq\eta,\quad 1\leq m\leq M.\] Once we solve (13), the solution to (11) can be easily evaluated using Algorithm 1. Thus, the main challenge in solving (11) is to solve (13).
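Algorithm 1 itself is not reproduced in this extraction. Purely as a reading aid, the Python pseudocode below sketches one plausible outer loop consistent with the text: bisect on the weighted-SNR target \(\eta\) and, for each candidate, alternate over the layers, with `min_max_power` standing in abstractly for a solver of (13). This is our reconstruction under those assumptions, not the paper's algorithm:

```python
def max_min_snr(d, P_max, min_max_power, eta_lo=0.0, eta_hi=1e6,
                sweeps=10, tol=1e-4):
    """Plausible outer loop for (11): find the largest eta such that all
    relays meet the weighted-SNR target within the power budget P_max.
    min_max_power(v, w, eta) abstractly solves (13) for layer v given the
    other layers' gains w, returning (min-max power, new gains for layer v);
    initialization of the gains is left to that solver."""
    w = [None] * d                          # one gain vector per layer
    while eta_hi - eta_lo > tol * eta_hi:
        eta = 0.5 * (eta_lo + eta_hi)
        for _ in range(sweeps):             # alternating minimization
            powers = []
            for v in range(d):
                p_v, w[v] = min_max_power(v, w, eta)
                powers.append(p_v)
        if max(powers) <= P_max:            # feasible at this SNR target
            eta_lo = eta                    # try a more ambitious target
        else:
            eta_hi = eta
    return eta_lo, w
```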
We also define \(\tilde{N}_{i}=\sum_{u=1}^{i}N_{u}\), \(\tilde{\mathbf{g}}_{m}=[0,\mathbf{g}_{1,m}^{T},\ldots,\mathbf{g}_{d,m}^{T}]^{T}\), \(\tilde{\mathbf{w}}_{i}=[\mathbf{1}_{1+\tilde{N}_{i-1}}^{T},\ \mathbf{w}_{i}^{T}]^{T}\), \(\tilde{\mathbf{C}}_{i}=[\mathbf{0},\ \mathbf{I}_{N_{i}}]^{T}\), \(\mathbf{F}_{i}=[\mathbf{F}_{i,1},\ldots,\mathbf{F}_{i,i-1}]\), and \[\tilde{\mathbf{F}}_{i}=\begin{bmatrix}1&\mathbf{0}\\ \mathbf{0}&\mathbf{I}_{\tilde{N}_{i-1}}\\ \mathbf{h}_{i}&\mathbf{F}_{i}\end{bmatrix} \tag{14}\] To clarify, note that \(\tilde{\mathbf{F}}_{1}=[1,\mathbf{h}_{1}^{T}]^{T}\). Using this notation, the extended signal at level \(i\) is: \[\tilde{\mathbf{o}}_{i} =\text{diag}(\tilde{\mathbf{w}}_{i})\tilde{\mathbf{F}}_{i}\tilde{\mathbf{o}}_{i-1}+\tilde{\mathbf{C}}_{i}\mathbf{n}_{i}\] \[=\left(\prod_{i^{\prime}=1}^{i}\text{diag}\left(\tilde{\mathbf{w}}_{i^{\prime}}\right)\tilde{\mathbf{F}}_{i^{\prime}}\right)s\] \[\quad+\sum_{u=1}^{i}\left(\prod_{i^{\prime}=u+1}^{i}\text{diag}\left(\tilde{\mathbf{w}}_{i^{\prime}}\right)\tilde{\mathbf{F}}_{i^{\prime}}\right)\text{diag}\left(\tilde{\mathbf{w}}_{u}\right)\tilde{\mathbf{C}}_{u}\mathbf{n}_{u}\] \[=\mathbf{G}_{1,i}s+\sum_{u=1}^{i}\mathbf{G}_{u+1,i}\text{diag}\left(\tilde{\mathbf{w}}_{u}\right)\tilde{\mathbf{C}}_{u}\mathbf{n}_{u} \tag{15}\] where \(\mathbf{G}_{u,i}=\prod_{i^{\prime}=u}^{i}\text{diag}\left(\tilde{\mathbf{w}}_{i^{\prime}}\right)\tilde{\mathbf{F}}_{i^{\prime}}\). The received signal at the \(m\)-th mobile is: \[r_{m}=\tilde{\mathbf{g}}_{m}^{T}\tilde{\mathbf{o}}_{d}+\tilde{n}_{m}. \tag{16}\] The signal-to-noise ratio (SNR) at receiver \(m\) is \[\text{SNR}_{m}=\frac{\left|\tilde{\mathbf{g}}_{m}^{T}\mathbf{G}_{1,d}\right|^{2}\sigma_{s}^{2}/\sigma^{2}}{1+\sum_{u=1}^{d}\left\|\tilde{\mathbf{g}}_{m}^{T}\mathbf{G}_{u+1,d}\text{diag}\left(\tilde{\mathbf{w}}_{u}\right)\tilde{\mathbf{C}}_{u}\right\|^{2}} \tag{17}\] where \(\sigma_{s}^{2}=E[s^{2}]\). Defining also \(\mathbf{e}_{j}\) to be an indicator vector for the \(j\)-th element, the output power of relay \(j\) is: \[p_{j}=E[o_{i,j}^{2}]=\left|\mathbf{e}_{j+1}^{T}\mathbf{G}_{1,d}\right|^{2}\sigma_{s}^{2}+\sigma^{2}\sum_{u=1}^{d}\left\|\mathbf{e}_{j+1}^{T}\mathbf{G}_{u+1,d}\text{diag}\left(\tilde{\mathbf{w}}_{u}\right)\tilde{\mathbf{C}}_{u}\right\|^{2}. \tag{18}\] To allow the optimization with respect to the gains of layer \(v\), we note that for \(u<v<i\) we can write: \[\mathbf{G}_{u,i}=\mathbf{G}_{v+1,i}\text{diag}\left(\tilde{\mathbf{w}}_{v}\right)\tilde{\mathbf{F}}_{v}\mathbf{G}_{u,v-1}. \tag{19}\] This can be conveniently extended to \(u\leq v\leq i\) by defining \(\mathbf{G}_{u,i}=\mathbf{I}\) for any \(u>i\).
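The extended description of Eqs. (14)-(18) is straightforward to assemble numerically. The following NumPy sketch is our own reading of it (the helper names and the toy channel values are assumptions for illustration): it builds the factors \(\text{diag}(\tilde{\mathbf{w}}_{i})\tilde{\mathbf{F}}_{i}\), forms the products \(\mathbf{G}_{u,d}\), and evaluates the SNR of Eq. (17).

```python
import numpy as np

def layer_matrices(h_list, F_list, w_list):
    """Return diag(w~_i) F~_i for each layer i (Eq. (14)).

    h_list[i]: BS -> layer gains (length N_i); F_list[i]: the stacked
    inter-layer matrix [F_{i,1},...,F_{i,i-1}] (N_i x N~_{i-1}, empty for
    i = 1); w_list[i]: the relay gains of the layer.
    """
    mats, n_prev = [], 0
    for h, F, w in zip(h_list, F_list, w_list):
        top = np.eye(1 + n_prev)                   # carries s and earlier outputs
        bottom = np.hstack([h.reshape(-1, 1), F.reshape(len(h), n_prev)])
        F_tilde = np.vstack([top, bottom])         # the block matrix of Eq. (14)
        w_tilde = np.concatenate([np.ones(1 + n_prev), w])
        mats.append(np.diag(w_tilde) @ F_tilde)
        n_prev += len(w)
    return mats

def snr_m(mats, w_list, g_tilde, sigma_s2=1.0, sigma2=0.01):
    """Evaluate Eq. (17) through the products G_{u,d}."""
    d = len(mats)
    G = {d + 1: np.eye(mats[-1].shape[0])}         # G_{u,d} = I for u > d
    for u in range(d, 0, -1):
        G[u] = G[u + 1] @ mats[u - 1]
    signal = (g_tilde @ G[1]).item() ** 2 * sigma_s2 / sigma2
    noise = 1.0
    for u in range(1, d + 1):
        row = g_tilde @ G[u + 1]                   # g~^T G_{u+1,d}
        Nu = len(w_list[u - 1])                    # diag(w~_u) C~_u keeps the
        noise += np.sum((row[-Nu:] * w_list[u - 1])**2)   # last N_u entries
    return signal / noise

# Toy network: d = 2 layers of 2 relays, no direct BS link to layer 2.
h = [np.ones(2), np.zeros(2)]
F = [np.zeros((2, 0)), np.ones((2, 2))]
w = [0.5 * np.ones(2), 0.5 * np.ones(2)]
g = np.array([0.0, 0.0, 0.0, 1.0, 1.0])            # receiver hears layer 2 only
print(snr_m(layer_matrices(h, F, w), w, g))
```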
Thus, the noise term at receiver \(m\) divided by \(\sigma^{2}\) (i.e., the denominator of (17)) can be written as: \[\Xi_{m} =1+\sum_{u=1}^{v-1}\left\|\tilde{\mathbf{g}}_{m}^{T}\mathbf{G}_{v+1,d}\text{diag}\left(\tilde{\mathbf{w}}_{v}\right)\tilde{\mathbf{F}}_{v}\mathbf{G}_{u+1,v-1}\right.\] \[\quad\left.\cdot\text{diag}\left(\tilde{\mathbf{w}}_{u}\right)\tilde{\mathbf{C}}_{u}\right\|^{2}+\left\|\tilde{\mathbf{g}}_{m}^{T}\mathbf{G}_{v+1,d}\text{diag}\left(\tilde{\mathbf{w}}_{v}\right)\tilde{\mathbf{C}}_{v}\right\|^{2}\] \[\quad+\sum_{u=v+1}^{d}\left\|\tilde{\mathbf{g}}_{m}^{T}\mathbf{G}_{u+1,d}\text{diag}\left(\tilde{\mathbf{w}}_{u}\right)\tilde{\mathbf{C}}_{u}\right\|^{2}\] \[=1+\sum_{u=1}^{v-1}\left\|\tilde{\mathbf{w}}_{v}^{T}\text{diag}\left(\mathbf{G}_{v+1,d}^{T}\tilde{\mathbf{g}}_{m}\right)\tilde{\mathbf{F}}_{v}\mathbf{G}_{u+1,v-1}\right.\] \[\quad\left.\cdot\text{diag}\left(\tilde{\mathbf{w}}_{u}\right)\tilde{\mathbf{C}}_{u}\right\|^{2}+\left\|\tilde{\mathbf{w}}_{v}^{T}\text{diag}\left(\mathbf{G}_{v+1,d}^{T}\tilde{\mathbf{g}}_{m}\right)\tilde{\mathbf{C}}_{v}\right\|^{2}\] \[\quad+\sum_{u=v+1}^{d}\left\|\tilde{\mathbf{g}}_{m}^{T}\mathbf{G}_{u+1,d}\text{diag}\left(\tilde{\mathbf{w}}_{u}\right)\tilde{\mathbf{C}}_{u}\right\|^{2}\] \[=1+\sum_{u=1}^{v-1}\left\|\tilde{\mathbf{w}}_{v}^{T}\mathbf{L}_{v,u}\right\|^{2}+\left\|\tilde{\mathbf{w}}_{v}^{T}\mathbf{L}_{v,v}\right\|^{2}+\ell_{v}. \tag{20}\] where \(\ell_{v}\left(\mathbf{g}\right)=\sum_{u=v+1}^{d}\left\|\mathbf{g}^{T}\mathbf{G}_{u+1,d}\text{diag}\left(\tilde{\mathbf{w}}_{u}\right)\tilde{\mathbf{C}}_{u}\right\|^{2}\), \(\mathbf{L}_{v,u}\left(\mathbf{g}\right)=\text{diag}\left(\mathbf{G}_{v+1,d}^{T}\mathbf{g}\right)\tilde{\mathbf{F}}_{v}\mathbf{G}_{u+1,v-1}\text{diag}\left(\tilde{\mathbf{w}}_{u}\right)\tilde{\mathbf{C}}_{u}\) for \(u<v\) and \(\mathbf{L}_{v,v}\left(\mathbf{g}\right)=\text{diag}\left(\mathbf{G}_{v+1,d}^{T}\mathbf{g}\right)\tilde{\mathbf{C}}_{v}\). Defining also \(\tilde{\mathbf{W}}_{v}=\tilde{\mathbf{w}}_{v}\tilde{\mathbf{w}}_{v}^{T}\) and \(\mathbf{L}_{v}\left(\mathbf{g}\right)=\sum_{u=1}^{v}\mathbf{L}_{v,u}\left(\mathbf{g}\right)\mathbf{L}_{v,u}^{T}\left(\mathbf{g}\right)\) we can write: \[\Xi_{m}=1+\text{Tr}\left[\tilde{\mathbf{W}}_{v}^{T}\mathbf{L}_{v}\left(\tilde{\mathbf{g}}_{m}\right)\right]+\ell_{v}\left(\tilde{\mathbf{g}}_{m}\right). \tag{21}\] Similarly, the signal term, divided by \(\sigma^{2}\), can be written as \(\text{Tr}\left[\tilde{\mathbf{W}}_{v}^{T}\mathbf{Q}_{v}\left(\tilde{\mathbf{g}}_{m}\right)\right]\) where \[\mathbf{Q}_{v}\left(\mathbf{g}\right)=\frac{\sigma_{s}^{2}}{\sigma^{2}}\text{diag}\left(\mathbf{G}_{v+1,d}^{T}\mathbf{g}\right)\tilde{\mathbf{F}}_{v}\mathbf{G}_{1,v-1}\mathbf{G}_{1,v-1}^{T}\tilde{\mathbf{F}}_{v}^{T}\text{diag}\left(\mathbf{G}_{v+1,d}^{T}\mathbf{g}\right). \tag{22}\] Thus, the weighted SNR constraint of (13) is written as \[\text{Tr}\left[\tilde{\mathbf{W}}_{v}^{T}\mathbf{B}_{v,m}\right]\geq 1+\ell_{v}\left(\tilde{\mathbf{g}}_{m}\right) \tag{23}\] where \[\mathbf{B}_{v,m}=\frac{\zeta_{m}\mathbf{Q}_{v}\left(\tilde{\mathbf{g}}_{m}\right)}{\eta}-\mathbf{L}_{v}\left(\tilde{\mathbf{g}}_{m}\right). \tag{24}\] Using the same terminology, we can write (18) as: \[p_{j}=\sigma^{2}\cdot\text{Tr}\left[\tilde{\mathbf{W}}_{v}^{T}\mathbf{A}_{v,j}\right]+\sigma^{2}\ell_{v}\left(\mathbf{e}_{j+1}\right) \tag{25}\] where \[\mathbf{A}_{v,j}=\mathbf{Q}_{v}\left(\mathbf{e}_{j+1}\right)+\mathbf{L}_{v}\left(\mathbf{e}_{j+1}\right).
\tag{26}\] To further simplify the optimization, we perform the optimization with respect to \(\tilde{\mathbf{W}}_{v}\) instead of \(\mathbf{w}_{v}\). Thus, we need to add the constraint that \(\tilde{\mathbf{W}}_{v}\) is rank \(1\) and also \((\tilde{\mathbf{W}}_{v})_{j,j}=1\) for any \(1\leq j\leq 1+\tilde{N}_{v-1}\). Hence, (13) can be rewritten as \[P_{v}(\underline{\mathbf{w}}_{v},\eta)=\sigma^{2}\cdot\underset{\tilde{\mathbf{W}}_{v}}{\min}\ \max_{j}\text{Tr}\left[\tilde{\mathbf{W}}_{v}^{T}\mathbf{A}_{v,j}\right]+\ell_{v}\left(\mathbf{e}_{j+1}\right)\] (27) Subject to: \[\text{Tr}\left[\tilde{\mathbf{W}}_{v}^{T}\mathbf{B}_{v,m}\right]\geq 1+\ell_{v}\left(\tilde{\mathbf{g}}_{m}\right),\quad 1\leq m\leq M\] \[\tilde{\mathbf{W}}_{v}\geq 0,\ \text{rank}(\tilde{\mathbf{W}}_{v})=1,\ (\tilde{\mathbf{W}}_{v})_{j,j}=1,\] \[1\leq j\leq 1+\tilde{N}_{v-1}.\] Following [12], we replace the rank-\(1\) constraint by \(\text{Tr}[\tilde{\mathbf{W}}_{v}]-\lambda_{\max}(\tilde{\mathbf{W}}_{v})\leq 0\), and further simplify the problem by moving the constraint to the utility function, resulting in: \[P_{v}(\underline{\mathbf{w}}_{v},\eta)=\sigma^{2}\cdot\underset{\tilde{\mathbf{W}}_{v}}{\min}\ \max_{j}\text{Tr}\left[\tilde{\mathbf{W}}_{v}^{T}\mathbf{A}_{v,j}\right]+\ell_{v}\left(\mathbf{e}_{j+1}\right)\] \[+\mu\left(\text{Tr}[\tilde{\mathbf{W}}_{v}]-\lambda_{\max}(\tilde{\mathbf{W}}_{v})\right)\] (28) Subject to: \[\text{Tr}\left[\tilde{\mathbf{W}}_{v}^{T}\mathbf{B}_{v,m}\right]\geq 1+\ell_{v}\left(\tilde{\mathbf{g}}_{m}\right),\quad 1\leq m\leq M\] \[\tilde{\mathbf{W}}_{v}\geq 0,\ (\tilde{\mathbf{W}}_{v})_{j,j}=1,\quad 1\leq j\leq 1+\tilde{N}_{v-1}.\] Theorem 1 in [12] guarantees that for large enough \(\mu\), Problems (27) and (28) are equivalent. As (28) is still not convex, we follow [12] again and solve iteratively using \(\mathbf{w}_{\max}^{(u)}\mathbf{w}_{\max}^{(u)H}\) as a sub-gradient of \(\lambda_{\max}(\tilde{\mathbf{W}}_{v})\), where \(\mathbf{w}_{\max}^{(u)}\) is the unit-norm vector that corresponds to the maximal eigenvalue of \(\tilde{\mathbf{W}}_{v}\) in iteration \(u\). Thus, the resulting optimization problem for iteration \(u+1\) becomes: \[\begin{split}&\tilde{\mathbf{W}}_{v}^{(u+1)}=\operatorname*{arg\,min}_{\tilde{\mathbf{W}}_{v}}\,\max_{j}\,\text{Tr}\big[\tilde{\mathbf{W}}_{v}^{T}\mathbf{A}_{v,j}\big]+\ell_{v}\left(\mathbf{e}_{j+1}\right)\\ &\quad+\mu\big(\text{Tr}[\tilde{\mathbf{W}}_{v}]-\text{Tr}(\mathbf{w}_{\max}^{(u)}\mathbf{w}_{\max}^{(u)H}(\tilde{\mathbf{W}}_{v}-\tilde{\mathbf{W}}_{v}^{(u)}))\big)\end{split} \tag{29}\] Subject to: \[\begin{split}&\text{Tr}\left[\tilde{\mathbf{W}}_{v}^{T}\mathbf{B}_{v,m}\right]\geq 1+\ell_{v}\left(\tilde{\mathbf{g}}_{m}\right),\quad 1\leq m\leq M\\ &\tilde{\mathbf{W}}_{v}\geq 0,\ (\tilde{\mathbf{W}}_{v})_{j,j}=1,\quad 1\leq j\leq 1+\tilde{N}_{v-1}.\end{split}\] and the solution of (13) can be obtained by Algorithm 2. ``` 1:Set \(\tilde{\mathbf{W}}_{v}^{(0)}\) based on the last known \(\mathbf{w}_{v}\). 2:Set \(\mu=0.25\). 3:repeat 4: Set \(\mu=2\mu\), \(u=0\). 5:repeat 6: Set \(\mathbf{w}_{\max}^{(u)}\) as a unit-norm vector corresponding to the maximal eigenvalue of \(\tilde{\mathbf{W}}_{v}^{(u)}\). 7: Solve (29). 8: Set \(u=u+1\). 9:until Convergence 10: Set \(\tilde{\mathbf{W}}_{v}=\tilde{\mathbf{W}}_{v}^{(0)}=\tilde{\mathbf{W}}_{v}^{(u)}\). 11:until \(\text{rank}(\tilde{\mathbf{W}}_{v})=1\). ``` **Algorithm 2** Evaluating \(P_{v}(\underline{\mathbf{w}}_{v},\eta)\)
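A single sub-gradient step of (29) is a semidefinite program, so it maps naturally onto a convex-modeling tool. The following CVXPY sketch is our own reading of the iteration — not the authors' implementation — and assumes the constant matrices \(\mathbf{A}_{v,j}\), \(\mathbf{B}_{v,m}\) (Eqs. (24),(26)) and the scalars \(\ell_{v}(\mathbf{e}_{j+1})\), \(\ell_{v}(\tilde{\mathbf{g}}_{m})\) have been precomputed.

```python
import numpy as np
import cvxpy as cp

def sdp_iteration(A_list, B_list, ell_e, ell_g, W_prev, mu, n_fixed):
    """One solve of problem (29) for layer v (a sketch; names are ours).

    A_list/ell_e: per-relay power data; B_list/ell_g: per-user SNR data;
    W_prev: previous iterate; n_fixed: the 1 + N~_{v-1} unit diagonal entries.
    Requires an SDP-capable solver (e.g., SCS) to be installed.
    """
    n = A_list[0].shape[0]
    W = cp.Variable((n, n), PSD=True)

    # Sub-gradient of lambda_max at the previous iterate.
    _, vecs = np.linalg.eigh(W_prev)
    w_max = vecs[:, -1:]                              # unit top eigenvector

    powers = cp.hstack([cp.trace(W @ A) + le for A, le in zip(A_list, ell_e)])
    obj = (cp.max(powers)
           + mu * (cp.trace(W) - cp.trace((w_max @ w_max.T) @ (W - W_prev))))

    cons = [cp.trace(W @ B) >= 1 + lg for B, lg in zip(B_list, ell_g)]
    cons += [W[j, j] == 1 for j in range(n_fixed)]    # pass-through gains are 1

    cp.Problem(cp.Minimize(obj), cons).solve()
    return W.value
```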
## IV Deep-learning optimization ### _Optimization approach_ In our novel approach, we use deep-learning training for the optimization of the network. This training allows the relays to use their non-linear regime, as long as the total network performance increases. We should note that there is a conceptual difference between the training of the relay network and that of neural networks. In the relay case, the network input contains very little information about the desired functionality. For example, in Table I, the network input has only \(4\) possible values. As the inputs propagate through the network they accumulate noise, which is the main adversary in this scenario. Thus, information about the network is obtained by transmitting each of the few possible inputs many times over the network, thereby experiencing many different noise realizations and many different outputs. ### _Training_ The network training is based on transmitting a batch of symbols over the network and observing the input and output of each relay and the signals received at the receivers. This resembles the training of a neural network that relies on labeled data to learn its mapping. It is important to note that (unlike in the previous section) the training is done for a specific modulation. Thus, the system input is the unmodulated data bits vector \(\mathbf{u}\), and the modulation operation is considered part of the system. The system output is the bit estimates \(\hat{u}_{m,b}\) given in (5), and the performance measure is the BER given in (6). However, for training, we take the processed outputs before the final decisions, i.e., \(\bar{q}_{m,b}[k]\) defined in (9). We denote the network operation as a function of its input and trainable parameters: \[\bar{\mathbf{q}}[k]=\mathfrak{f}(\mathbf{u}[k];\boldsymbol{\varphi}) \tag{30}\] where \(\bar{\mathbf{q}}[k]=[\bar{q}_{1,1}[k],\ldots,\bar{q}_{1,B}[k],\bar{q}_{2,1}[k],\ldots,\bar{q}_{M,B}[k]]^{T}\) and the trainable parameters are collected into \(\boldsymbol{\varphi}=[\mathbf{w}_{1}^{T},\ldots,\mathbf{w}_{d}^{T},\bar{\mathbf{w}}^{T},\mathbf{b}_{1}^{T},\ldots,\mathbf{b}_{d}^{T},\bar{\mathbf{b}}^{T}]^{T}\) with \(\bar{\mathbf{w}}=[\bar{w}_{1},\ldots,\bar{w}_{M}]^{T}\) and \(\bar{\mathbf{b}}=[\bar{b}_{1},\ldots,\bar{b}_{M}]^{T}\). Note that (unlike most neural networks) the function \(\mathfrak{f}()\) is a random function due to the effect of the noise (see (1) and (3)). The functionality of \(\mathfrak{f}()\) is determined by the network topology as defined by the channel gains \(\mathbf{h}_{i}\), \(\mathbf{F}_{i,\ell}\) and \(\mathbf{g}_{i,m}\), \(i=1,\ldots,d\), \(\ell=1,\ldots,d-1\) and \(m=1,\ldots,M\). The network optimization can be performed in a distributed online manner or in a centralized batch manner. In this work, we focus on the centralized version and leave the distributed version for future research. Yet, it is important to note that the distributed version has two important advantages: i. the backpropagation algorithm has a natural and efficient distributed implementation, and ii. online training does not require explicit channel estimations. The centralized network training is based on the digital twin paradigm, where the central processor optimizes a simulated version of the network, and then the optimized parameters are fed into the actual network.
Thus, the centralized optimization requires a central processor that knows all channel gains, but does not require additional pilot transmissions except for those needed for the channel estimation. We collect the data of \(K=600\) (simulated) symbols into a single batch. The training is performed by minimizing a loss function, where we first apply a sigmoid to each output, \[\bar{u}_{m,b}[k]=\sigma(\beta\bar{q}_{m,b}[k])=\frac{1}{1+e^{-\beta\cdot\bar{q}_{m,b}[k]}} \tag{31}\] with \(\beta=5\), and then the binary cross entropy loss: \[\mathcal{L}_{m}(\mathbf{\varphi})=\frac{1}{BK}\sum_{b=1}^{B}\sum_{k=1}^{K}-u_{m,b}[k]\log_{2}(\bar{u}_{m,b}[k])-(1-u_{m,b}[k])\log_{2}(1-\bar{u}_{m,b}[k]). \tag{32}\] To combine the BER of the different users, we use the Boltzmann softmax operator: \[\mathcal{L}(\mathbf{\varphi})=\sum_{m}\frac{\mathcal{L}_{m}(\mathbf{\varphi})e^{\alpha\mathcal{L}_{m}(\mathbf{\varphi})}}{\sum_{m^{\prime}}e^{\alpha\mathcal{L}_{m^{\prime}}(\mathbf{\varphi})}} \tag{33}\] with \(\alpha=5\). The training module follows the backpropagation algorithm, using gradient-based minimization of the loss with iterations of the form \[\mathbf{\varphi}^{(t+1)}=\mathbf{\varphi}^{(t)}-\eta\nabla_{\mathbf{\varphi}}\mathcal{L}(\mathbf{\varphi}^{(t)}). \tag{34}\] Note that all the training is performed over a single batch of data. Thus, during the training the noises (\(\mathbf{n}_{i}[k]\) and \(\tilde{n}_{m}[k]\)) are fixed, and hence the loss is deterministic. The step size \(\eta\) is updated using the ADAM optimizer [24]. The training optimizes the performance at the specific network setting. To obtain network gains that give good performance over a large range of SNRs, we train the network with an adaptive noise variance. Specifically, we start with a very low variance, and increase the variance by a factor of \(1.5\) whenever the BER goes below \(5\%\). ### _Testing and validation_ We do not need to set aside data for testing and validation. These operations are always performed using new random noises (as we can always generate more data). ### _Initialization_ The most direct approach to initialize the training is by using the result of the linear optimization of Section III. That is, we use the optimal gains obtained from (11) and initialize all biases to \(\mathbf{b}_{i}=\mathbf{0}\). But the solution of (11) requires a sequential solution of many convex problems and hence is quite complicated for large networks. Instead, we present here a simpler initialization which showed good performance in our numerical study. The main idea of this initialization is to keep all relays close to their linear regime, so that no relay starts saturated. We start with layer \(i=1\) and then move forward. For relay \(n\) of layer \(i\) we calculate \[p_{i,n}=E[y_{i,n}^{2}].\] (Recall that \(p_{i,n}\) depends only on the gains and biases of previous layers.) Then we draw a random sign \(s_{i,n}\in\{-1,1\}\) and a random amplitude \(a_{i,n}\), uniformly distributed over \([0.5,1]\), and set: \[w_{i,n}=\frac{a_{i,n}s_{i,n}}{p_{i,n}},\quad b_{i,n}=0. \tag{35}\] Going over all layers from \(i=1\) to \(i=d\), we establish a starting point with good dynamic behavior.
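To make the training loop concrete, here is a minimal PyTorch sketch of the digital-twin optimization for a two-layer network. It is our own illustration, not the authors' code: the Table I bit-to-symbol mapping is replaced by an assumed superposed-PAM map, the adaptive noise-variance schedule is omitted, and the sigmoid sign is chosen to match the decision rule \(\hat{u}=1\) iff \(\bar{q}<0\).

```python
import torch

torch.manual_seed(0)
d, N, M, B, K = 2, 4, 2, 1, 600
h1 = torch.randn(N)                       # BS -> layer-1 gains (toy values)
F21 = torch.randn(N, N)                   # layer-1 -> layer-2 gains
g = torch.randn(M, N)                     # layer-2 -> receiver gains
sigma = 0.1

w = [torch.nn.Parameter(0.5 * torch.ones(N)) for _ in range(d)]
b = [torch.nn.Parameter(torch.zeros(N)) for _ in range(d)]
w_rx = torch.nn.Parameter(torch.ones(M))
b_rx = torch.nn.Parameter(torch.zeros(M))
opt = torch.optim.Adam(w + b + [w_rx, b_rx], lr=0.01)

u = torch.randint(0, 2, (K, M)).float()               # one fixed data batch
s = ((1 - 2 * u) * torch.tensor([1.0, 2.0])).sum(1) / 3.0   # assumed PAM map
n1 = sigma * torch.randn(K, N)                        # fixed noise draws make
n2 = sigma * torch.randn(K, N)                        # the loss deterministic
n_rx = sigma * torch.randn(K, M)

for step in range(2000):
    y1 = s[:, None] * h1 + n1                         # layer-1 input
    o1 = torch.tanh(w[0] * y1 + b[0])                 # non-linear relays
    y2 = o1 @ F21.T + n2                              # Eq. (3), no direct BS link
    o2 = torch.tanh(w[1] * y2 + b[1])
    r = o2 @ g.T + n_rx                               # Eq. (4)
    q = -(2**(M * B) - 1) / 2**(M * B) * (w_rx * r + b_rx)  # low-complexity, B = 1
    u_soft = torch.sigmoid(-5.0 * q)                  # beta = 5; sign matches u-hat
    bce = -(u * torch.log2(u_soft + 1e-12)
            + (1 - u) * torch.log2(1 - u_soft + 1e-12)).mean(0)  # Eq. (32) per user
    wts = torch.exp(5.0 * bce)                        # Boltzmann softmax, Eq. (33)
    loss = (bce * wts).sum() / wts.sum()
    opt.zero_grad(); loss.backward(); opt.step()      # ADAM step, cf. Eq. (34)
```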
## V Numerical results ### _Simple networks_ In this section, we demonstrate the advantages of the use of many relays, and in particular when using our novel optimization approach. We start by studying simple networks with one and two layers. Fig. 3: A relay network with 4 relays in a single layer. As a first example, we consider the network depicted in Fig. 3, with \(N=4\) relays and \(M=2\) outputs, where each receiver sees a different pair of relays. In this network, we set all physical channel gains to \(1\), that is \(\mathbf{h}_{1}=[1,1,1,1]^{T}\), \(\mathbf{g}_{1,1}=[1,1,0,0]^{T}\) and \(\mathbf{g}_{1,2}=[0,0,1,1]^{T}\). The resulting BER is depicted in Fig. 4 as a function of \(1/\sigma^{2}\). Fig. 4 shows the BER of the worst user (out of 2) in three scenarios. The red curve (with \(x\)-marks) depicts the performance with the novel optimization scheme of Section III, using the traditional linearization approach (i.e., where the optimization treats the relays as linear amplifiers with a power constraint, and we set \(P_{\max}=0.64\)). The blue and green curves show the results of deep-learning relay optimization (DR). Fig. 4: BER vs. \(1/\sigma^{2}\) for the relay network of Fig. 3. The figure compares traditional (linear) optimization and deep-learning optimization (DR). Fig. 5: Network transfer function for user 2: value received by user 2 as a function of the network input in the absence of noise for two optimization approaches and different receivers. The blue curve (with circles) shows the performance with standard receivers, i.e., where user \(2\) employs a 4-PAM receiver according to Table I. The green curve (with diamonds) shows the performance with low-complexity receivers, i.e., where both users employ a BPSK receiver. The figure shows that the deep-learning optimization approach outperforms traditional optimization by 1dB. It is important to highlight the dual significance of this achievement. Firstly, it demonstrates the superiority of non-linear optimization over traditional linear-based optimization techniques. Secondly, it underscores the ability of our approach to deliver enhanced performance while accommodating low-complexity receivers, even in such a shallow network. Specifically, our network enables User 2 to receive a simple BPSK modulation while effectively mitigating the impact of User 1's data on User 2's reception. To gain deeper insights into the advantages of non-linear optimization, Fig. 5 depicts the transfer function of User 2 in each of the optimized networks. The transfer function is the relationship between the value measured at the receiver of User 2 and the input value at the transmitter in the absence of noise1. The markers in the figure correspond to the transmitted values (on the \(x\)-axis) and the desired bit values of User 2, where \(+1\) represents a 0-bit and \(-1\) represents a 1-bit. Footnote 1: It is important to acknowledge that the transfer function does not provide a complete description of the network, as the introduction of noise occurs at six different points within the network. This observation holds particularly true when the relays operate in their non-linear regime, where the influence of noise cannot be approximated by a single noise gain. The figure shows that the traditional optimization indeed results in a nearly linear transfer function. Recall that the output processing function for this user, \(f^{(1)}(-3\bar{r}_{2}[k]/4)\), maps values between \(-2/3\) and \(2/3\) to negative outputs, and hence (with small enough noise) the receiver can detect the transmitted bit. Fig. 6: A relay network with 2 layers, each with 5 relays.
With the same receiver types, the DR optimization achieves its improvement by merging the two middle modulation points. As a result, both \(-1/3\) and \(+1/3\) are mapped to values close to zero, while the \(-1\) and \(+1\) points are mapped to values close to \(-1\) and \(+1\). This particular mapping improves the detection of the information of User 2, by distinguishing whether the transmitted value originated from one of the intermediate points or from one of the extreme points. Fig. 8: Network transfer function with low-complexity receivers: the value received by each user using the deep-learning relay (DR) optimization with low-complexity receivers. Fig. 7: BER vs. \(1/\sigma^{2}\) for the relay network of Fig. 6. The figure compares traditional (linear) optimization and deep-learning relay (DR) optimization. For the utilization of a low-complexity receiver, it is imperative for User 2's data to be effectively separated from the data of User 1. The bottom subplot in Fig. 5 demonstrates that this separation is indeed accomplished. We should emphasize that once such a separation is achieved, the amplification of the resulting antipodal signal becomes significantly more effective. Consequently, networks incorporating more layers with DR optimization are anticipated to achieve substantially greater gains compared to those relying on traditional optimization methods. We proceed to analyze a (slightly) deeper network comprising two layers and three receivers, as depicted in Fig. 6. For this setup, we use: \(\mathbf{h}_{1}=[1,1,1,1,1]^{T}\), \(\mathbf{h}_{2}=\mathbf{0}\), \(\mathbf{g}_{2,1}=[4,-1,0,0,1]^{T}\), \(\mathbf{g}_{2,2}=[0,1,4,-1,0]^{T}\), \(\mathbf{g}_{2,3}=[-1,0,0,1,4]^{T}\), \(\mathbf{g}_{1,1}=\mathbf{g}_{1,2}=\mathbf{g}_{1,3}=\mathbf{0}\), and: \[\mathbf{F}_{2,1}=\begin{bmatrix}1&-0.5&-1&-0.5&1\\ -0.5&1&-0.5&1&-0.5\\ -1&-0.5&1&-0.5&-1\\ -0.5&1&-0.5&1&-0.5\\ 1&-0.5&-1&-0.5&1\end{bmatrix}. \tag{36}\] The resulting worst-case bit error rate (BER) across the three users is depicted in Fig. 7. Notably, our innovative DR optimization yields a significant gain of approximately 3dB over the traditional method when employing standard receivers. This gain is noteworthy, as it enables us to reduce all transmission powers by a factor of two at the sole cost of a more intelligent optimization technique. Fig. 9: BER vs. cell edge SNR for networks with uniformly distributed relays. For reference, the figure also shows the BER curve with no relays in the network. In this scenario, achieving effective data separation for three users poses a more intricate challenge. Consequently, the network that utilizes low-complexity receivers exhibits slightly inferior performance compared to the one employing standard receivers. Nevertheless, even with low-complexity receivers, DR optimization achieves a gain of around 1dB over traditional optimization (where the latter utilizes standard PAM receivers). Fig. 8 demonstrates the successful data separation for all three users in the low-complexity DR network. The figure depicts the transfer functions of the three receivers and shows that the network indeed implemented the required receiver structure for each user (the markers denote the desired output sign for each receiver). The apparent asymmetry observed in the curves in Fig. 8 may raise concerns regarding the effectiveness of the network. However, we can abate such concerns by recalling that the receivers only take the sign of the received signals, and that all zero crossings in the curves of Fig. 8 are in close proximity to their optimal positions.
Furthermore, we recall that the BER curves in Fig. 7 exhibit excellent performance, clearly indicating the efficiency of the network. Consequently, we conclude that the observed asymmetry stems from the necessity to implement three distinct functions within the same 10-relay network. In this context, it is important to remember that the relay network has lower computational capability than a neural network of the same size. This is because most of the connections in the relay network are determined by the physical conditions and are not trainable. For example, in the network of Fig. 6 there are \(39\) connections but only \(20\) trainable parameters (a gain and a bias for each of the \(10\) relays). ### _Spatial distribution of relays_ We now turn to a more practical model in which the relays are distributed across a two-dimensional plane. The network consists of a single-antenna base station (BS) transmitter, two single-antenna receivers, and 100 relays equipped with directional antennas, as illustrated in Fig. 1. The network represents a sector spanning \(60^{\circ}\) of a cell with a radius of 100 meters. The receivers are positioned at the cell edge. To ensure the absence of loops within the network, we assume that the relays use ideal directional antennas. (In Fig. 1, the sectors in each relay illustrate the directional beams of their respective antennas.) We adopt the cell edge SNR as the reference SNR for this model. Using our previous normalization of the transmission power to \(1\), the cell edge SNR is given by \(10^{-8}/\sigma^{2}\). The resulting BER curves (of the worst of the 2 users) are depicted in Fig. 9 for a network of 10 relays and a network of 100 relays. For reference, the figure also shows the BER curve in a network with no relays. The figure clearly shows the gain from the use of many relays. Considering, for example, a BER of \(0.01\), we see that with traditional optimization, the network with \(10\) relays gained \(8\)dB while the network with \(100\) relays gained \(21\)dB over the reference network (without relays). Recalling that the deployment and utilization are fairly easy, such gains show the potential for a huge boost in communication performance. More surprisingly, DR optimization shows a huge gain over traditional optimization in large networks. In the network with \(100\) relays, the use of DR optimization gained \(19\)dB over the best-known traditional optimization. This huge gain comes only at the price of recognizing the non-linearity in the relays and using the appropriate deep-learning tools. Thus, we conclude that the DR approach has a huge potential for boosting communication performance in large networks.
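As a concrete illustration of the setup, the sketch below (our own code, not the authors') draws one random realization of such a network: relays uniform over a \(60^{\circ}\) sector of radius 100 m, with each link gain \(r^{-4}\cdot v\) as in the channel model of Section II. The layering induced by the ideal directional antennas is not modeled here.

```python
import numpy as np

rng = np.random.default_rng(1)
n_relays, radius, alpha = 100, 100.0, 4.0

# Uniform positions in the sector (area-uniform radius, uniform angle).
r = radius * np.sqrt(rng.uniform(size=n_relays))
theta = rng.uniform(-np.pi / 6, np.pi / 6, size=n_relays)
relays = np.stack([r * np.cos(theta), r * np.sin(theta)], axis=1)

bs = np.array([0.0, 0.0])
users = np.array([[radius * np.cos(0.3), radius * np.sin(0.3)],
                  [radius * np.cos(-0.3), radius * np.sin(-0.3)]])  # cell edge

def link_gain(a, b):
    """Gain r^{-alpha} * v with v ~ N(0, 1), per the Section II model."""
    d = np.linalg.norm(a - b)
    return d**(-alpha) * rng.standard_normal()

h = np.array([link_gain(bs, p) for p in relays])            # BS -> relay gains
g = np.array([[link_gain(p, u) for p in relays] for u in users])
print(h[:3], g.shape)   # cell-edge reference SNR: radius**-alpha / sigma**2
```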
Note that Fig. 9 only shows one realization of the random networks. To better reflect the statistical nature of these networks, Fig. 10 shows the median of the BER over \(10\) realizations of each network, as a function of the number of relays in the network. Fig. 10: BER vs. the number of relays for networks with uniformly distributed relays at a cell edge SNR of \(-22\)dB. The lines show the median BER over 10 random networks. The error bars show the \(25\)-th and \(75\)-th percentiles. For reference, the figure also shows the median BER with traditional relay optimization at a cell edge SNR of \(-10\)dB. All BERs in this figure (except for the reference curve) are evaluated when the cell edge SNR is \(-22\)dB. The error bars in the figure also show the BER of the first and third quartiles. The figure shows that the gain of DR over traditional optimization is negligible for the \(10\)-relay network, but grows very fast with the number of relays. To better understand this gain, the figure also shows the median BER with traditional optimization, but at a higher SNR of \(-10\)dB. Thus, we can see that with \(50\) relays, the median gain of DR over traditional optimization is close to \(12\)dB. This confirms that the huge gains of Fig. 9 are not unique to the specific realization. Moreover, with \(100\) relays, the vast majority of network realizations achieved a gain of more than \(12\) dB over traditional optimization. ## VI Conclusions We presented a novel approach for the optimization of relay networks. Unlike the traditional approach that approximates relays as linear amplifiers, our novel approach takes into account the true non-linear nature of the relays. Using the similarity between the transfer function of a relay and the transfer function of a neuron, we employ deep-learning methodology to better optimize the network. Numerical study shows huge gains compared to traditional optimization. This paper focused on the optimization of cascade relay networks. We first solved the optimization in the traditional approach, i.e., treating relays as linear amplifiers. In this approach, we formulated the min-max BER optimization problem and presented a novel algorithm with guaranteed convergence, at least to a local maximum. Then, we introduced the novel DR optimization approach for the optimization of relay networks. Departing from the 'linear amplifiers' paradigm and leveraging deep-learning techniques, we introduced a completely new approach for relay network optimization. Numerical studies demonstrated significant performance gains compared to traditional optimization methods. For large networks, these gains were shown to often exceed \(10\)dB. Moreover, we explored the capability of non-linear relay networks to implement versatile functions, paving the way for the implementation of new functionalities over the network. As an example, we demonstrated the non-linear separation of data for different users, which reduced the receiver complexity and improved the data delivery. As a pioneering work on deep relay optimization, our primary objective was to unveil the potential of this approach and reveal its fundamental characteristics. Hence, this work made simplifying assumptions that allowed us to better focus on the core issues of this network. Future research is required to address practical concerns such as proper learning with complex signals and gains, imperfect directional antennas and gain loops. Also, additional research is required to determine the effect of the network structure on learning flexibility (recalling that, unlike neural networks, here the network structure is determined by the physical channel gains between relays).
2303.08739
Detecting Nontrilocal Correlations In Triangle Networks
Correlations in quantum networks with independent sources exhibit a completely novel form of nonclassicality in the sense that the nonlocality of such correlations can be demonstrated in fixed local input scenarios. Before the pioneering work by M.O.Renou, et al., in [1], the nonlocal feature of such network correlations was directly attributable to standard Bell nonlocality. In [1], the authors provided some of the first examples of triangle network correlations, whose nonlocality cannot be deduced from Bell-CHSH nonlocality. To date, a complete characterization of such scenarios is yet to be provided. The present work characterizes correlations arising due to fixed local measurements in a triangle network under a source independence assumption. Precisely speaking, a set of criteria is framed in the form of Bell-type inequalities, each of which is necessarily satisfied by trilocal correlations. Possible quantum violation of at least one criterion from the set is analyzed, which in turn points out the utility of the set of criteria to detect nonlocality (nontrilocality) in quantum triangle networks. Interestingly, measurement on a local product state basis turns out to be sufficient to generate nontrilocal correlations in some quantum networks. Noise tolerance of the detection criteria is discussed, followed by a generalization of the framework for demonstrating correlations in any n-sided polygon where n is finite.
Kaushiki Mukherjee
2023-03-15T16:25:32Z
http://arxiv.org/abs/2303.08739v1
# Detecting Nontrilocal Correlations In Triangle Networks ###### Abstract Correlations in quantum networks with independent sources exhibit a completely novel form of nonclassicality in the sense that the nonlocality of such correlations can be demonstrated in fixed local input scenarios. Before the pioneering work by M.O.Renou, _et al._, in [1], the nonlocal feature of such network correlations was directly attributable to standard Bell nonlocality. In [1], the authors provided some of the first examples of triangle network correlations, whose nonlocality cannot be deduced from Bell-CHSH nonlocality. To date, a complete characterization of such scenarios is yet to be provided. The present work characterizes correlations arising due to fixed local measurements in a triangle network under a source independence assumption. Precisely speaking, a set of criteria is framed in the form of Bell-type inequalities, each of which is necessarily satisfied by trilocal correlations. Possible quantum violation of at least one criterion from the set is analyzed, which in turn points out the utility of the set of criteria to detect nonlocality (nontrilocality) in quantum triangle networks. Interestingly, measurement on a local product state basis turns out to be sufficient to generate nontrilocal correlations in some quantum networks. Noise tolerance of the detection criteria is discussed, followed by a generalization of the framework for demonstrating correlations in any \(n\)-sided polygon where \(n\) is finite. ## I Introduction Violation of Bell's inequalities is considered a cornerstone in the study of quantum foundations[2; 3]. It points out the impossibility of interpreting quantum predictions in terms of any physical model dependent only on local variables. Violations of such correlator-based inequalities are referred to as Bell nonlocality, and the corresponding experimental set-up is commonly known as the standard Bell experiment[4]. A standard Bell experiment involves a single source that distributes particles to two or more distant observers. In case the source distributes an entangled quantum state and the parties perform suitable local measurements, the resulting correlations may be Bell nonlocal in nature. Over the years, extensive research activities have revealed multifaceted features of Bell nonlocality. However, in the last decade, the study of nonlocality has witnessed remarkable advancement beyond the paradigm of the Bell scenario. Such a trend of study is motivated by the need to manifest the behavior of quantum correlations associated with network topologies compatible with different practical tasks[5; 6; 7; 8]. Apart from the presence of multiple distant parties, the involvement of multiple independent sources is an inherent feature of such network scenarios[10; 11; 12]. In a network scenario, each source typically connects to a proper subset of parties, in contrast to the standard Bell scenario. Independence of sources in quantum networks adds new physical insights in analyzing the non-classicality of network correlations[1; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25]. For instance, consider the simplest network of three parties, Alice (\(A\)), Bob (\(B\)) and Charlie (\(C\)), and two independent sources \(\mathcal{S}_{1},\mathcal{S}_{2}\). \(\mathcal{S}_{1}\) and \(\mathcal{S}_{2}\) distribute particles among the pairs (A,B) and (B,C) respectively (see Fig.1, for \(n\)=2). This network structure is referred to as the _Bilocal Network_[10; 11].
In contrast to the standard tripartite Bell experiment, the three parties initially do not share any common past. Bob performs a joint measurement, in particular the Bell state measurement (BSM[11]), on the two particles (received from \(\mathcal{S}_{1}\) and \(\mathcal{S}_{2}\)), whereas Alice and Charlie each perform local measurements. Under suitable measurement choices, nonbilocal tripartite correlations may be shared at the end of the experiment. Over time, the bilocal network has been generalized on the basis of various factors, such as an increasing number of parties and sources[14; 18], the pattern of arranging independent sources and parties[13], etc. Quantum network structures with independent sources are compatible with repeater networks[5; 33], where entanglement gets distributed via the entanglement swapping procedure among initially uncorrelated nodes. In general, under the source independence constraint, the set of network local correlations being non-convex[11], framing Bell-type inequalities becomes a challenging task. Over the years, different Bell-type network inequalities showing quantum violation have been established (see [26] for a review). In an \(n\)-local network (\(n\) denoting the number of sources), quantum correlations can exhibit nonlocality (in the sense of non \(n\)-locality) when some or all parties perform single measurements, i.e., they have fixed inputs in the sense that they do not choose from a collection of measurements[1; 11]. This constitutes another striking difference from a standard Bell experiment, where each of the parties must choose an input randomly and independently from a collection of two or more inputs[4]. Such a choice of inputs forms one of the key factors required for the Bell nonlocal behavior of the corresponding correlations. The source independence assumption, referred to as the _n-local_ constraint, thus reduces the requirements for demonstrating non-classicality in quantum networks[1; 10; 11]. Non \(n\)-locality even in the absence of randomness in input selection points out a genuine form of nonlocality typical of network scenarios. In [1], such a notion of nonlocality was referred to as _nonlocality without inputs_. However, before the publication of [1], all instances of network nonlocality in fixed input measurement scenarios relied on Bell nonlocality, in the sense that any violation of an \(n\)-local inequality could be traced back to a violation of the standard Bell-CHSH inequality[37]. This in turn raised the question of whether network correlations can truly exhibit any form of non-classicality that is completely independent of standard Bell nonlocality. In [1], the authors first resolved this doubt. They provided examples of correlations that are nontrilocal (non \(n\)-local for \(n\)=3) in a triangle network. They also justified that such non-classical correlations are typical of measurement scenarios involving independent sources and thereby can never be simulated in a standard Bell experiment. In [1], the authors provided specific instances of nontrilocal correlations. For that, they analyzed the structure of correlations generated in a triangle network where each source generates the same pure two-qubit state and each observer performs specific joint measurements[1]. In this context, it is interesting to frame a criterion that can detect the nontrilocality of correlations (in triangle networks) in general. This work characterizes the set of trilocal correlations arising in a triangle network where each party performs a fixed measurement with four outputs.
For this purpose, a set of non-linear Bell-type inequalities has been constructed. This set emerges as a necessary condition for trilocality. Violation of at least one of these inequalities suffices to ensure the nontrilocality of the corresponding correlations. Under a suitable measurement context, there exist families of quantum states violating at least one such inequality. Interestingly, it is observed that some of these inequalities can detect nontrilocal quantum correlations even when none of the two-qubit states is Bell-CHSH nonlocal. Consequently, the set of inequalities emerges as the first trilocal criterion in a fixed input scenario whose violation cannot be traced back to a violation of the Bell-CHSH inequality. Besides, it is observed that nontrilocal correlations can be simulated in a network involving separable states and/or choosing a product state basis as the local measurement basis for each party. An inequality from the set is used for detecting pure entanglement. The procedure of constructing trilocal inequalities can be generalized beyond the triangle topology, so that the network involves more than three independent sources. The rest of the work is organized as follows: in sec.II, the motivation of the present discussion is illustrated. Sec.III provides some basic prerequisites. A set of trilocal inequalities for the triangle network is introduced in sec.IV. Sec.V deals with the quantum violation of members from the set, thereby characterizing quantum nontrilocal correlations in a triangle network. In sec.VI, the utility of the set of trilocal criteria is discussed from the perspective of entanglement detection. The noise tolerance of some members from the set of inequalities is analyzed in sec.VII. In sec.VIII, a comparison is made between some of the trilocal inequalities for a triangle network and the existing trilocal inequality for a linear trilocal network[14]. The triangle network scenario is generalized in sec.IX. Finally, the discussion ends with some concluding remarks in sec.X. ## II Motivation Network scenarios with multiple independent sources correspond to different quantum network topologies[5; 6; 7; 8; 9; 33]. From experimental perspectives, quantum non \(n\)-locality has emerged as a non-classical resource in different network-based practical tasks[26]. Moreover, non-classicality (non \(n\)-locality) in a fixed input scenario points out some intriguing features of network correlations[1] which have no analog in the standard Bell experiment. Detection of this type of network nonlocality motivates the present discussion. Regarding the fixed input scenario, only a few particular instances of quantum nontrilocal correlations exist in a triangle network[1]. Recently, triangle networks, along with other network scenarios, have been analyzed from different perspectives[27; 28; 29]. However, to the best of the author's knowledge, there does not exist any criterion to detect nontrilocality in such a scenario. In this context, a set of criteria in the form of non-linear Bell-type inequalities for detecting nontrilocality is derived here. Testing any such criterion is expected to be experimentally feasible, as it is based on correlators. A network is said to be open if at least one of the parties receives particles from a single source. Considering the parties as vertices, the arrangement of parties in such a network does not correspond to the vertices of any polygon. Non-linear \(n\)-local inequalities exist for open-type \(n\)-local networks[26].
In this context, it becomes interesting to derive Bell-type inequalities for closed \(n\)-local networks. A closed network refers to an arrangement of the parties and the sources such that each pair of parties receives particles from a common source. So, in a closed network, the parties can be interpreted as the vertices of a polygon. The simplest form of closed network topology is considered here. The establishment of trilocal triangle network inequalities, followed by a multi-directional analysis of their quantum violation, forms the basis of the current discussion. Once framed, a comparison of the triangle trilocal inequalities with the existing trilocal inequality for a linear network[14] becomes essential. Besides, it is also crucial to generalize the present study for analyzing \(n\)-local closed networks. At this juncture, it may be noted that Finner inequalities[38] can be used to detect whether a given set of correlations is generated in a network or not[39]. However, in a network, the corresponding Finner inequality is not only compatible with a local distribution but is also satisfied when the sources generate bipartite quantum states or arbitrary no-signaling resources[39]. So any Finner inequality can be used to differentiate network correlations (compatible with the Finner inequality considered) from those which are not generated in the network[39]. But no such inequality can be used to detect the nonlocal behavior (if any) of the corresponding set of network correlations. It may be noted that, because of the source independence assumption, trilocal hidden variable models form a subclass of general local hidden variable (LHV) models. The inexplicability of network correlations by any trilocal hidden variable model does not ensure nonlocality in general, as there may exist some more general LHV models explaining those correlations. So nontrilocality forms a restricted form of nonlocality. Yet, a study of nontrilocality is important, as this form of non-classicality is compatible with network topologies involving distant sources such that each source independently distributes particles to a subgroup of parties. Such networks are commonly used in the field of standard information technology (e.g., the internet). With progress in quantum information science, quantum counterparts of this type of network are forming building blocks in different information processing tasks[6; 30; 31]. Apart from theoretical advancement, there has been rapid technological development towards scalable quantum networks, which in turn leads to multiple uses of quantum networks[5; 6; 8]. Hence, from a practical perspective, studying the non \(n\)-locality of quantum correlations warrants attention. ## III Preliminaries ### Bloch Matrix Representation of a Two-Qubit State Let \(\rho\) denote any two-qubit state. The corresponding density matrix is given in terms of Bloch parameters: \[\varrho=\frac{1}{4}(\mathbb{I}_{2}\otimes\mathbb{I}_{2}+\vec{a}.\vec{\sigma}\otimes\mathbb{I}_{2}+\mathbb{I}_{2}\otimes\vec{b}.\vec{\sigma}+\sum_{j_{1},j_{2}=1}^{3}w_{j_{1}j_{2}}\sigma_{j_{1}}\otimes\sigma_{j_{2}}), \tag{1}\] with \(\vec{\sigma}{=}(\sigma_{1},\sigma_{2},\sigma_{3})\), \(\sigma_{j_{k}}\) denoting the Pauli operators along three mutually perpendicular directions (\(j_{k}{=}1,2,3\)).
\(\vec{a}{=}(x_{1},x_{2},x_{3})\) and \(\vec{b}{=}(y_{1},y_{2},y_{3})\) denote the local Bloch vectors (\(\vec{a},\vec{b}{\in}\mathbb{R}^{3}\)) corresponding to parties \(\mathcal{A}\) and \(\mathcal{B}\) respectively, with \(|\vec{a}|,|\vec{b}|{\leq}1\), and \((w_{j_{1}j_{2}})_{3\times 3}\) stands for the correlation tensor matrix \(\mathcal{W}\) (a real matrix). The components \(w_{j_{1}j_{2}}\) of \(\mathcal{W}\) are given by \(w_{j_{1}j_{2}}{=}\mathrm{Tr}[\rho\,\sigma_{j_{1}}\otimes\sigma_{j_{2}}]\). \(\mathcal{W}\) can be diagonalized by applying suitable local unitary operations[34; 35], after which the simplified expression is given by: \[\varrho^{{}^{\prime}}=\frac{1}{4}(\mathbb{I}_{2}\otimes\mathbb{I}_{2}+\vec{m}.\vec{\sigma}\otimes\mathbb{I}_{2}+\mathbb{I}_{2}\otimes\vec{n}.\vec{\sigma}+\sum_{j=1}^{3}t_{jj}\sigma_{j}\otimes\sigma_{j}), \tag{2}\] The correlation tensor in Eq.(2) is given by \(T{=}\mathrm{diag}(t_{11},t_{22},t_{33})\), where \(t_{11},t_{22},t_{33}\) are the eigenvalues of \(\sqrt{\mathcal{W}^{\dagger}\mathcal{W}}\), i.e., the singular values of \(\mathcal{W}\).
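As a small illustration of Eqs. (1)-(2), the correlation matrix \(\mathcal{W}\) and its singular values can be computed directly; the NumPy sketch below is our own code, not part of the paper.

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [sx, sy, sz]

def bloch_data(rho):
    """Local Bloch vectors and singular values (t11, t22, t33) of W."""
    a = np.array([np.trace(rho @ np.kron(s, I2)).real for s in paulis])
    b = np.array([np.trace(rho @ np.kron(I2, s)).real for s in paulis])
    W = np.array([[np.trace(rho @ np.kron(s1, s2)).real for s2 in paulis]
                  for s1 in paulis])          # w_{j1 j2} = Tr[rho s_j1 x s_j2]
    return a, b, np.linalg.svd(W, compute_uv=False)

# Example: |phi+> has vanishing local vectors and t11 = t22 = t33 = 1.
phi = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho = np.outer(phi, phi.conj())
print(bloch_data(rho))
```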
### \(n\)-local Linear Network Consider \(n\) independent sources \(\mathcal{S}_{1},\mathcal{S}_{2},...,\mathcal{S}_{n}\) together with \(n+1\) distant parties \(\mathcal{P}_{1},\mathcal{P}_{2},...,\mathcal{P}_{n+1}\) arranged in a linear fashion (see Fig.1). \(\forall i{=}1,2,...,n\), \(\mathcal{S}_{i}\) independently distributes physical systems to the pair of parties \((\mathcal{P}_{i},\mathcal{P}_{i+1})\). Each of \(\mathcal{P}_{1}\) and \(\mathcal{P}_{n+1}\) receives a particle from a single source, \(\mathcal{S}_{1}\) and \(\mathcal{S}_{n}\) respectively. So the network is of the open type. A source \(\mathcal{S}_{i}\) is characterized by a variable \(\lambda_{i}\). The sources being independent, the joint distribution of \(\lambda_{1},...,\lambda_{n}\) is factorizable: \[\rho(\lambda_{1},...\lambda_{n})=\Pi_{i=1}^{n}\rho_{i}(\lambda_{i}), \tag{3}\] where \(\forall i\), \(\rho_{i}\) denotes the normalized distribution of \(\lambda_{i}\). Eq.(3) represents the \(n\)-local constraint. For \(n{=}2\), Eq.(3) reduces to the bilocal constraint. \(\forall i{=}2,3,...,n\), party \(\mathcal{P}_{i}\) performs a fixed measurement on the joint state of the two subsystems received from the two sources \(\mathcal{S}_{i-1}\), \(\mathcal{S}_{i}\). Each of \(\mathcal{P}_{1}\) and \(\mathcal{P}_{n+1}\) chooses from a collection of two dichotomous measurements. The \(n+1\)-partite network correlations are local if they can be decomposed as: \[\begin{split}& p(\mathfrak{o}_{1},\bar{\mathfrak{o}}_{2},...,\bar{\mathfrak{o}}_{n},\mathfrak{o}_{n+1}|y_{1},y_{n+1})=\\ &\int_{\Lambda_{1}}\int_{\Lambda_{2}}...\int_{\Lambda_{n}}d\lambda_{1}d\lambda_{2}...d\lambda_{n}\rho(\lambda_{1},\lambda_{2},...\lambda_{n})R\end{split} \tag{4}\] where \[R=p(\mathfrak{o}_{1}|y_{1},\lambda_{1})\Pi_{i=2}^{n}p(\bar{\mathfrak{o}}_{i}|\lambda_{i-1},\lambda_{i})p(\mathfrak{o}_{n+1}|y_{n+1},\lambda_{n})\] Notations appearing in the above expression are detailed below: * \(\forall i\), \(\Lambda_{i}\) denotes the set of all possible values of the local hidden variable \(\lambda_{i}\). * \(y_{1},y_{n+1}{\in}\{0,1\}\) denote the measurements of \(\mathcal{P}_{1}\) and \(\mathcal{P}_{n+1}\) respectively. * \(\mathfrak{o}_{1},\mathfrak{o}_{n+1}{\in}\{0,1\}\) denote the outputs of \(\mathcal{P}_{1}\) and \(\mathcal{P}_{n+1}\) respectively. * \(\forall i\), \(\bar{\mathfrak{o}}_{i}{=}(\mathfrak{o}_{i1},\mathfrak{o}_{i2})\) denotes the four outputs of the fixed measurement of \(\mathcal{P}_{i}\), with \(\mathfrak{o}_{ij}{\in}\{0,1\}\). In this linear scenario, the \(n+1\)-partite network correlations are \(n\)-local if they satisfy both Eqs.(3,4). Any set of correlations that does not admit such a decomposition is said to be non \(n\)-local. The \(n\)-local inequality[14] for this scenario is given by: \[\sqrt{|I|}+\sqrt{|J|}\leq 1, \tag{5}\] \[I=\frac{1}{4}\sum_{y_{1},y_{n+1}}\langle O_{1}O_{2}^{0}....O_{n}^{0}O_{n+1}\rangle\] \[J=\frac{1}{4}\sum_{y_{1},y_{n+1}}(-1)^{y_{1}+y_{n+1}}\langle O_{1}O_{2}^{1}...O_{n}^{1}O_{n+1}\rangle\text{ with }\] \[\langle O_{1}O_{2}^{i}....O_{n}^{i}O_{n+1}\rangle=\sum_{\mathcal{D}}(-1)^{\mathfrak{o}_{1}+\mathfrak{o}_{n+1}+\mathfrak{o}_{2(i+1)}+...+\mathfrak{o}_{n(i+1)}}R_{1},\ i=0,1\] where \(R_{1}=p(\mathfrak{o}_{1},\bar{\mathfrak{o}}_{2},...,\bar{\mathfrak{o}}_{n},\mathfrak{o}_{n+1}|y_{1},y_{n+1})\) and \(\mathcal{D}=\{\mathfrak{o}_{1},\mathfrak{o}_{21},\mathfrak{o}_{22},...,\mathfrak{o}_{n1},\mathfrak{o}_{n2},\mathfrak{o}_{n+1}\}\). Violation of Eq.(5) guarantees the non \(n\)-local behavior of the corresponding correlations. ### Triangle Network[1] A triangle network is a non-linear arrangement of three parties \(\mathcal{P}_{1},\mathcal{P}_{2},\mathcal{P}_{3}\) (say) and three independent sources \(\mathcal{S}_{1},\mathcal{S}_{2}\) and \(\mathcal{S}_{3}\), with the parties corresponding to the vertices of a triangle (see Fig.2). Each source is placed between two parties. \(\mathcal{S}_{1},\mathcal{S}_{2}\) and \(\mathcal{S}_{3}\) distribute particles to the pairs \((\mathcal{P}_{1},\mathcal{P}_{2})\), \((\mathcal{P}_{2},\mathcal{P}_{3})\) and \((\mathcal{P}_{1},\mathcal{P}_{3})\) respectively. Each party thus receives two physical systems. A local variable \(\lambda_{i}\) characterizes \(\mathcal{S}_{i}\) for \(i{=}1,2,3\). The trilocal (source independence) assumption takes the form: \[\rho(\lambda_{1},\lambda_{2},\lambda_{3})=\rho_{1}(\lambda_{1})\rho_{2}(\lambda_{2})\rho_{3}(\lambda_{3}), \tag{6}\] with \(\rho_{i}\) corresponding to the normalized stochastic distribution of the local variable \(\lambda_{i}\), \(\forall i\). Let \(x_{1},x_{2}\) and \(x_{3}\) denote the fixed local inputs of \(\mathcal{P}_{1},\mathcal{P}_{2}\) and \(\mathcal{P}_{3}\) respectively. For \(i{=}1,2,3\), \(x_{i}\) has four outputs denoted by a two-component vector \(\bar{\mathfrak{o}}_{i}{=}(\mathfrak{o}_{i1},\mathfrak{o}_{i2})\), where each component \(\mathfrak{o}_{ij}{\in}\{0,1\}\). Tripartite local correlations are of the form: \[\begin{split} p(\bar{\mathfrak{o}}_{1},\bar{\mathfrak{o}}_{2},\bar{\mathfrak{o}}_{3})=\int_{\Lambda_{1}}\int_{\Lambda_{2}}\int_{\Lambda_{3}}d\lambda_{1}d\lambda_{2}d\lambda_{3}\rho(\lambda_{1},\lambda_{2},\lambda_{3})R_{2}\\ \text{ where }R_{2}=p(\bar{\mathfrak{o}}_{1}|\lambda_{1},\lambda_{3})p(\bar{\mathfrak{o}}_{2}|\lambda_{1},\lambda_{2})p(\bar{\mathfrak{o}}_{3}|\lambda_{2},\lambda_{3}).\end{split} \tag{7}\] Figure 1: _Schematic representation of linear \(n\)-local network_ Figure 2: _Schematic representation of triangle network_ Any set of correlations generated in the network is trilocal if it is local, i.e., satisfies Eq.(7), and also satisfies the trilocal constraint given by Eq.(6). Correlations lacking this type of representation (as specified by Eqs.(6,7)) are said to be _nontrilocal_.
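An explicit trilocal model of the form of Eqs. (6)-(7) is easy to tabulate numerically. The sketch below is our own code; the finite hidden-variable cardinality and the random response functions are illustrative assumptions, and the result is a valid trilocal behavior \(p(\bar{\mathfrak{o}}_{1},\bar{\mathfrak{o}}_{2},\bar{\mathfrak{o}}_{3})\).

```python
import numpy as np

rng = np.random.default_rng(7)
L = 4                                   # values taken by each lambda_i

def norm(x, axis=-1):
    return x / x.sum(axis=axis, keepdims=True)

rho = [norm(rng.uniform(size=L)) for _ in range(3)]          # rho_i(lambda_i)
# Response functions p(o | lambda, lambda'), with o in {0,1,2,3} encoding
# the bit pair (o_i1, o_i2) = (o >> 1, o & 1).
resp = [norm(rng.uniform(size=(L, L, 4))) for _ in range(3)]

p = np.zeros((4, 4, 4))
for l1 in range(L):
    for l2 in range(L):
        for l3 in range(L):
            w = rho[0][l1] * rho[1][l2] * rho[2][l3]         # Eq. (6)
            # P1 sees (lambda1, lambda3), P2 (lambda1, lambda2),
            # P3 (lambda2, lambda3), as in Eq. (7).
            p += w * np.einsum('a,b,c->abc', resp[0][l1, l3],
                               resp[1][l1, l2], resp[2][l2, l3])
print(p.sum())   # 1.0: a normalized trilocal behavior
```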
## IV Characterizing trilocal network correlations By construction, as specified in subsec.III.3, the collection of trilocal correlations forms a proper subset of the network local correlations satisfying only Eq.(7). Due to the non-linear constraint (Eq.(6)), the set of trilocal correlations is non-convex. A set of non-linear Bell-type inequalities is now framed. This set characterizes correlations in a triangle network in the sense that every member of the set is necessarily satisfied by trilocal correlations generated in the network. _Theorem.1:_ In a triangle network, an arbitrary set of tripartite trilocal correlations necessarily satisfies: \[\sqrt{|I_{1}|}+\sqrt{|I_{2}|}\leq 1, \tag{8}\] where the correlator terms \(I_{1}\), \(I_{2}\) are specified as follows: \[I_{j}=\frac{1}{4}\sum_{i,k=0,1}(-1)^{j(i+k)}\langle C_{1}^{i}C_{2}^{j-1}C_{3}^{k}\rangle,\] where \[\langle C_{1}^{i}C_{2}^{j-1}C_{3}^{k}\rangle=\sum_{C}(-1)^{f(i,\vec{\sigma}_{1})+g(j-1,\vec{\sigma}_{2})+h(k,\vec{\sigma}_{3})}p(\vec{\sigma}_{1},\vec{\sigma}_{2},\vec{\sigma}_{3}) \tag{9}\] with \(j{=}1,2\), \(C{=}\{\sigma_{11},\sigma_{12},\sigma_{21},\sigma_{22},\sigma_{31},\sigma_{32}\}\) and \(\sigma_{ij}{\in}\{0,1\}\). In the above expression, each of \(f(.,.),g(.,.)\) and \(h(.,.)\) denotes an arbitrary integer-valued function. _Proof:_ See Appendix A. \(\forall f,g,h\), the criterion provided by Eq.(8) is satisfied by trilocal correlations. So, Eq.(8) gives a set of trilocal inequalities. Violation of at least one of these inequalities (for a suitable form of \(f,g,h\)) ensures the _nontrilocality_ of the measurement statistics. The corresponding triangle network is said to be _nontrilocal_. Violation of at least one of the inequalities specified by Eq.(8) thus acts as a detection criterion for nontrilocality in a triangle network. However, such a criterion being only sufficient for the purpose, there may exist nontrilocal correlations satisfying Eq.(8) for all possible forms of \(f,g,h\). One may also note that the labeling of the parties can be interchanged in Eq.(9) to get a different set of trilocal inequalities. Having established a set of trilocal inequalities, quantum violation of the same is analyzed in the next section. ## V Quantum triangle network Let each source \(\mathcal{S}_{i}\) distribute a two-qubit state \(\varrho_{i}\) (\(i{=}1,2,3\)). Each of \(\mathcal{P}_{1},\mathcal{P}_{2}\) and \(\mathcal{P}_{3}\) receives two qubits (see Fig.2). To be specific, \(\varrho_{1}\) gets distributed among parties \(\mathcal{P}_{1},\mathcal{P}_{2}\); \(\varrho_{2}\) gets distributed among \(\mathcal{P}_{2},\mathcal{P}_{3}\); and \(\varrho_{3}\) gets distributed among \(\mathcal{P}_{1},\mathcal{P}_{3}\). The overall quantum state in the network is thus given by: \[\varrho_{123}=\varrho_{1}\otimes\varrho_{2}\otimes\varrho_{3} \tag{10}\] Each of the parties now performs a fixed local quantum measurement on the joint state of their respective subsystems (two qubits). In case the measurement statistics violate a trilocal inequality (Eq.(8)), nontrilocality is observed in the quantum network. However, no such definite conclusion can be made otherwise. In a triangle network, a six-qubit state \(\varrho_{123}\) is distributed among the three parties, where each party receives two qubits from two different bipartite states. In case violation is observed, it may be attributed to the _nontrilocality_ of \(\varrho_{123}\) (Eq.(10)).
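Before specializing to quantum states, note that the quantity in Theorem 1 is directly computable from any tripartite behavior. Below is a small Python sketch, our own code: the encoding of the four outputs as bit pairs and the sample integer-valued function are our conventions for illustration.

```python
import numpy as np

def bits(o):
    """Our encoding of the four outputs as bit pairs (o_1, o_2)."""
    return (o >> 1) & 1, o & 1

def correlator(p, f, g, h, i, jm1, k):
    """<C_1^i C_2^{j-1} C_3^k> of Eq. (9)."""
    return sum((-1)**(f(i, bits(o1)) + g(jm1, bits(o2)) + h(k, bits(o3)))
               * p[o1, o2, o3]
               for o1 in range(4) for o2 in range(4) for o3 in range(4))

def lhs(p, f, g, h):
    """sqrt|I1| + sqrt|I2| of Eq. (8)."""
    I1 = sum((-1)**(i + k) * correlator(p, f, g, h, i, 0, k)
             for i in (0, 1) for k in (0, 1)) / 4.0
    I2 = sum(correlator(p, f, g, h, i, 1, k)
             for i in (0, 1) for k in (0, 1)) / 4.0
    return np.sqrt(abs(I1)) + np.sqrt(abs(I2))

# One illustrative integer-valued choice for f = g = h.
fgh = lambda s, r: (1 - s) * r[0] + s * r[1]
p_uniform = np.full((4, 4, 4), 1 / 64)      # a manifestly trilocal behavior
print(lhs(p_uniform, fgh, fgh, fgh))        # 0.0 <= 1, as Theorem 1 requires
```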
### Quantum Violation To illustrate quantum violation, one of the instances given in [1] is first considered. Let \(\mathcal{S}_{1},\mathcal{S}_{2},\mathcal{S}_{3}\) each generate a Bell state: \[\varrho_{i}=|\phi^{+}\rangle\langle\phi^{+}|,\quad|\phi^{+}\rangle=\frac{1}{\sqrt{2}}(|00\rangle+|11\rangle),\ i=1,2,3 \tag{11}\] On receiving the qubits, each of \(\mathcal{P}_{1},\mathcal{P}_{2}\) and \(\mathcal{P}_{3}\) performs a measurement in the following orthonormal basis: \[|b_{1}\rangle=|01\rangle,\ |b_{2}\rangle=|10\rangle,\ |b_{3}\rangle=\alpha_{1}|00\rangle+\alpha_{2}|11\rangle\text{ and}\] \[|b_{4}\rangle=\alpha_{2}|00\rangle-\alpha_{1}|11\rangle,\text{ with }0<\alpha_{1},\alpha_{2}<1\text{ and }\alpha_{1}^{2}+\alpha_{2}^{2}=1 \tag{12}\] Now, violation of Eq.(8) for any specific form of the integer-valued functions \(f,g,h\) ensures the nontrilocality of the corresponding correlations. Let us consider the following integer-valued functions: \[f(s,\vec{r})=g(s,\vec{r})=(1-s)r_{1}+s(r_{2}+1)\text{ and}\] \[h(s,\vec{r})=(1-s)|r_{1}+r_{2}-1|+s|r_{1}r_{2}-1|\] \[\text{where }\vec{r}=(r_{1},r_{2})\text{ and }s,r_{1},r_{2}\in\{0,1\} \tag{13}\] In our discussion, we will refer to the specific trilocal inequality obtained using the above functions as Eqs.(8,13). For the corresponding measurement statistics, Eqs.(8,13) give: \[\sqrt{|2\alpha_{1}^{2}-\alpha_{2}^{2}|}+\sqrt{|\alpha_{1}^{6}-\alpha_{1}^{4}\alpha_{2}^{2}+\alpha_{2}^{6}+\alpha_{1}^{2}(3-\alpha_{2}^{4})|}\leq 2^{\frac{3}{2}} \tag{14}\] This relation (Eq.(14)) between the measurement parameters \(\alpha_{1},\alpha_{2}\) points out that violation of Eqs.(8,13) is obtained for \(\alpha_{1}>0.892\) (see Fig.3). Consequently, nontrilocality is observed in the network. _Reproducing the Result of [1]:_ In [1], the authors provided an instance of nontrilocal correlations. They used a Bell state (Eq.(11)) and the measurement contexts given by Eq.(12). Precisely speaking, they claimed that there does not exist any trilocal model explaining the correlations arising for \(0.785<\alpha_{1}^{2}<1\), i.e., for \(\alpha_{1}\in(0.886,1)\). Now, the correlations violate Eqs.(8,13) for \(\alpha_{1}\in(0.892,1)\). So, except for the small gap \((0.886,0.892]\), the trilocal inequality (Eqs.(8,13)) successfully detects the nontrilocality that was ensured in [1] by proving the nonexistence of any trilocal model.
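The instance above can also be explored numerically. The sketch below (our own code) builds the triangle behavior for three \(|\phi^{+}\rangle\) sources and the basis of Eq. (12) via a tensor contraction, and then evaluates Eqs. (8),(13). The assignment of \(|b_{1}\rangle,\ldots,|b_{4}\rangle\) to bit pairs \((\mathfrak{o}_{i1},\mathfrak{o}_{i2})\) is our own assumption, since the text does not fix it, so the computed violation threshold need not reproduce Eq. (14) exactly.

```python
import numpy as np

phi = np.eye(2) / np.sqrt(2)                     # |phi+> as a 2x2 tensor

def basis(a1):
    """The orthonormal basis of Eq. (12) as 2x2 tensors."""
    a2 = np.sqrt(1 - a1**2)
    b = np.zeros((4, 2, 2))
    b[0, 0, 1] = 1.0                             # |b1> = |01>
    b[1, 1, 0] = 1.0                             # |b2> = |10>
    b[2, 0, 0], b[2, 1, 1] = a1, a2              # |b3>
    b[3, 0, 0], b[3, 1, 1] = a2, -a1             # |b4>
    return b

def behavior(a1):
    """p(o1, o2, o3); sources hold qubits (q1,q2), (q3,q4), (q5,q6) and
    P1 holds (q1,q6), P2 (q2,q3), P3 (q4,q5)."""
    B = basis(a1)
    p = np.zeros((4, 4, 4))
    for o1 in range(4):
        for o2 in range(4):
            for o3 in range(4):
                amp = np.einsum('af,bc,de,ab,cd,ef->',
                                B[o1], B[o2], B[o3], phi, phi, phi)
                p[o1, o2, o3] = amp**2           # all amplitudes are real here
    return p

fg = lambda s, r: (1 - s) * r[0] + s * (r[1] + 1)                   # Eq. (13)
h13 = lambda s, r: (1 - s) * abs(r[0] + r[1] - 1) + s * abs(r[0] * r[1] - 1)
bits = lambda o: ((o >> 1) & 1, o & 1)           # assumed outcome labeling

def lhs(p):
    C = lambda i, j, k: sum((-1)**(fg(i, bits(o1)) + fg(j, bits(o2))
                                   + h13(k, bits(o3))) * p[o1, o2, o3]
                            for o1 in range(4) for o2 in range(4)
                            for o3 in range(4))
    I1 = sum((-1)**(i + k) * C(i, 0, k) for i in (0, 1) for k in (0, 1)) / 4
    I2 = sum(C(i, 1, k) for i in (0, 1) for k in (0, 1)) / 4
    return np.sqrt(abs(I1)) + np.sqrt(abs(I2))

for a1 in (0.85, 0.90, 0.95):
    print(a1, lhs(behavior(a1)))                 # values above 1 flag nontrilocality
```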
Next, let each source generate an identical copy of a separable state:

\[\varrho_{sep}=\frac{1}{2}(|00\rangle\langle 00|+|11\rangle\langle 11|) \tag{18}\]

Let each party now measure in the orthonormal basis given by:

\[|b_{1}\rangle=\alpha_{1}|01\rangle+\alpha_{2}|10\rangle,|b_{2}\rangle=\alpha_{2}|01\rangle-\alpha_{1}|10\rangle,\]
\[|b_{3}\rangle=\alpha_{3}|00\rangle+\alpha_{4}|11\rangle,|b_{4}\rangle=\alpha_{4}|00\rangle-\alpha_{3}|11\rangle \tag{19}\]

with \(0\leq\alpha_{1},\alpha_{2},\alpha_{3},\alpha_{4}\)\(\leq\)1 and \(\alpha_{1}^{2}+\alpha_{2}^{2}\)\(=\)\(\alpha_{3}^{2}+\alpha_{4}^{2}\)\(=\)1. Violation of the trilocal inequality (Eqs.(8,16)) is observed if:

\[\sqrt{|1+\alpha_{2}^{2}+2\alpha_{2}^{4}-\alpha_{4}^{2}-6\alpha_{2}^{2}\alpha_{4}^{2}+4\alpha_{4}^{4}|}+\]
\[\sqrt{|(1-2\alpha_{2}^{2})(1+\alpha_{2}^{2}-3\alpha_{4}^{2})|}>2^{\frac{3}{2}}. \tag{20}\]

There exists a range of the measurement parameters \(\alpha_{2},\alpha_{4}\) for which the above relation is true (see Fig.4). So, nontrilocality is obtained even if the network involves separable states.

Figure 4: _Shaded region indicates the measurement parameters \(\alpha_{2},\alpha_{4}\) of the orthogonal basis (Eq.(19)) for which nontrilocal correlations are observed in a triangle network involving identical copies of a separable state (Eq.(18))._

Figure 3: _Plotting range of measurement parameter \(\alpha_{1}\) for which Eq.(14) is violated, assuming \(\alpha_{2}\)\(=\)\(\sqrt{1-\alpha_{1}^{2}}\) (Eq.(12)). In this case three identical copies of a pure entangled state (Eq.(11)) are used in the network._

Now for \((\alpha_{2},\alpha_{4}){=}(1,0)\) or \((0,1)\), the basis given by Eq.(19) is a product state basis. Also, one may note that violation is observed for these particular values of the measurement parameters. This in turn justifies that nontrilocal correlations are generated in a triangle network involving separable states and/or measurement on a product state basis. In this context, it may be noted that the same is not true in case each source generates a product state. Precisely speaking, for no form of the integer-valued functions \(f,g,h\) is the trilocal inequality (Eq.(8)) violated in a triangle network involving two-qubit product states. The illustration follows below.

### No Detection of Nontrilocality in Network Involving Product States

\(\forall i\), let \(\mathcal{S}_{i}\) distribute a two-qubit product state:

\[\varrho_{prod}^{(i,i+1)}=\frac{1}{4}(\mathbb{I}_{2}+\vec{u}_{i}.\vec{\sigma})(\mathbb{I}_{2}+\vec{u}_{i+1}.\vec{\sigma}) \tag{21}\]
\[\text{where}\quad i=1,3,5\ \text{and}\ ||\vec{u}_{i}||,||\vec{u}_{i+1}||\leq 1.\]

The overall state in the network is thus given by:

\[\varrho_{prod}=\frac{1}{2^{6}}\Pi_{i=1}^{6}(\mathbb{I}_{2}+\vec{u}_{i}.\vec{\sigma}). \tag{22}\]

As per the arrangement of the parties discussed in subsec.III.3, party \(\mathcal{P}_{1}\) receives two qubits forming the product state \(\frac{1}{4}(\mathbb{I}_{2}+\vec{u}_{1}.\vec{\sigma})(\mathbb{I}_{2}+\vec{u}_{6}.\vec{\sigma})\). Similarly, \(\mathcal{P}_{2}\) and \(\mathcal{P}_{3}\) have \(\frac{1}{4}(\mathbb{I}_{2}+\vec{u}_{2}.\vec{\sigma})(\mathbb{I}_{2}+\vec{u}_{3}.\vec{\sigma})\) and \(\frac{1}{4}(\mathbb{I}_{2}+\vec{u}_{4}.\vec{\sigma})(\mathbb{I}_{2}+\vec{u}_{5}.\vec{\sigma})\) respectively. Let \(M_{i}\) denote the single measurement (with four possible outputs) performed by \(\mathcal{P}_{i}\,(i{=}1,2,3)\) on its respective two-qubit state.
Let \(\vec{\sigma}_{i}{=}(\sigma_{i1},\sigma_{i2})\) with \(\sigma_{ij}{\in}\{0,1\}\) denote the four outputs of \(M_{i}\). Clearly, in this case, each probability term \(p(\vec{\sigma}_{1},\vec{\sigma}_{2},\vec{\sigma}_{3})\) is factorizable. So,

\[\langle C_{1}^{i}C_{2}^{j-1}C_{3}^{k}\rangle=\langle C_{1}^{i}\rangle\langle C_{2}^{j-1}\rangle\langle C_{3}^{k}\rangle,\ \forall i,j,k. \tag{23}\]

Using the above relation, we get:

\[I_{1}=\frac{\langle C_{1}^{0}-C_{1}^{1}\rangle\langle C_{2}^{0}\rangle\langle C_{3}^{0}-C_{3}^{1}\rangle}{4}\]
\[I_{2}=\frac{\langle C_{1}^{0}+C_{1}^{1}\rangle\langle C_{2}^{1}\rangle\langle C_{3}^{0}+C_{3}^{1}\rangle}{4}. \tag{24}\]

For \(M_{2}\), the corresponding correlator term \(\langle C_{2}^{j}\rangle\) is given by:

\[\langle C_{2}^{j}\rangle=\sum_{\sigma_{21},\sigma_{22}}(-1)^{g(j,\vec{\sigma}_{2})}p(\vec{\sigma}_{2}), \tag{25}\]

where \(j{=}0,1\) and \(g(j,\vec{\sigma}_{2})\) is any integer-valued function. Eq.(25) implies that \(|\langle C_{2}^{j}\rangle|{\leq}1\). So Eq.(24) gives:

\[\sqrt{|I_{1}|}+\sqrt{|I_{2}|}\leq\frac{1}{2}\sum_{t=0}^{1}\sqrt{\Pi_{j=1,3}|\sum_{i=0}^{1}((-1)^{it}\langle C_{j}^{i}\rangle)|}. \tag{26}\]

Now for any 4 positive numbers \(k_{1},k_{2},k_{3},k_{4}\), we have:

\[\sqrt{k_{1}k_{2}}+\sqrt{k_{3}k_{4}}\leq\sqrt{k_{1}+k_{3}}\sqrt{k_{2}+k_{4}}, \tag{27}\]

which follows on squaring both sides, since \(2\sqrt{(k_{1}k_{4})(k_{2}k_{3})}\leq k_{1}k_{4}+k_{2}k_{3}\) by the AM-GM inequality. Using Eq.(27), we get from Eq.(26):

\[\sqrt{|I_{1}|}+\sqrt{|I_{2}|}\leq\Pi_{j=1,3}\sqrt{\frac{\Sigma_{i=0}^{1}|\langle C_{j}^{0}\rangle+(-1)^{i}\langle C_{j}^{1}\rangle|}{2}}\]
\[=\Pi_{j=1,3}\sqrt{\max\{|\langle C_{j}^{0}\rangle|,|\langle C_{j}^{1}\rangle|\}}\]
\[\leq 1 \tag{28}\]

Hence, none of the trilocal inequalities from the set (Eq.(8)) shows a violation. Consequently, nontrilocality is not detected by Eq.(8) when each source distributes a product state in the network. Such an observation points out that, starting with no correlation between any pair of parties, nontrilocality cannot be detected if the parties are not allowed to interact among themselves in the network. Based on the above analysis of quantum violation of the trilocal inequality (Eq.(8)), a pure bipartite entanglement detection scheme is designed next.

## VI Detection of Pure Entanglement

Consider the particular trilocal inequality given by Eqs.(8,16) in a triangle network. Let each of the three sources \(\mathcal{S}_{i}\) generate an unknown pure bipartite state \(\kappa_{i}\) (\(i{=}1,2,3\)). Qubits of the states are distributed to the parties. Each of \(\mathcal{P}_{1},\mathcal{P}_{2}\) and \(\mathcal{P}_{3}\) performs a measurement in the basis given by Eq.(12). The corresponding tripartite correlations are used to test the trilocal inequality (Eqs.(8,16)). Violation of this inequality ensures that each of \(\kappa_{1},\kappa_{2}\) and \(\kappa_{3}\) is entangled. Such a deduction is based on the observation that violation of this inequality is impossible if at least one of the three sources generates a pure product state. The justification of the claim is stated below. It is already discussed in subsec.V.3 that if all the states used in the network are product states, then there is no violation of any trilocal inequality from the set given by Eq.(8). So, given the information that all the sources generate a pure state, for violation of Eqs.(8,16), at least one of those pure states must be entangled.
Let only one source generate a product state (Eq.(21) for \(||\vec{u}_{i}||,||\vec{u}_{i+1}||{=}1\)) and each of the remaining sources generate a pure entangled state[41]:

\[\varrho_{ent}^{i}=\tau_{1i}|00\rangle+\tau_{2i}|11\rangle\text{ with }\tau_{1i}^{2}+\tau_{2i}^{2}=1. \tag{29}\]

In Eq.(29), \(i\) denotes the labeling of the source generating the corresponding state (\(\mathcal{S}_{i}\) generating \(\varrho^{i}_{ent}\)). The constraints required for a violation of the trilocal inequality (Eqs.(8,16)) for all possible cases (depending on which of the two sources distribute pure entanglement) are given in Table.1 in Appendix.B. By numerical maximization it is observed that the maximum of the left-hand side of Eqs.(8,16) does not exceed 1 for all possible values of the state and measurement parameters. So, violation of the particular inequality (Eqs.(8,16)) is impossible even if two of the three states involved in the network are entangled. However, it is evident from the discussions in subsec.V.1 that the same inequality can be violated when all the sources generate pure entanglement. Consequently, in case a violation is observed in a network involving only pure states, all of them must be entangled. However, in case the sources distribute mixed states, no definite conclusion can be drawn on observing a violation of the same inequality.

## VII Resistance to noise

A detection criterion provided by any of the trilocal inequalities (Eq.(8)) is based on correlators. From an experimental perspective, it thus becomes important to discuss the noise tolerance of any such criterion. In practical situations, testing any such inequality (Eq.(8)) may encounter different types of noise. Here, illustrations are provided for some of the possible cases, considering some particular trilocal inequalities, i.e., Eq.(8) for specific forms of \(f,g,h\).

### Error in Entanglement Generation

Let us first consider the ideal situation. Initially, each of \(\mathcal{S}_{1},\mathcal{S}_{2}\) and \(\mathcal{S}_{3}\) has the state \(\varrho{=}|10\rangle\langle 10|\). Application of the Hadamard gate (\(\mathcal{H}\)) on the first qubit followed by the C\(-\)NOT (\(\mathcal{C}-\mathcal{NOT}\)) gate (with the first qubit as control qubit) results in the generation of the Bell state \(|\phi^{-}\rangle\langle\phi^{-}|\)[5]. In practical situations, while generating the entangled state, imperfections may occur in applications of both one- and two-qubit operations. At each source, let \(p_{1}\) and \(p_{2}\) denote the imperfection parameters characterizing \(\mathcal{H}\) and \(\mathcal{C}-\mathcal{NOT}\) respectively.
After application of a noisy Hadamard gate, the state (\(\varrho\)) becomes[42]:

\[\varrho^{{}^{\prime}}=p_{1}(\mathcal{H}\otimes\mathbb{I}_{2}\,\varrho\,\mathcal{H}^{\dagger}\otimes\mathbb{I}_{2})+\frac{1-p_{1}}{2}\mathbb{I}_{2}\otimes\varrho_{2},\ \text{where}\ \varrho_{2}=\text{Tr}_{1}(\varrho)\]
\[=\frac{1}{2}(|00\rangle\langle 00|+|10\rangle\langle 10|)-\frac{p_{1}}{2}(|00\rangle\langle 10|+|10\rangle\langle 00|) \tag{30}\]

When subjected to the noisy \(\mathcal{C}-\mathcal{NOT}\), \(\varrho^{{}^{\prime}}\) becomes[42]:

\[\varrho^{{}^{\prime\prime}}=p_{2}(\mathcal{C}-\mathcal{NOT}\,\varrho^{{}^{\prime}}(\mathcal{C}-\mathcal{NOT})^{\dagger})+\frac{1-p_{2}}{4}\mathbb{I}_{2}\otimes\mathbb{I}_{2} \tag{31}\]
\[=\frac{1}{4}(\sum_{i,j=0}^{1}(1+(-1)^{i+j}p_{2})|ij\rangle\langle ij|-2p_{1}p_{2}(|11\rangle\langle 00|+|00\rangle\langle 11|))\]

In realistic situations, each source distributes \(\varrho^{{}^{\prime\prime}}\) (Eq.(31)) in the network. On receiving the qubits, each of the parties measures in the basis given by Eq.(12). Now, let us consider the particular trilocal inequality given by Eqs.(8,16). In an ideal scenario, this particular trilocal inequality detects nontrilocal correlations in the network. In a triangle network involving noisy states (Eq.(31)), the measurement statistics violate the inequality if:

\[\sqrt{|p_{2}(p_{2}-3p_{2}\alpha_{2}^{2}-(1-p_{2})\alpha_{2}^{4})|}+\]
\[\sqrt{|p_{2}^{2}(p_{2}-\alpha_{2}^{2}+4\alpha_{2}^{4})|}>2^{\frac{3}{2}}. \tag{32}\]

The above relation (Eq.(32)) is satisfied for some range of \(p_{2}\) and the measurement parameter \(\alpha_{2}\) (see Fig.5). This in turn illustrates the noise tolerance of the trilocal inequality (Eqs.(8,16)).

Figure 5: _Subspace of the parameter space \((\alpha_{2},p_{2})\) characterizing error (in entanglement generation) tolerance of the trilocal inequality (Eqs.(8,16)) is shown here._

### Communication Over Noisy Quantum Channels

Now let each source generate a pure entangled state (Eq.(11)). However, the states received by the parties are noisy due to the communication of one or both qubits (from sources in the network) over noisy quantum channels. For instance, let the bipartite state (from each source) be passed through a depolarizing channel characterized by a noise parameter \(p_{3}\) (say). Let depolarization occur with probability \(1-p_{3}\)[41]. Identical copies of the noisy class of states involved in the network are given by:

\[\varrho_{dep}=p_{3}|\phi^{-}\rangle\langle\phi^{-}|+\frac{1-p_{3}}{4}\mathbb{1}_{4}. \tag{33}\]

Let each of the parties perform a measurement in the measurement basis given by Eq.(12). The trilocal inequality (Eqs.(8,16)) gives:

\[\sqrt{|p_{3}^{2}(\alpha_{2}^{2}-4\alpha_{2}^{4}+p_{3})|}+\sqrt{|p_{3}(p_{3}-3\alpha_{2}^{2}p_{3}+\alpha_{2}^{4}(1+p_{3}))|}\leq 2^{\frac{3}{2}}. \tag{34}\]

Numerical maximization of the left-hand side expression of the above inequality yields \(0.568\) approximately. Hence, Eq.(34) is satisfied for all possible values of \(p_{3},\alpha_{2}\). This in turn implies that the particular trilocal inequality (Eqs.(8,16)) has zero noise tolerance in this case. So no violation is observed. Nontrilocality can, however, be detected over some range of \(p_{3}\) by considering some other trilocal inequality from the set given by Eq.(8).
For example, let us consider the following functions in Eq.(8):

\[f(s,\vec{r})=1\ \text{if}\ s=1\ \text{and}\ r_{1}r_{2}=0;\ =0\ \text{otherwise}\]
\[g(s,\vec{r})=h(s,\vec{r})=(1-s)|r_{1}+r_{2}-1|+s|r_{1}r_{2}-1|\ \text{with}\]
\[\vec{r}=(r_{1},r_{2})\ \text{and}\ s,r_{1},r_{2}\in\{0,1\} \tag{35}\]

Clearly, all these functions (Eq.(35)) are integer-valued. Eq.(8) together with these functional forms therefore gives a trilocal inequality. So all trilocal correlations necessarily satisfy the inequality given by Eqs.(8,35). Its violation thus guarantees the generation of nontrilocal correlations in the triangle network. Now violation of this particular trilocal inequality is observed if:

\[\sqrt{|W_{1}|}+2\sqrt{|W_{2}|}>8 \tag{36}\]
\[\text{with }W_{1}=2+6p_{3}+2p_{3}^{2}-2p_{3}^{3}+8\alpha_{2}^{4}p_{3}(1+p_{3})-8\alpha_{2}^{2}p_{3}(2+p_{3})\text{ and}\]
\[W_{2}=p_{3}(9+\alpha_{2}^{2}(-22-4p_{3})+p_{3}+\alpha_{2}^{4}(14+6p_{3})).\]

The above relation is satisfied for some range of \(p_{3}\) (see Fig.6).

Figure 6: _Shaded region indicates the measurement parameter (\(\alpha_{2}\)) and depolarizing noise parameter (\(p_{3}\)) for which nontrilocal correlations are observed in a triangle network involving identical copies of the noisy state \(\varrho_{dep}\) (Eq.(33))._

### Imperfect Measurements

Detection inefficiency is another potent factor of experimental noise, forming the basis of the detection loophole[43]. Let each of the parties perform a measurement in the orthogonal basis given by Eq.(12) with detection efficiency \(p_{4}\). To be precise, on being measured in the basis (Eq.(12)), the output is detected efficiently with probability \(p_{4}\), whereas no output is obtained with probability \(1-p_{4}\). The corresponding POVM elements of such an imperfect basis measurement are given by \(p_{4}|b_{i}\rangle\langle b_{i}|+\frac{1-p_{4}}{4}\mathbb{1}_{4}\), where \(|b_{1}\rangle,|b_{2}\rangle,|b_{3}\rangle,|b_{4}\rangle\) are the elements of the basis given by Eq.(12). In case identical copies of the pure entangled state (\(|\phi^{+}\rangle\langle\phi^{+}|\)) are used in the network, violation of the trilocal inequality given by Eqs.(8,16) is observed if:

\[\sqrt{|p_{4}^{2}(3\alpha_{2}^{2}p_{4}-\alpha_{2}^{4}(1-p_{4})-p_{4})|}+\]
\[\sqrt{|(1-\alpha_{2}^{2}+4\alpha_{2}^{4})p_{4}^{3}|}>2^{\frac{3}{2}}. \tag{37}\]

Over some restricted range of \(p_{4}\) (see Fig.7), nontrilocal correlations are detected via violation of the trilocal inequality (Eqs.(8,16)).

## VIII Comparison with existing trilocal inequality (Eq.(5))

There exist dissimilarities between a linear trilocal network (Fig.1) and a triangle network (Fig.2). The structural dissimilarity between an open type (linear trilocal network) and a closed form of network (triangle network) is obvious. Apart from that, it may be noted that, while the measurement scenario involved in a triangle network is a fixed input scenario, not all parties in a linear network perform a fixed measurement (subsec.III.2). As already discussed before, under the fixed input assumption, nonclassical network correlations can be simulated which are not attributable to standard Bell nonlocality. To that end, it becomes imperative to compare the trilocal inequalities for a triangle network with the existing trilocal inequality (Eq.(5)) for a linear network[14].

### Detecting Nontrilocality in Triangle Network Involving Bell-CHSH local States

Let us first consider the trilocal linear network.
Here, each of the three independent sources distributes a two-qubit state, the qubits being shared among the four parties \(\mathcal{P}_{1},\mathcal{P}_{2},\mathcal{P}_{3}\) and \(\mathcal{P}_{4}\). Each of the two parties \(\mathcal{P}_{2},\mathcal{P}_{3}\), receiving two qubits, performs a Bell basis measurement. Each of the remaining two parties \(\mathcal{P}_{1},\mathcal{P}_{4}\), receiving a single qubit, performs local projective measurements. In case an arbitrary two-qubit state \(\varrho_{i}\) (Eq.(1)) is generated by source \(\mathcal{S}_{i}\) (\(i{=}1,2,3\)), nontrilocal correlations are detected by violation of Eq.(5) if[22]:

\[\sqrt{\Pi_{i=1}^{3}t_{i11}+\Pi_{i=1}^{3}t_{i22}}>1 \tag{38}\]

where \(t_{i11},t_{i22}\) denote the largest two singular values of the correlation tensor (\(T_{i}\), say) of \(\varrho_{i}\) (\(i{=}1,2,3\)). So in case Eq.(38) does not hold, the correlations turn out to be trilocal as per the trilocal inequality (Eq.(5)). Now, let us consider the case where all three states (\(\varrho_{1},\varrho_{2},\varrho_{3}\)) are Bell-CHSH local[40]:

\[\sqrt{t_{i11}^{2}+t_{i22}^{2}}\leq 1,\ \forall i=1,2,3. \tag{39}\]

Again,

\[\Pi_{i=1}^{3}t_{i11}+\Pi_{i=1}^{3}t_{i22}\leq\sqrt{AB}\]
\[\text{where}\ A=\Pi_{i=1}^{2}t_{i11}^{2}+\Pi_{i=1}^{2}t_{i22}^{2}\]
\[\text{and}\ B=t_{311}^{2}+t_{322}^{2}. \tag{40}\]

Using Eq.(39) for \(i{=}1,2\), we get:

\[\sqrt{\Pi_{i=1}^{2}t_{i11}^{2}+\Pi_{i=1}^{2}t_{i22}^{2}+C}=\sqrt{(t_{111}^{2}+t_{122}^{2})(t_{211}^{2}+t_{222}^{2})}\leq 1,\]
\[\text{where}\ C=t_{111}^{2}t_{222}^{2}+t_{122}^{2}t_{211}^{2}. \tag{41}\]

By Eq.(41), the expression \(\sqrt{\Pi_{i=1}^{2}t_{i11}^{2}+\Pi_{i=1}^{2}t_{i22}^{2}}\) appearing in Eq.(40) is also \(\leq\)1. Also, as Eq.(39) holds for \(i{=}3\), Eq.(40) gives:

\[\Pi_{i=1}^{3}t_{i11}+\Pi_{i=1}^{3}t_{i22}\leq 1. \tag{42}\]

So, in case each state used in the network is Bell-CHSH local, the corresponding network correlations are not detected as nontrilocal by Eq.(5). Consequently, the violation of Eq.(5) relies on Bell-CHSH inequality violation by the states distributed in the network. However, in the triangle network, it is observed in some cases that, for a suitable choice of the integer-valued functions (\(f,g,h\)), violation of a trilocal inequality (Eq.(8)) is possible even if all the sources distribute Bell-CHSH local states. Consider the family of Bell diagonal states[41]:

\[\varrho_{Bell}=\omega_{1}|\psi^{-}\rangle\langle\psi^{-}|+\omega_{2}|\phi^{+}\rangle\langle\phi^{+}|+\omega_{3}|\phi^{-}\rangle\langle\phi^{-}|+\omega_{4}|\psi^{+}\rangle\langle\psi^{+}| \tag{43}\]

with \(\omega_{i}{\in}[0,1]\)\(\forall i{=}1,2,3,4\), \(\sum_{i=1}^{4}\omega_{i}{=}1\) and \(|\phi^{\pm}\rangle{=}\frac{|00\rangle\pm|11\rangle}{\sqrt{2}}\), \(|\psi^{\pm}\rangle{=}\frac{|01\rangle\pm|10\rangle}{\sqrt{2}}\) denote the Bell states. The correlation matrix of this class is given by \(\text{diag}(1-2(\omega_{1}+\omega_{3}),1-2(\omega_{2}+\omega_{3}),1-2(\omega_{1}+\omega_{2}))\). Let us consider a subclass of the family (Eq.(43)) specified by \(\omega_{3}{=}0\). Members from this subclass are Bell-CHSH local if:

\[\sum_{i=1}^{2}(1-2\omega_{i})^{2}\leq 1. \tag{44}\]

As discussed above, nontrilocal correlations cannot be detected in any linear trilocal network involving Bell-CHSH local states. Now, let each of the sources in the triangle network distribute an identical copy of the Bell diagonal state characterized by \(\omega_{3}{=}0\).
Let the parties perform the local measurement in the basis given by Eq.(12).

Figure 7: _Shaded region specifies the ideal measurement parameter (\(\alpha_{2}\)) and imperfection parameter (\(p_{4}\)) for which nontrilocal correlations are detected in a triangle network involving identical copies of the Bell state \(|\phi^{+}\rangle\langle\phi^{+}|\)._

Violation of the inequality (Eqs.(8,16)) occurs if:

\[\sqrt{|(1-2\omega_{2})^{2}-3\alpha_{2}^{2}(1-2\omega_{2})^{2}+\alpha_{2}^{4}(2-6\omega_{2}+4\omega_{2}^{2})|}+\]
\[\sqrt{|(1-2\omega_{2})^{2}(-1-\alpha_{2}^{2}+4\alpha_{2}^{4}+2\omega_{2})|}>1. \tag{45}\]

It is observed that some members of this family (see Fig.8) satisfy both of the above relations (Eqs.(44,45)). Hence, the violation of Eqs.(8,16) does not stem from Bell-CHSH violation by the individual states involved. Consequently, the nontrilocality of the network correlations can be detected using at least one trilocal inequality (Eq.(8)) irrespective of the Bell nonlocality of the states used in the triangle network.

### Higher Noise Tolerance

For a suitable choice of \(f,g,h\) there exist trilocal inequalities (Eq.(8)) which are more tolerant to noise compared to the existing inequality (Eq.(5)) for linear networks. For the sake of discussion, let us consider the particular inequality given by Eqs.(8,16). Let three identical copies of the noisy state (Eq.(31)) be used in a linear trilocal network. The correlation tensor of this family of states is given by \(\mathrm{diag}(-p_{1}p_{2},p_{1}p_{2},p_{2})\). Using Eq.(38), it is clear that nontrilocality cannot be detected in the linear network if:

\[\mathrm{Max}(\sqrt{2p_{1}^{3}p_{2}^{3}},p_{2}^{\frac{3}{2}}\sqrt{1+p_{1}^{3}})\leq 1. \tag{46}\]

But when these identical copies are used in the triangle network, nontrilocality is detected if Eq.(32) is satisfied. For some members of this family of states (Eq.(31)), nontrilocality is detected in the triangle network but not in a linear trilocal network (see Fig.9). Only one example of a trilocal inequality for the triangle network has been discussed here. However, one may find further examples of trilocal inequalities for the triangle network (suitably choosing \(f\), \(g\) and \(h\)) that are more resistant to noise than Eq.(5).

## IX Polygon network scenario

Consider a closed network having \(n\) edges and \(n\) vertices. \(\mathcal{S}_{1},\mathcal{S}_{2},...,\mathcal{S}_{n}\) denote \(n\) independent sources corresponding to the \(n\) sides. The network involves \(n\) parties \(\mathcal{P}_{1},\mathcal{P}_{2},....,\mathcal{P}_{n}\) corresponding to the \(n\) vertices (see Fig.10). Each source \(\mathcal{S}_{i}\) distributes particles to the parties \(\mathcal{P}_{i},\mathcal{P}_{i+1}\,(i{=}1,2,...,n-1)\) and \(\mathcal{S}_{n}\) sends particles to parties \(\mathcal{P}_{n}\) and \(\mathcal{P}_{1}\).

Figure 8: _Shaded region is a subspace of parameter space \((\omega_{1},\omega_{2},\alpha_{2})\). Corresponding Bell diagonal states (for \(\omega_{3}{=}0\)) are Bell-CHSH local (Eq.(44)).
However, nontrilocal correlations are detected (using Eq.(45)) by using identical copies of these states in a triangle network for any value of the measurement parameter \(\alpha_{2}\) lying in this region._

Figure 9: _Shaded region indicates the measurement parameter (\(\alpha_{2}\)) and noise parameters (\(p_{1},p_{2}\)) for which nontrilocal correlations are observed in a triangle network but cannot be detected in a linear trilocal network involving identical copies of the noisy state (Eq.(31))._

Let the local variable \(\lambda_{i}\) characterize \(\mathcal{S}_{i}\,\forall i\). The \(n\)-local (source independence) assumption takes the form:

\[\rho(\lambda_{1},\lambda_{2},...,\lambda_{n})=\Pi_{i=1}^{n}\rho_{i}(\lambda_{i}), \tag{47}\]

where \(\rho_{i}\) corresponds to the normalized probability distribution of \(\lambda_{i}\), \(\forall i\). Let \(x_{i}\) denote the fixed local input of \(\mathcal{P}_{i}\,\forall i\). Let the outcomes of \(x_{i}\) be labeled as \(\overline{\sigma}_{i}{=}(\sigma_{i1},\sigma_{i2})\) with each component \(\sigma_{ij}\) being binary-valued. \(n\)-partite local correlations are of the form:

\[p(\overline{\sigma}_{1},...,\overline{\sigma}_{n})=\int_{\Lambda_{1}}...\int_{\Lambda_{n}}d\lambda_{1}...d\lambda_{n}\rho(\lambda_{1},...,\lambda_{n})R_{n} \tag{48}\]
\[\text{where }R_{n}=\Pi_{i=2}^{n}p(\overline{\sigma}_{i}|\lambda_{i},\lambda_{i-1})\,p(\overline{\sigma}_{1}|\lambda_{1},\lambda_{n}).\]

Any set of \(n\)-partite correlations generated in the network is \(n\)-local if it is local, i.e., satisfies Eq.(48), and also satisfies the \(n\)-local constraint (Eq.(47)). Correlations inexplicable in this form (Eqs.(47,48)) are said to be _non \(n\)-local_. The set of \(n\)-local inequalities characterizing such correlations is given by the following theorem.

_Theorem.2:_ In a polygon network, an arbitrary set of \(n\)-local correlations necessarily satisfies:

\[\sqrt{|I_{1,n}|}+\sqrt{|I_{2,n}|}\leq 1, \tag{49}\]

where \(\forall j=1,2\) and for every fixed \(t{=}1,...,n\),

\[I_{j,n}=\frac{1}{2^{n-1}}\sum_{\mathcal{C}_{1}}(-1)^{j\sum_{k\in\mathcal{C}_{2}}i_{k}}\langle\Pi_{s\in\mathcal{C}_{2}}C_{s}^{i_{s}}C_{t}^{j-1}\rangle,\]
\[\langle\Pi_{s\in\mathcal{C}_{2}}C_{s}^{i_{s}}C_{t}^{j-1}\rangle=\sum_{\mathcal{C}_{3}}(-1)^{\sum_{s\in\mathcal{C}_{2}}f_{s}(i_{s},\overline{\sigma}_{s})+f_{t}(j-1,\overline{\sigma}_{t})}p(\overline{\sigma}_{1},...,\overline{\sigma}_{n})\]

with \(\mathcal{C}_{1}=\{i_{k}|k\neq t\}\), \(\mathcal{C}_{2}=\{k=1,2,...,n|k\neq t\}\), \(i_{k}\in\{0,1\}\) and \(\mathcal{C}_{3}=\{\sigma_{11},\sigma_{12},\sigma_{21},\sigma_{22},...,\sigma_{n1},\sigma_{n2}\}\). In the above expression, \(\forall i\), \(f_{i}(.,.)\) denotes an arbitrary integer-valued function.

_Proof:_ The proof is based on the same procedure as that used for proving the previous theorem. The roles of the parties are interchangeable in the correlator terms appearing in Eq.(49).

The above theorem characterizes any \(n\)-local correlation arising in an \(n\)-sided closed network. Violation of at least one of the inequalities specified by Eq.(49) detects non \(n\)-locality in the network. However, such a detection criterion being only sufficient, there may exist non \(n\)-local correlations satisfying Eq.(49) for all possible forms of the integer-valued functions. For a demonstration of quantum violation of Eq.(49), let us consider that each of the \(n\) sources generates a two-qubit state. On receiving the qubits, let each of the parties measure in the basis given by Eq.(12).
It is conjectured that for a suitable choice of the functions \(f_{1},...,f_{n}\), Eq.(49) will be violated. An instance of violation is provided for \(n{=}4\). Let each of \(\mathcal{S}_{1},\mathcal{S}_{2},\mathcal{S}_{3},\mathcal{S}_{4}\) generate a Bell state (Eq.(11)) in a square network. Consider a particular 4-local inequality given by Eq.(49) for \(t{=}2\) and integer-valued functions specified as follows:

\[f_{1}(.,.)=f_{4}(.,.)=h(.,.)\text{ and }f_{2}(.,.)=f_{3}(.,.)=f(.,.) \tag{50}\]

where the functions \(f,h\) are given by Eq.(13). The correlations violate the 4-local inequality (Eqs.(49,50)) if:

\[\sqrt{|2-5\alpha_{2}^{2}+8\alpha_{2}^{4}|}+\sqrt{|\alpha_{2}^{2}(3-2\alpha_{2}^{2})|}>2^{\frac{3}{2}}. \tag{51}\]

Violation is observed over some range of \(\alpha_{2}\) (see Fig.11).

Figure 10: Schematic representation of polygon network with independent sources.

Figure 11: Plotting range of measurement parameter \(\alpha_{2}\) for which the \(4\)-local inequality (Eqs.(49,50)) is violated, i.e., the relation given by Eq.(51) is satisfied.

## X Discussions

Measurement independence is one of the potent factors in manifesting nonlocality in the standard Bell scenario[4]. However, there exist examples[1] of nonlocality (nontrilocality) in a network scenario where each of the parties performs a fixed measurement. The present work studies such correlations, introducing trilocal inequalities for the triangle network, followed by an extension to polygon networks. More specifically, a complete set of Bell-type inequalities in a triangle network is provided under the source independence assumption. Violation of at least one such inequality is sufficient to detect nontrilocality in the network. As already discussed, these inequalities, unlike the existing trilocal inequality (compatible with linear trilocal networks[14; 22]), can detect nontrilocality in a network using Bell-CHSH local states. To some extent, such an observation is similar to the demonstration of tripartite entanglement of a state even when bipartite entanglement of none of the reduced states can be detected. It will be interesting to further exploit such an analogy between multipartite entanglement detection and non \(n\)-locality detection in closed quantum networks. For the analysis of quantum violation, a few specific local measurement bases (Eqs.(12,19)) have been considered. Consideration of more general measurement contexts may contribute toward a better characterization of network correlations. The entire analysis is confined to closed networks where each source distributes particles to only two parties. It will be interesting to study networks where each source generates an \(m\)-partite (\(m\)\(\geq\)3) state. Framing Bell-type inequalities for such a scenario is important from the perspective of developing an experimentally feasible genuine entanglement detection scheme. To that end, it may be noted that, the correlations being manifested in a fixed-input scenario, the inequalities are devoid of the freedom-of-choice loophole[44]. However, they may suffer from several other loopholes in experimental scenarios. In this context, an analysis of the noise tolerance of some of these detection criteria is provided for some specific cases. A more general treatment of the same can be considered as a direction of potential future research.

## Acknowledgement

I am thankful to the anonymous referees of the Physical Review A journal for thoroughly checking the manuscript and providing valuable suggestions which helped in improving the work.
Also, I would like to thank my brother Mr. Puranjan Mukhopadhyay for helping me to improve my writing style and grammar.

## Appendix.A

_Proof of the theorem:_ By the assumption of the theorem, the correlations are trilocal (Eq.(7)). Let us define the following terms:

\[\langle C_{1}^{i}\rangle_{\lambda_{1},\lambda_{3}}=\sum_{\sigma_{11},\sigma_{12}}(-1)^{f(i,\vec{\sigma}_{1})}p_{\lambda_{1},\lambda_{3}}(\vec{\sigma}_{1})\]
\[\langle C_{2}^{i}\rangle_{\lambda_{1},\lambda_{2}}=\sum_{\sigma_{21},\sigma_{22}}(-1)^{g(i,\vec{\sigma}_{2})}p_{\lambda_{1},\lambda_{2}}(\vec{\sigma}_{2})\]
\[\langle C_{3}^{i}\rangle_{\lambda_{2},\lambda_{3}}=\sum_{\sigma_{31},\sigma_{32}}(-1)^{h(i,\vec{\sigma}_{3})}p_{\lambda_{2},\lambda_{3}}(\vec{\sigma}_{3})\ \text{with}\ i=0,1 \tag{52}\]

The functions \(f,g,h\) are used in framing the terms \(\langle C_{1}^{i}C_{2}^{j-1}C_{3}^{k}\rangle\) in Eq.(9). Each of the three functions appears as an exponent of \(-1\). So, for any even value of these functions \(+1\) is obtained, whereas for any odd value \(-1\) is obtained. Hence each probability term appearing in the summation in the expression of any of the marginal expectations (Eq.(52)) has either \(+1\) or \(-1\) as its coefficient. Again, adding up all the probability terms appearing in the expression of any of these terms gives 1. Combining these two facts, for any fixed value of the hidden variables \(\lambda_{1},\lambda_{2},\lambda_{3}\), the absolute value of any of these terms (Eq.(52)) is at most 1. Hence,

\[|\langle C_{1}^{i}\rangle_{\lambda_{1},\lambda_{3}}|,|\langle C_{2}^{i}\rangle_{\lambda_{1},\lambda_{2}}|,|\langle C_{3}^{i}\rangle_{\lambda_{2},\lambda_{3}}|\leq 1,\,i=0,1 \tag{53}\]

Now consider any \((\lambda_{1},\lambda_{2},\lambda_{3})\)\(\in\)\(\Lambda_{1}\otimes\Lambda_{2}\otimes\Lambda_{3}\).
\(\forall\,i,j,k=0,1\), we have,

\[\begin{split}\langle C_{1}^{i}C_{2}^{j}C_{3}^{k}\rangle_{\lambda_{1},\lambda_{2},\lambda_{3}}&=\sum_{\mathcal{C}}(-1)^{f(i,\vec{\sigma}_{1})+g(j,\vec{\sigma}_{2})+h(k,\vec{\sigma}_{3})}p_{\lambda_{1},\lambda_{2},\lambda_{3}}(\vec{\sigma}_{1},\vec{\sigma}_{2},\vec{\sigma}_{3}),\ \text{with}\ \mathcal{C}=\{\sigma_{11},\sigma_{12},\sigma_{21},\sigma_{22},\sigma_{31},\sigma_{32}\}\\ &=\sum_{\mathcal{C}}(-1)^{f(i,\vec{\sigma}_{1})+g(j,\vec{\sigma}_{2})+h(k,\vec{\sigma}_{3})}p_{\lambda_{1},\lambda_{3}}(\vec{\sigma}_{1})\,p_{\lambda_{1},\lambda_{2}}(\vec{\sigma}_{2})\,p_{\lambda_{2},\lambda_{3}}(\vec{\sigma}_{3})\\ &=\Big(\sum_{\sigma_{11},\sigma_{12}}(-1)^{f(i,\vec{\sigma}_{1})}p_{\lambda_{1},\lambda_{3}}(\vec{\sigma}_{1})\Big)\Big(\sum_{\sigma_{21},\sigma_{22}}(-1)^{g(j,\vec{\sigma}_{2})}p_{\lambda_{1},\lambda_{2}}(\vec{\sigma}_{2})\Big)\Big(\sum_{\sigma_{31},\sigma_{32}}(-1)^{h(k,\vec{\sigma}_{3})}p_{\lambda_{2},\lambda_{3}}(\vec{\sigma}_{3})\Big)\\ &=\langle C_{1}^{i}\rangle_{\lambda_{1},\lambda_{3}}\langle C_{2}^{j}\rangle_{\lambda_{1},\lambda_{2}}\langle C_{3}^{k}\rangle_{\lambda_{2},\lambda_{3}}\end{split} \tag{54}\]

Hence,

\[\begin{split}(I_{1})_{\lambda_{1},\lambda_{2},\lambda_{3}}&=\langle C_{1}^{0}-C_{1}^{1}\rangle_{\lambda_{1},\lambda_{3}}\langle C_{2}^{0}\rangle_{\lambda_{1},\lambda_{2}}\langle C_{3}^{0}-C_{3}^{1}\rangle_{\lambda_{2},\lambda_{3}}\\ (I_{2})_{\lambda_{1},\lambda_{2},\lambda_{3}}&=\langle C_{1}^{0}+C_{1}^{1}\rangle_{\lambda_{1},\lambda_{3}}\langle C_{2}^{1}\rangle_{\lambda_{1},\lambda_{2}}\langle C_{3}^{0}+C_{3}^{1}\rangle_{\lambda_{2},\lambda_{3}}\end{split} \tag{55}\]

Now,

\[I_{1}=\frac{1}{4}\int_{\Lambda_{1}}\int_{\Lambda_{2}}\int_{\Lambda_{3}}\Pi_{i=1}^{3}\rho_{i}(\lambda_{i})d\lambda_{i}\,(I_{1})_{\lambda_{1},\lambda_{2},\lambda_{3}}\ \text{(by the trilocality constraint Eq.(6))}, \tag{56}\]

and similarly for \(I_{2}\). Since \(|\langle C_{2}^{j}\rangle_{\lambda_{1},\lambda_{2}}|\leq 1\) (Eq.(53)),

\[|I_{1}|\leq\frac{1}{4}\int_{\Lambda_{1}}\int_{\Lambda_{2}}\int_{\Lambda_{3}}\Pi_{i=1}^{3}\rho_{i}(\lambda_{i})d\lambda_{i}\,|\langle C_{1}^{0}-C_{1}^{1}\rangle_{\lambda_{1},\lambda_{3}}|\,|\langle C_{3}^{0}-C_{3}^{1}\rangle_{\lambda_{2},\lambda_{3}}|, \tag{57}\]

and likewise \(|I_{2}|\leq\frac{1}{4}\int\Pi_{i=1}^{3}\rho_{i}(\lambda_{i})d\lambda_{i}\,|\langle C_{1}^{0}+C_{1}^{1}\rangle_{\lambda_{1},\lambda_{3}}|\,|\langle C_{3}^{0}+C_{3}^{1}\rangle_{\lambda_{2},\lambda_{3}}|\). Proceeding exactly as in Eqs.(26-28), i.e., applying Eq.(27) together with the bound \(|\langle C_{j}^{0}\rangle+\langle C_{j}^{1}\rangle|+|\langle C_{j}^{0}\rangle-\langle C_{j}^{1}\rangle|=2\max\{|\langle C_{j}^{0}\rangle|,|\langle C_{j}^{1}\rangle|\}\leq 2\), one obtains

\[\sqrt{|I_{1}|}+\sqrt{|I_{2}|}\leq 1,\]

which completes the proof.

## Appendix.B

Table 1 lists, for each arrangement, the expression (\(\mathcal{T}\), say) that must exceed the trilocal bound for a violation of the inequality (Eqs.(8,16)) when the triangle network involves two pure entangled states (Eq.(29)) and one pure product state (Eq.(21)). Depending on which of the two sources generate entangled states, the expressions differ as follows:

\begin{tabular}{|c|c|} \hline States & Expression(\(\mathcal{T}\)) \\ \hline \(\varrho^{1}_{ent},\varrho^{2}_{ent}\) & \(\frac{1}{\sqrt{2}}(\sqrt{|(\tau^{2}_{11})\tau^{2}_{22}(1-\alpha^{2}_{2}(1+u_{53}))(1-\alpha^{2}_{2}(1-u_{63})+u_{63})|}\) \\ \(\varrho^{(5,6)}_{prod}\) & \(+\sqrt{|u_{53}((-1+3\alpha^{2}_{2}-2\alpha^{4}_{2})\tau^{2}_{21}\tau^{2}_{22}(-1+u_{63})+(1-2\tau^{2}_{21}+2\alpha^{2}_{2}(-1+\tau^{2}_{21}))\tau^{2}_{12}(-1+u_{63}+\alpha^{2}_{2}(1+u_{63})))|})\) \\ \hline \(\varrho^{2}_{ent},\varrho^{3}_{ent}\) & \(\frac{1}{\sqrt{2}}(\sqrt{|\tau^{2}_{22}(-\alpha^{4}_{2}\tau^{2}_{23}(-1+u_{1,3})-\tau^{2}_{12}u_{1,3}+\alpha^{2}_{2}(\tau^{2}_{23}u_{1,3}+\tau^{2}_{12}(1+u_{1,3})))(1+u_{2,3})}+\) \\ \(\varrho^{(1,2)}_{prod}\) & \(\sqrt{|u_{1,3}(\tau^{2}_{23}(2-\tau^{2}_{22}(1+u_{2,3}))+2\alpha^{4}_{2}(\tau^{2}_{22}\tau^{2}_{23}(1-u_{2,3})-\tau^{2}_{12}\tau^{2}_{13}(1+u_{2,3}))-\alpha^{2}_{2}(2(-\tau^{2}_{12})\tau^{2}_{13}+\tau^{2}_{23}(2+\tau^{2}_{22}(1-5u_{2,3})+2u_{2,3})))|}\) \\ \hline \(\varrho^{1}_{ent},\varrho^{3}_{ent}\) & \(\frac{1}{\sqrt{2}}(\sqrt{|(-\tau^{2}_{11})(2\tau^{2}_{13}u_{3,3}+2\alpha^{4}_{2}\tau^{2}_{13}(1+u_{3,3})-\alpha^{2}_{2}(\tau^{2}_{23}(-1+u_{3,3})+\tau^{2}_{13}(1+5u_{3,3})))(-1+u_{4,3})|}\) \\ \(\varrho^{(3,4)}_{prod}\) & \(+\sqrt{|(2\tau^{2}_{23}u_{3,3}+\alpha^{2}_{2}(-\tau^{2}_{23}(-1+u_{3,3})+\tau^{2}_{13}(1+u_{3,3})))(1+\tau^{2}_{21}(-1-3u_{4,3})+u_{4,3}+\alpha^{2}_{2}(-2+(-2+4\tau^{2}_{21})u_{4,3}))|})\) \\ \hline \end{tabular}
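As a quick numerical cross-check of the noisy state preparation used in Sec.VII (Eqs.(30,31)) — added here for illustration and not part of the original analysis — the following sketch implements the two noisy gates and verifies that the preparation returns the Bell state \(|\phi^{-}\rangle\langle\phi^{-}|\) in the noiseless limit \(p_{1}=p_{2}=1\):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)        # Hadamard
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]])       # first qubit as control
I2, I4 = np.eye(2), np.eye(4)

def noisy_prep(p1, p2, rho):
    """Apply the noisy Hadamard (Eq.(30)) and noisy C-NOT (Eq.(31)) to a two-qubit rho."""
    rho2 = np.trace(rho.reshape(2, 2, 2, 2), axis1=0, axis2=2)          # Tr_1(rho)
    HI = np.kron(H, I2)
    rho_p = p1 * HI @ rho @ HI.conj().T + (1 - p1) / 2 * np.kron(I2, rho2)
    return p2 * CNOT @ rho_p @ CNOT.conj().T + (1 - p2) / 4 * I4

ket10 = np.zeros(4); ket10[2] = 1.0                  # the initial state |10>
phi_minus = np.array([1, 0, 0, -1]) / np.sqrt(2)     # |phi^-> = (|00> - |11>)/sqrt(2)
# In the noiseless limit the preparation returns the Bell state |phi^-><phi^-|:
print(np.allclose(noisy_prep(1.0, 1.0, np.outer(ket10, ket10)),
                  np.outer(phi_minus, phi_minus)))   # True
```

The same function can be used to tabulate \(\varrho^{\prime\prime}\) over a grid of \((p_{1},p_{2})\) when scanning the violation region of Eq.(32).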
2306.04905
ViG-UNet: Vision Graph Neural Networks for Medical Image Segmentation
Deep neural networks have been widely used in medical image analysis and medical image segmentation is one of the most important tasks. U-shaped neural networks with encoder-decoder are prevailing and have succeeded greatly in various segmentation tasks. While CNNs treat an image as a grid of pixels in Euclidean space and Transformers recognize an image as a sequence of patches, graph-based representation is more generalized and can construct connections for each part of an image. In this paper, we propose a novel ViG-UNet, a graph neural network-based U-shaped architecture with the encoder, the decoder, the bottleneck, and skip connections. The downsampling and upsampling modules are also carefully designed. The experimental results on ISIC 2016, ISIC 2017 and Kvasir-SEG datasets demonstrate that our proposed architecture outperforms most existing classic and state-of-the-art U-shaped networks.
Juntao Jiang, Xiyu Chen, Guanzhong Tian, Yong Liu
2023-06-08T03:17:00Z
http://arxiv.org/abs/2306.04905v1
# ViG-UNet: Vision Graph Neural Networks for Medical Image Segmentation

###### Abstract

Deep neural networks have been widely used in medical image analysis and medical image segmentation is one of the most important tasks. U-shaped neural networks with encoder-decoder are prevailing and have succeeded greatly in various segmentation tasks. While CNNs treat an image as a grid of pixels in Euclidean space and Transformers recognize an image as a sequence of patches, graph-based representation is more generalized and can construct connections for each part of an image. In this paper, we propose a novel ViG-UNet, a graph neural network-based U-shaped architecture with the encoder, the decoder, the bottleneck, and skip connections. The downsampling and upsampling modules are also carefully designed. The experimental results on ISIC 2016, ISIC 2017 and Kvasir-SEG datasets demonstrate that our proposed architecture outperforms most existing classic and state-of-the-art U-shaped networks.

Juntao Jiang\({}^{1}\), Xiyu Chen\({}^{2}\), Guanzhong Tian\({}^{3,*}\) and Yong Liu\({}^{1,*}\)

1. College of Control Science and Engineering, Zhejiang University, Hangzhou, China 2. Polytechnic Institute, Zhejiang University, Hangzhou, China 3. Ningbo Innovation Center, Zhejiang University, Ningbo, China

Medical image segmentation, ViG-UNet, Graph neural networks, Encoder-decoder

\({}^{*}\) Corresponding authors

## 1 Introduction

Recent years have witnessed the rise of deep learning and its broader applications in computer vision tasks. As one of the most active topics of computer vision applied in medical scenarios, image segmentation, identifying the pixels of organs or lesions from the background, plays a crucial role in computer-aided diagnosis and treatment, improving efficiency and accuracy. Currently, medical image segmentation methods based on deep learning mainly use fully convolutional neural networks (FCN) with a U-shaped encoder-decoder architecture such as U-Net [1] and its variants. Composed of a symmetric encoder-decoder with skip connections, U-Net uses convolutional layers and downsampling modules for feature extraction, while convolutional layers and upsampling modules perform pixel-level semantic classification. The skip connection operation can maintain spatial information from a high-resolution feature, which may be lost in downsampling. Following this work and based on a fully convolutional structure, many U-Net variants, such as Attention-UNet [2] and UNet++ [3], have been proposed and achieved great success. Recently, as Transformer-based methods like ViT [4] achieved good results in image recognition tasks, thanks to their capability of enhancing global understanding of images by extracting information from the inputs and their interrelations, Transformer-based medical image segmentation models such as Trans-UNet [5] and Swin-UNet [6] have also been proposed and showed competitive performance. While CNNs treat an image as a grid of pixels in Euclidean space and Transformers recognize an image as a sequence of patches, graph-based representation can be more generalized and reflect the relationship of each part in an image. Since the graph neural network (GNN) [7] was first proposed, techniques for processing graphs have been studied extensively. A series of spatial-based GCNs [8, 9] and spectral-based GCNs [10, 11, 12, 13] have been widely proposed and applied.
In recent work, Han et al. [14] proposed a Vision GNN (ViG), which splits the image into many blocks regarded as nodes and constructs a graph representation by connecting the nearest neighbors, then uses GNNs to process it. It contains Grapher modules with graph convolution to aggregate and update graph information and Feed-forward Network (FFN) modules with fully connected networks for node feature transformation, which performed well in image recognition tasks. ViG-S achieved 0.804 Top-1 accuracy and ViG-B achieved 0.823 on ImageNet [15]. Motivated by the success of the ViG model, we propose a ViG-UNet to utilize the powerful functions of ViG for 2D medical image segmentation in this work. The graph-based representation can also be effective in segmentation tasks. ViG-UNet is a GNN-based U-shaped architecture consisting of the encoder, the bottleneck and the decoder, with skip connections. We conduct comparison experiments on the ISIC 2016 [16], ISIC 2017 [17] and Kvasir-SEG [18] datasets. The results show that the proposed model outperforms most existing classic and state-of-the-art methods. The code will be released at [https://github.com/juntaoJianggavin/ViG-UNet](https://github.com/juntaoJianggavin/ViG-UNet).

## 2 Methods

### Architecture Overview

ViG-UNet is a U-shaped model with a symmetrical architecture, which can be seen in Figure 1. It consists of the encoder, the bottleneck, the decoder and skip connections. The basic modules of ViG-UNet are the stem block, Grapher modules, Feed-forward Networks (FFNs), and downsampling and upsampling modules. The detailed settings of ViG-UNet can be seen in Table 1, where \(D\) means the feature dimension, \(E\) the number of convolutional layers in FFNs, \(K\) the number of neighbors in GCNs, and \(H\times W\) the output image size.

### Stem Block, Upsampling, Downsampling and Skip Connections

In the stem block, two convolutional layers are used with stride 1 and stride 2, respectively. The output features have height and width equal to \(\frac{H}{2}\) and \(\frac{W}{2}\), where \(H\), \(W\) are the original height and width of the input image. A position embedding is then added. We use a convolutional layer with stride 2 for the downsampling operation, and bilinear upsampling with scale factor 2 followed by a convolutional layer for the upsampling operation. The output of each FFN in the encoder is added to the output of the corresponding FFN in the decoder.

### Grapher Module

Vision GNN first builds an image's graph structure by dividing it into \(N\) patches, converting them into feature vectors, and then recognizing them as a set of nodes \(\mathcal{V}=\{v_{1},v_{2},\cdots,v_{N}\}\). A \(K\) nearest neighbors method is used to find the \(K\) nearest neighbors \(\mathcal{N}\left(v_{i}\right)\) for each node \(v_{i}\). An edge \(e_{ji}\) is added from \(v_{j}\) to \(v_{i}\) for all \(v_{j}\in\mathcal{N}\left(v_{i}\right)\). In this way, a graph representation of an image \(\mathcal{G}=\left(\mathcal{V},\mathcal{E}\right)\) is obtained, where \(e_{ji}\in\mathcal{E}\). For the constructed graph representation \(\mathcal{G}=G(X)\) and the input feature \(x_{i}\), the aggregation operation calculates the representation of a node by aggregating the features of neighboring nodes. Then the update operation merges the aggregated feature.
The updated feature \(\mathbf{x}_{i}^{\prime}\) can be represented as:

\[\mathbf{x}_{i}^{\prime}=h\left(\mathbf{x}_{i},g\left(\mathbf{x}_{i},\mathcal{N}\left(\mathbf{x}_{i}\right);W_{aggregate}\right);W_{\text{update}}\right), \tag{1}\]

where \(W_{aggregate}\) and \(W_{update}\) are the learnable weights of the aggregation and update operations,

\[g(\cdot)=\mathbf{x}_{i}^{\prime\prime}=\left[\mathbf{x}_{i},\max\left(\left\{\mathbf{x}_{j}-\mathbf{x}_{i}\mid j\in\mathcal{N}\left(\mathbf{x}_{i}\right)\right\}\right)\right], \tag{2}\]

and

\[h(\cdot)=\mathbf{x}_{i}^{\prime}=\mathbf{x}_{i}^{\prime\prime}W_{\text{update}}+b_{h}, \tag{3}\]

where \(b_{h}\) is the bias. Following the design of the original ViG networks, the \(g(\cdot)\) operation uses the max-relative graph convolution [19].

\begin{table} \begin{tabular}{c|c|c} \hline Module & Output size & Architecture \\ \hline Stem & \(\frac{H}{2}\times\frac{W}{2}\) & Conv\(\times\)2 \\ Grapher + FFN & \(\frac{H}{2}\times\frac{W}{2}\) & \(\left[\begin{array}{c}D=32\\ E=2\\ K=9\end{array}\right]\) \\ Downsampling & \(\frac{H}{4}\times\frac{W}{4}\) & Conv \\ Grapher + FFN & \(\frac{H}{4}\times\frac{W}{4}\) & \(\left[\begin{array}{c}D=64\\ E=2\\ K=9\end{array}\right]\) \\ Downsampling & \(\frac{H}{8}\times\frac{W}{8}\) & Conv \\ Grapher + FFN & \(\frac{H}{8}\times\frac{W}{8}\) & \(\left[\begin{array}{c}D=128\\ E=2\\ K=9\end{array}\right]\) \\ Downsampling & \(\frac{H}{16}\times\frac{W}{16}\) & Conv \\ Grapher + FFN & \(\frac{H}{16}\times\frac{W}{16}\) & \(\left[\begin{array}{c}D=256\\ E=2\\ K=9\end{array}\right]\) \\ Downsampling & \(\frac{H}{32}\times\frac{W}{32}\) & Conv \\ Grapher \(\times\)2 & \(\frac{H}{32}\times\frac{W}{32}\) & \(\left[\begin{array}{c}D=512\\ K=9\end{array}\right]\times 2\) \\ Upsampling & \(\frac{H}{16}\times\frac{W}{16}\) & bilinear + Conv \\ Grapher + FFN & \(\frac{H}{16}\times\frac{W}{16}\) & \(\left[\begin{array}{c}D=256\\ E=2\\ K=9\end{array}\right]\) \\ Upsampling & \(\frac{H}{8}\times\frac{W}{8}\) & bilinear + Conv \\ Grapher + FFN & \(\frac{H}{8}\times\frac{W}{8}\) & \(\left[\begin{array}{c}D=128\\ E=2\\ K=9\end{array}\right]\) \\ Upsampling & \(\frac{H}{4}\times\frac{W}{4}\) & bilinear + Conv \\ Grapher + FFN & \(\frac{H}{4}\times\frac{W}{4}\) & \(\left[\begin{array}{c}D=64\\ E=2\\ K=9\end{array}\right]\) \\ Upsampling & \(\frac{H}{2}\times\frac{W}{2}\) & bilinear + Conv \\ Grapher + FFN & \(\frac{H}{2}\times\frac{W}{2}\) & \(\left[\begin{array}{c}D=32\\ E=2\\ K=9\end{array}\right]\) \\ Final Layer & \(H\times W\) & bilinear + Conv \\ \hline \end{tabular} \end{table} Table 1: Detailed settings of ViG-UNet

The aggregated feature \(\mathbf{x}_{i}^{\prime\prime}\) is split into \(h\) heads, and each head is updated with different weights.
Then the updated feature \(\mathbf{x}_{i}^{\prime}\) can be obtained by concatenating all the heads:

\[\mathbf{x}_{i}^{\prime}=[\mathbf{x}_{\text{head}1}^{\prime\prime}W_{\text{update}}^{1}+b_{h1},\mathbf{x}_{\text{head}2}^{\prime\prime}W_{\text{update}}^{2}+b_{h2},\mathbf{x}_{\text{head}3}^{\prime\prime}W_{\text{update}}^{3}+b_{h3},\cdots,\mathbf{x}_{\text{head}h}^{\prime\prime}W_{\text{update}}^{h}+b_{hh}] \tag{4}\]

where \(\mathbf{x}_{\text{head}1}^{\prime\prime},\mathbf{x}_{\text{head}2}^{\prime\prime},\cdots,\mathbf{x}_{\text{head}h}^{\prime\prime}\) represent the heads split from \(\mathbf{x}_{i}^{\prime\prime}\), \(W_{\text{update}}^{1},W_{\text{update}}^{2},\cdots,W_{\text{update}}^{h}\) represent different weights and \(b_{h1},b_{h2},\cdots,b_{hh}\) represent different biases. For the input feature \(X\), the output feature \(Y\) after a Grapher module can be represented as:

\[X_{1}=XW_{in}+b_{in}, \tag{5}\]

\[Y=\mathrm{DropPath}(\mathrm{GELU}(\mathrm{GraphConv}(X_{1}))W_{\text{out}}+b_{out})+X, \tag{6}\]

where \(Y\) has the same size as \(X\), and \(W_{\text{in}}\) and \(W_{\text{out}}\) are the weights. The activation function used is \(\mathrm{GELU}\) [20]. \(b_{in}\) and \(b_{out}\) are biases. In the implementation, each \(InputW+b\) term is realized by a convolutional layer followed by a batch normalization operation. \(\mathrm{GraphConv}\) denotes the graph-level aggregation and update processing discussed above. The Grapher module has a shortcut structure, and the \(\mathrm{DropPath}\) [21] operation is used.

### Feed-forward Network

Feed-forward Networks (FFNs) are used to improve the feature transformation capacity and to relieve the over-smoothing phenomenon after the Grapher module. The FFN can be represented as

\[Z=\mathrm{DropPath}(\mathrm{GELU}\left(YW_{1}+b_{1}\right)W_{2}+b_{2})+Y, \tag{7}\]

where \(W_{1}\) and \(W_{2}\) are weights, and \(b_{1}\) and \(b_{2}\) are biases. In the implementation, each feed-forward operation \(InputW+b\) is realized by a convolutional layer followed by a batch normalization operation, and the \(\mathrm{DropPath}\) operation is used. The workflow of the Grapher and FFN modules is shown in Figure 2.

Figure 1: The architecture of ViG-UNet: the basic modules are the stem block for visual embedding, Grapher modules, Feed-forward Network (FFN) modules, downsampling modules in the encoder and upsampling modules in the decoder.

Figure 2: The workflow of Grapher and FFN modules: graph processing and feature transformation are applied.
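The pieces above can be made concrete in a few lines. The following is a minimal PyTorch sketch — not the authors' released code — of one Grapher + FFN pair acting on a flattened set of patch nodes; it substitutes plain linear layers for the 1\(\times\)1 convolution + batch-normalization pairs described in the text, omits DropPath and the multi-head update for brevity, and its FFN hidden width of 2\(\times\)dim is an illustrative choice:

```python
import torch
import torch.nn as nn

def knn_graph(x, k):
    """x: (N, D) node features; returns (N, k) indices of the k nearest neighbours."""
    dist = torch.cdist(x, x)                               # pairwise Euclidean distances
    return dist.topk(k + 1, largest=False).indices[:, 1:]  # drop the self-match

class MaxRelativeGraphConv(nn.Module):
    """Max-relative graph convolution (Eqs.(2),(3)): concat[x_i, max_j(x_j - x_i)], then update."""
    def __init__(self, dim):
        super().__init__()
        self.update = nn.Linear(2 * dim, dim)              # plays the role of W_update and b_h

    def forward(self, x, idx):
        neighbours = x[idx]                                # (N, k, D) neighbour features
        rel = (neighbours - x.unsqueeze(1)).max(dim=1).values  # aggregation g(.)
        return self.update(torch.cat([x, rel], dim=-1))    # update h(.)

class GrapherFFN(nn.Module):
    """One Grapher + FFN pair (Eqs.(5)-(7)), residual shortcuts included."""
    def __init__(self, dim, k=9):                          # K = 9 as in Table 1
        super().__init__()
        self.k = k
        self.fc_in = nn.Linear(dim, dim)                   # X W_in + b_in, Eq.(5)
        self.gconv = MaxRelativeGraphConv(dim)
        self.fc_out = nn.Linear(dim, dim)                  # W_out, b_out in Eq.(6)
        self.act = nn.GELU()
        self.ffn = nn.Sequential(                          # Eq.(7): two layers, as E = 2
            nn.Linear(dim, 2 * dim), nn.GELU(), nn.Linear(2 * dim, dim))

    def forward(self, x):
        idx = knn_graph(x, self.k)                         # rebuild the KNN graph from features
        y = self.fc_out(self.act(self.gconv(self.fc_in(x), idx))) + x  # Grapher with shortcut
        return self.ffn(y) + y                             # FFN with shortcut

# Toy usage: 256 patch nodes with 32-dim features (matching the first encoder stage of Table 1).
x = torch.randn(256, 32)
print(GrapherFFN(32)(x).shape)  # torch.Size([256, 32])
```

In the full network, such pairs are stacked at the stages listed in Table 1, with strided-convolution downsampling and bilinear-plus-convolution upsampling in between, the graph being rebuilt from the current node features at each stage.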
## 3 Experiments

### Datasets

**ISIC 2016** is a dataset of dermoscopic images of skin lesions. We used the dataset of the lesion segmentation task in this paper. There are 900 pairs of images and corresponding masks in the training set. In the testing set, there are 379 pairs. **ISIC 2017** is a dataset of dermoscopic images of skin lesions. We used the dataset of the lesion segmentation task, which contains images and corresponding masks. There are 2000 pairs in the training set, 150 in the validation set, and 600 in the testing set. The **Kvasir-SEG** dataset contains 1000 pairs of polyp images and masks. We split the dataset into training and testing sets with a ratio of 0.2 with a random state of 41.

### Implementation Details

The experiments are all done on a PG500-216 (V-100) with 32 GB memory. The training and validation sets of ISIC 2016 and Kvasir-SEG are split with a ratio of 0.2 with the random state 41. The total number of training epochs is 200 and the batch size is 4. The input images are all resized to 512\(\times\)512. The optimizer used is ADAM [10]. The initial learning rate is 0.0001 and a CosineAnnealingLR [11] scheduler is used. The minimum learning rate is 0.00001. Only rotation by 90 degrees clockwise a random number of times, flipping and normalization are used for augmentation. The evaluation metric in validation is the \(IoU\) of the lesions. A mixed loss combining binary cross entropy (BCE) loss and dice loss [1] is used in the training process:

\[\mathcal{L}=0.5BCE(\hat{y},y)+Dice(\hat{y},y)\]

We implemented ViG-UNet and six other U-Net variants for comparison experiments. The pre-trained Swin-T model with \(224\times 224\) input size on ImageNet 1k is used for Swin-UNet, and the pre-trained ViT-B/16 on ImageNet 21k is used for Trans-UNet.

### Results

The performances of different methods are shown in Table 2 and Table 3 with the metrics of \(IoU\) and \(Dice\). The example segmentation results of different methods are displayed in Figure 3. From these experiments, we can see that our approach performs best on all three datasets, and we can expect that the performance may be further improved with ViG models pre-trained on ImageNet. The low performance of Swin-UNet is surprising, but [1] also reports its low performance on small datasets. In conclusion, our method shows competitive performance compared to classical and state-of-the-art techniques. We also calculate the model complexity using the fvcore Python package with a (1, 3, 512, 512) input. Admittedly, our model is larger than the others and needs more computational resources.

\begin{table} \begin{tabular}{c c c c} \hline Methods & ISIC 2016 & ISIC 2017 & Kvasir SEG \\ \hline UNet & 0.8984 & 0.7708 & 0.8023 \\ Attention-UNet & 0.9058 & 0.7739 & 0.8065 \\ UNet++ & 0.9070 & 0.7768 & 0.8033 \\ Trans-UNet & 0.9158 & 0.8244 & 0.6439 \\ Swin-UNet & 0.8568 & 0.7914 & 0.4974 \\ UNext & 0.9103 & 0.8241 & 0.8122 \\ \hline ViG-UNet & **0.9206** & **0.8292** & **0.8188** \\ \hline \end{tabular} \end{table} Table 3: Comparison Experimental Results on ISIC 2016, ISIC 2017 and Kvasir SEG (using the \(Dice\) metric)

\begin{table} \begin{tabular}{c c} \hline Methods & Parameters \\ \hline UNet & 7.8M \\ Attention-UNet & 8.7M \\ UNet++ & 9.2M \\ Trans-UNet & 92.3M \\ Swin-UNet & 27.3M \\ UNext & 1.5M \\ \hline ViG-UNet & 0.7G \\ \hline \end{tabular} \end{table} Table 4: Comparison of Parameters of Different Models

Figure 3: The example segmentation experimental results of different methods on ISIC 2016, ISIC 2017 and Kvasir-SEG datasets.

## 4 Conclusion

In this work, we propose a ViG-UNet for 2D medical image segmentation, which has a GNN-based U-shaped architecture consisting of the encoder, the bottleneck, and the decoder with skip connections. Experiments are done on the ISIC 2016, ISIC 2017 and Kvasir-SEG datasets, whose results show that our method is effective.

## 5 Acknowledgements

This work was supported by a Grant from The National Natural Science Foundation of China (No. U21A20484).
2301.11411
Anisotropic Electron Heating in an Electron Cyclotron Resonance Thruster with Magnetic Nozzle
In a grid-less Electron Cyclotron Resonance (ECR) plasma thruster with a diverging magnetic nozzle, the magnitude of the ambipolar field accelerating the positive ions depends on the perpendicular energy gained by the electrons. This work investigates the heating of the electrons by electromagnetic waves, taking their bouncing motion into account in a confining well formed by the magnetic mirror force and the electrostatic potential of the thruster. An electromagnetic Particle-In-Cell (PIC) code is used to simulate the plasma in a magnetic field tube. The code's Maxwell solver is based on a semi-Lagrangian scheme known as the Constrained Interpolation Profile (CIP), which enables larger time steps. The results show that anisotropic plasma heating takes place exclusively inside the coaxial chamber, along a Doppler-broadened zone. It is also shown that a trapped population of electrons with a larger perpendicular energy exists in the plume.
Jean Porto, Paul-Quentin Elias, Andrea Ciardi
2023-01-26T20:53:47Z
http://arxiv.org/abs/2301.11411v1
# Anisotropic Electron Heating in an Electron Cyclotron Resonance Thruster with Magnetic Nozzle ###### Abstract In a grid-less Electron Cyclotron Resonance (ECR) plasma thruster with a diverging magnetic nozzle, the magnitude of the ambipolar field accelerating the positive ions depends on the perpendicular energy gained by the electrons. This work investigates the heating of the electrons by electromagnetic waves, taking their bouncing motion into account in a confining well formed by the magnetic mirror force and the electrostatic potential of the thruster. An electromagnetic Particle-In-Cell (PIC) code is used to simulate the plasma in a magnetic field tube. The code's Maxwell solver is based on a semi-Lagrangian scheme known as the Constrained Interpolation Profile (CIP), which enables larger time steps. The results show that anisotropic plasma heating takes place exclusively inside the coaxial chamber, along a Doppler-broadened zone. It is also shown that a trapped population of electrons with a larger perpendicular energy exists in the plume. ## I Introduction Electric thrusters play a fundamental role in the field of space propulsion. Their main advantage lies in an efficient use of the propellant mass, and therefore a reduced consumption of propellant. Hall Effect Thrusters and Gridded Ion Engines are examples of the most well-known and flight-proven technologies in the current propulsion market. Both technologies eject an ion beam which is subsequently neutralized to prevent the spacecraft from charging. Several components of these technologies, such as the acceleration grid or the neutralizer, are subject to erosion and wear, and for this reason, meeting the challenging lifetime targets requires careful optimization and demanding testing [1]. The complexity of some of the components has driven the investigation of alternative propulsion concepts that require neither a grid nor a neutralizer. The Electron Cyclotron Resonance (ECR) plasma thruster [2; 3] is one of these concepts and it is the subject of the present study. The ECR plasma thruster consists of a semi-open chamber where a quasi-neutral plasma is heated by electron cyclotron resonant microwaves at 2.45 GHz, and accelerated by a magnetic nozzle. This concept was first proposed in the 1960s in the works of Miller _et al._ [4] and Nagatomo [5], then further developed by Sercel [6]. These studies used a prototype with a wave-guide structure to couple the microwaves to the plasma. Their results showed that it was possible to achieve specific impulses and thrust values high enough to be of interest for space propulsion applications [6]. Nonetheless, the inefficiency, size and weight of the microwave sources and electromagnets at that time led to a stagnation of the research on ECR thrusters for several years. Interest in this technology arose again recently with experimental works [7; 8]. In particular, the use of coaxial microwave coupling structures and compact rare-earth permanent magnets were instrumental in designing compact sources (a schematic of the design is shown in Fig. 1a). More experimental and theoretical efforts have since been made in order to get a deeper understanding of the physical phenomena governing the plasma heating and acceleration in the thruster. Experimental characterizations of the plasma properties have been carried out using different measurement techniques such as Langmuir and Faraday probes, Laser Induced Fluorescence diagnostics, diamagnetic loops and thrust balances [9; 10; 11; 12].
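As a quick numerical aside (our own illustration, not from the paper): the magnetic field strength at which 2.45 GHz microwaves resonate with the electron gyromotion follows directly from the electron cyclotron frequency \(f_{ce}=eB/2\pi m_{e}\):

```python
import math

E_CHARGE = 1.602176634e-19     # elementary charge, C
M_ELECTRON = 9.1093837015e-31  # electron mass, kg

def ecr_field(frequency_hz: float) -> float:
    """Magnetic field satisfying the ECR condition 2*pi*f = e*B/m_e."""
    return 2.0 * math.pi * frequency_hz * M_ELECTRON / E_CHARGE

print(f"B_ECR = {ecr_field(2.45e9) * 1e3:.1f} mT")  # ~87.5 mT
```

The ECR surface discussed below is the locus where the local field of the permanent magnets reaches this value.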
Unfortunately, most of the experimental studies so far have been limited to surveying the plasma outside the thruster's coaxial chamber. Recently, a resonant probe was developed to measure an electron density of about \(1\times 10^{11}\,cm^{-3}\) at the source exit plane, close to the coaxial chamber [13]. Inside the source, the plasma density is likely higher (\(\sim 1\times 10^{12}\,cm^{-3}\)), with electron temperatures of a few tens of \(eV\). From a theoretical point of view, as a first step, global models describing the energy balance in the plasma source were proposed as a means to obtain the key parameters of the thruster [14; 15]. While this approach yielded good agreement with the measured electron temperature at high mass flow rate or high pressure, it failed at the lower mass flow rates where the thruster achieves its best performance. Indeed, the assumptions of a uniform electron temperature and an isotropic Maxwellian electron distribution are too crude when collisionality decreases and the electron mean free path becomes much larger than the source length: in that range, non-local effects become prevalent, as electrons undergo a bouncing motion along the magnetic field lines. Those electrons which cross the ECR surface can gain energy depending on their phase in the gyromotion [16], which leads to a strong anisotropy of the distribution function. An attempt to account for this stochastic heating in the plasma was made by considering the electron heating as a random walk in phase space [17].
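To make the bouncing motion concrete, the following sketch (our own toy model, not the paper's PIC code) integrates the guiding-centre parallel dynamics \(m_{e}\,dv_{\parallel}/dt=-\mu\,\partial B/\partial z\) with the magnetic moment \(\mu=m_{e}v_{\perp}^{2}/2B\) held constant, in a made-up parabolic mirror field \(B(z)=B_{0}(1+(z/L)^{2})\):

```python
M_E = 9.109e-31      # electron mass, kg
B0, L = 0.02, 0.05   # illustrative field strength (T) and mirror length scale (m)

def dBdz(z: float) -> float:
    """Axial gradient of B(z) = B0 * (1 + (z/L)**2)."""
    return 2.0 * B0 * z / L**2

# Electron starting at the midplane with equal parallel and perpendicular speeds.
v_par, v_perp0, z = 1.0e6, 1.0e6, 0.0
mu = M_E * v_perp0**2 / (2.0 * B0)  # conserved adiabatic invariant at z = 0

dt, zs = 1e-10, []
for _ in range(20_000):             # semi-implicit Euler: update v, then z
    v_par -= (mu / M_E) * dBdz(z) * dt
    z += v_par * dt
    zs.append(z)

print(f"bounce amplitude ~ {max(abs(min(zs)), max(zs)) * 100:.1f} cm")  # ~5 cm
```

With these illustrative parameters the electron turns around where the field has doubled, about 5 cm from the midplane; in the thruster the same mirror force, together with the electrostatic potential, forms the confining well in which electrons repeatedly cross the ECR zone.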
2303.09427
Logical Implications for Visual Question Answering Consistency
Despite considerable recent progress in Visual Question Answering (VQA) models, inconsistent or contradictory answers continue to cast doubt on their true reasoning capabilities. However, most proposed methods use indirect strategies or strong assumptions on pairs of questions and answers to enforce model consistency. Instead, we propose a novel strategy intended to improve model performance by directly reducing logical inconsistencies. To do this, we introduce a new consistency loss term that can be used by a wide range of VQA models and which relies on knowing the logical relation between pairs of questions and answers. While such information is typically not available in VQA datasets, we propose to infer these logical relations using a dedicated language model and use these in our proposed consistency loss function. We conduct extensive experiments on the VQA Introspect and DME datasets and show that our method brings improvements to state-of-the-art VQA models, while being robust across different architectures and settings.
Sergio Tascon-Morales, Pablo Márquez-Neila, Raphael Sznitman
2023-03-16T16:00:18Z
http://arxiv.org/abs/2303.09427v1
# Logical Implications for Visual Question Answering Consistency ###### Abstract Despite considerable recent progress in Visual Question Answering (VQA) models, inconsistent or contradictory answers continue to cast doubt on their true reasoning capabilities. However, most proposed methods use indirect strategies or strong assumptions on pairs of questions and answers to enforce model consistency. Instead, we propose a novel strategy intended to improve model performance by directly reducing logical inconsistencies. To do this, we introduce a new consistency loss term that can be used by a wide range of VQA models and which relies on knowing the logical relation between pairs of questions and answers. While such information is typically not available in VQA datasets, we propose to infer these logical relations using a dedicated language model and use these in our proposed consistency loss function. We conduct extensive experiments on the VQA Introspect and DME datasets and show that our method brings improvements to state-of-the-art VQA models, while being robust across different architectures and settings. ## 1 Introduction Visual Question Answering (VQA) models have drawn recent interest in the computer vision community as they allow text queries to question image content. This has given way to a number of novel applications in the space of model reasoning [54, 29, 56, 8], medical diagnosis [51, 60, 21, 37] and counterfactual learning [1, 2, 11]. With the ability to combine language and image information in a common model, it is unsurprising to see a growing use of VQA methods. Despite this recent progress, however, a number of important challenges remain when making VQAs more proficient. For one, it remains extremely challenging to build VQA datasets that are void of bias. Yet this is critical to ensure subsequent models are not learning spurious correlations or shortcuts [49]. This is particularly daunting in applications where domain knowledge plays an important role (_e.g_., medicine [15, 27, 33]). Alternatively, ensuring that the responses of a VQA are coherent, or _consistent_, is paramount as well. That is, VQA models that answer differently about similar content in a given image imply inconsistencies in how the model interprets the inputs. A number of recent methods have attempted to address this using logic-based approaches [19], rephrasing [44], question generation [40, 18, 41] and regularizing using consistency constraints [47]. In this work, we follow this line of research and look to yield more reliable VQA models. Specifically, we wish to ensure that VQA models are consistent in their ability to answer questions about images. This implies that if one poses multiple questions about the same image, then the model's answers should not contradict themselves. For instance, if one question about the image in Fig. 1 asks "Is there snow on the ground?", then the answer inferred should be consistent with that of the question "Is it the middle of summer?" As noted in [43], such question pairs involve reasoning and perception, and consequently led the authors to define inconsistency as the case where the reasoning and perception questions are answered correctly and incorrectly, respectively. Figure 1: Top: Conventional VQA models tend to produce inconsistent answers as a consequence of not considering the relations between question and answer pairs. Bottom: Our method learns the logical relation between question and answer pairs to improve consistency.
Along this line, [47] use a similar definition of inconsistency to regularize a VQA model meant to answer medical diagnosis questions that are hierarchical in nature. What is critical in both cases, however, is that the consistency of the VQA model depends explicitly on its answers, as well as the question and true answer. This hinges on the assumption that perception questions are sufficient to answer reasoning questions. Yet, for any question pair, this may not be the case. As such, the current definition of consistency (or inconsistency) has been highly limited and does not truly reflect how VQAs should behave. To address the need for VQA models that are self-consistent, we propose a novel training strategy that relies on logical relations. To do so, we re-frame question-answer (QA) pairs as propositions and consider the relational construct between pairs of propositions. This construct allows us to properly categorise pairs of propositions in terms of their logical relations. From this, we introduce a novel loss function that explicitly leverages the logical relations between pairs of questions and answers in order to enforce that VQA models be self-consistent. Unfortunately, however, datasets typically do not contain relational information about QA pairs, and collecting it would be extremely laborious and difficult to achieve. To overcome this, we propose to train a dedicated language model capable of inferring logical relations between propositions. By doing so, we show in our experiments that not only are we able to effectively infer logical relations from propositions, but also that these can be explicitly used in our loss function to train VQA models that improve on state-of-the-art methods via consistency. We show this over two different VQA datasets, against different consistency methods and with different VQA model architectures. Our code and data are available at [https://github.com/sergiotasconmorales/imp_vqa](https://github.com/sergiotasconmorales/imp_vqa). ## 2 Related work Since its initial presentation in Antol _et al_. [4], VQA has advanced considerably. Initial developments focused on multimodal fusion modules, which combine visual and text embeddings [8, 36]. From basic concatenation and summation [4] to more complex fusion mechanisms that benefit from projecting the embeddings to different spaces, numerous approaches have been proposed [6, 32, 16]. The addition of attention mechanisms [8, 31, 36] and subsequently transformer architectures [50] has also contributed to the creation of transformer-based vision-language models, such as LXMERT, which have shown state-of-the-art performance [46]. More recently, methods have been proposed to improve other aspects of VQA, including avoiding shortcut learning and biases [25, 12], improving 3D spatial reasoning [5], Out-Of-Distribution (OOD) generalization [9, 49], improving transformer-based vision-language models [61, 57], external knowledge integration [14, 17] and model evaluation with visual and/or textual perturbations [52, 22]. With the awareness of bias in VQA training data, some works have also addressed building better datasets (_e.g_., v2.0 [20], VQA-CP [3], CLEVR [30] and GCP [28]). Furthermore, these developments have now given rise to VQA methods in specific domains. For instance, the VizWiz challenge [23, 10, 24] aims at creating VQA models that can help visually impaired persons with routine daily tasks, while there is a growing number of medical VQA works with direct applications in medicine [21, 37, 51, 60].
Consistency in VQA: Consistency in VQA can be defined as the ability of a model to produce answers that are not contradictory. That is, given a pair of questions about an image, the answers predicted by a VQA model should not be contrary (_e.g_., answering "Yes" to "Is it the middle of summer?" and "Winter" to "What season is it?"). Due to its significance in reasoning, consistency in VQA has become a focus of study in recent years [19, 41, 43, 44, 29]. Some of the first approaches to consistency enhancement focused on creating re-phrasings of questions, either by dataset design or at training time [44]. Along this line, entailed questions were proposed [19, 41], such that a question generation module was integrated into a VQA model [18, 40], used as a benchmarking method to evaluate consistency [59] or as a rule-based data-augmentation technique [41]. Other approaches tried to shape the embedding space by imposing constraints on the learned representations [48] and by imposing similarities between the attention maps of pairs of questions [43]. Another work [47] assumed entailment relations between pairs of questions to regularize training. A more recent approach attempts to improve consistency by using graph neural networks to simulate a dialog in the learning process [29]. While these approaches show benefits in some cases, they typically only consider that a subset of logical relationships exists between pairs of question-answers or assume that a single relation holds for all QA pairs. Though true in the case of re-phrasings, other question generation approaches cannot guarantee that the produced questions preserve unique relations or that the grammatical structure remains valid. Consequently, these methods often rely on metrics that either over- or underestimate consistency by relying on these assumptions. In the present work, we propose a strategy to alleviate these limitations by considering all logical relations between pairs of questions and answers. Entailment prediction: Natural Language Inference (NLI), or Recognizing Textual Entailment (RTE), is the task of predicting how two input sentences (namely _premise_ and _hypothesis_) are related, according to three pre-established categories: entailment, contradiction and neutrality [35]. For example, if the premise is "A soccer game with multiple males playing" and the hypothesis is "Some men are playing a sport," then the predicted relation should be an entailment, because the hypothesis logically follows from the premise. Several benchmarking datasets (_e.g._, SNLI [58], MultiNLI [55], SuperGLUE [53], WIKIFACTCHECK [42] and ANLI [38]) have contributed to the adaptation of general-purpose transformer-based models like BERT [13], RoBERTa [34] and DeBERTa [26] for this task. In this work, we leverage these recent developments to build a model capable of inferring relations between propositions. ## 3 Method Given an image \(\mathbf{x}\in\mathcal{I}\), a question \(\mathbf{q}\in\mathcal{Q}\) about the image and a set \(\mathcal{A}=\{a_{1},\ldots,a_{K}\}\) of possible answers to choose from, a VQA model is expected to infer the answer \(\hat{a}\in\mathcal{A}\) that matches the true answer \(a^{*}\). This can be formulated as, \[\hat{a}=\operatorname*{arg\,max}_{a\in\mathcal{A}}p(a|\mathbf{x},\mathbf{q};\theta), \tag{1}\] where \(\theta\) represents the parameters of the VQA model.
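As a minimal sketch of this inference rule, assuming a VQA network that outputs one logit per candidate answer (the function and variable names below are ours, for illustration only):

```python
import torch

def predict_answer(model, image, question, answers):
    """Eq. (1): return the answer of maximum probability under p(a | x, q; theta)."""
    logits = model(image, question)        # one score per candidate in `answers`
    probs = torch.softmax(logits, dim=-1)  # p(a | x, q; theta)
    return answers[int(probs.argmax())]
```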
In this context, we observe that two QA pairs \((\mathbf{q}_{i},a_{i})\) and \((\mathbf{q}_{j},a_{j})\) for the same image \(\mathbf{x}\) can have different kinds of logical relations. In the simplest case, the two pairs may be unrelated, as with the pairs ("Is it nighttime?", "Yes") and ("Is there a bench in the image?", "No"). Knowing that one of the pairs is true gives no information about the truth value of the other. On the other hand, two pairs may be related by a logical implication, as in the pairs ("Is the horse brown?", "No") and ("What is the color of the horse?", "White"). Knowing that the second pair is true implies that the first pair must be true as well. Conversely, if the first pair is false (_the horse is brown_), it implies that the second pair must also be false. In this case, the first pair is a necessary condition for the second one or, equivalently, the second pair is a sufficient condition for the first one. Finally, it can be that two QA pairs are related by a double logical implication, as with the pairs ("Is this a vegetarian pizza?", "Yes") and ("Does the pizza have meat on it?", "No"). The veracity of the former implies the veracity of the latter, but the veracity of the latter also implies the veracity of the former. In this case, each pair is simultaneously a necessary and sufficient condition for the other pair, and both pairs are then equivalent. Note that the logical implication existing between two QA pairs is an intrinsic property of the QA pairs, and does not depend on the correctness of the predictions coming from a VQA model. If a VQA model considers a sufficient condition true and a necessary condition false, it is incurring an _inconsistency_ regardless of the correctness of its predictions. Since logical implications are the basis of reasoning, we propose to explicitly use them when training a VQA model to reduce its inconsistent predictions. Unfortunately, doing so requires overcoming two important challenges: (1) a strategy is needed to train VQA models with logical relations that leverage consistency in a purposeful manner; until now, no such approach has been proposed; (2) VQA datasets do not typically contain logical relations between pairs of QAs. Acquiring these manually would, however, be both time-consuming and difficult. We address these challenges in this work by formalizing the idea of consistency and treating QA pairs as logical propositions from which relations can comprehensively be defined. Using this formalism, we first propose a strategy to solve (1) and train a VQA model more effectively using logical relations and the consistency they provide (Sec. 3.2). We then show in Sec. 3.3 how we infer relations between pairs of propositions, whereby standard VQA datasets can be augmented with logical relations. ### Consistency formulation We begin by observing that QA pairs \((\mathbf{q},a)\) can be considered and treated as logical propositions. For instance, the QA ("Is it winter?", "Yes") can be converted to "It is winter," which is a logical proposition that can be evaluated as _true_ or _false_ (_i.e._, its _truth value_). Doing so allows us to use a broad definition of consistency, namely one that establishes that two propositions are inconsistent if both cannot be true at the same time [7]. In the context of this work, we assume the truth value of a proposition \((\mathbf{q},a)\) is determined by an agent (either a human annotator or the VQA model) after observing the information contained in an image \(\mathbf{x}\).
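These three situations (unrelated, implication, double implication) and the resulting notion of inconsistency can be captured in a few lines; the following Python sketch is our own illustration, in which truth values are booleans assigned by the agent:

```python
from enum import Enum

class Relation(Enum):
    SUFFICIENT = "->"   # first proposition is sufficient for the second
    NECESSARY = "<-"    # first proposition is necessary for the second
    EQUIVALENT = "<->"  # double implication: each is necessary and sufficient
    UNRELATED = "-"

def inconsistent(truth_i: bool, truth_j: bool, rel: Relation) -> bool:
    """Flag the case where a sufficient condition is evaluated true
    while its necessary condition is evaluated false."""
    if rel is Relation.SUFFICIENT:
        return truth_i and not truth_j
    if rel is Relation.NECESSARY:
        return truth_j and not truth_i
    if rel is Relation.EQUIVALENT:
        return truth_i != truth_j
    return False  # unrelated propositions can never be inconsistent

# "The horse is white" (true) is sufficient for "The horse is not brown";
# evaluating the latter as false is therefore an inconsistency:
print(inconsistent(True, False, Relation.SUFFICIENT))  # True
```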
Let \(\mathcal{D}=\mathcal{I}\times\mathcal{Q}\times\mathcal{A}\) be a VQA dataset that contains triplets \((\mathbf{x}^{(n)},\mathbf{q}_{i}^{(n)},a_{i}^{(n)})\), where \(\mathbf{x}^{(n)}\) is the \(n\)-th image and \((\mathbf{q}_{i}^{(n)},a_{i}^{(n)})\) is the \(i\)-th question-answer pair about \(\mathbf{x}^{(n)}\). In the following, we omit the index \(n\) for succinctness. For a given image \(\mathbf{x}\), we can consider a pair of related question-answers \((\mathbf{q}_{i},a_{i})\) and \((\mathbf{q}_{j},a_{j})\) as a pair of propositions. Following propositional logic notation, if both propositions are related in such a way that \((\mathbf{q}_{i},a_{i})\) is a sufficient condition for the necessary condition \((\mathbf{q}_{j},a_{j})\), we write that \((\mathbf{q}_{i},a_{i})\rightarrow(\mathbf{q}_{j},a_{j})\). For convenience, this arrow notation can be adapted to indicate different orderings between the necessary and sufficient conditions: * \((\mathbf{q}_{i},a_{i})\leftarrow(\mathbf{q}_{j},a_{j})\) if the proposition \((\mathbf{q}_{i},a_{i})\) is a necessary condition for \((\mathbf{q}_{j},a_{j})\). * \((\mathbf{q}_{i},a_{i})\leftrightarrow(\mathbf{q}_{j},a_{j})\) if the propositions \((\mathbf{q}_{i},a_{i})\) and \((\mathbf{q}_{j},a_{j})\) are equivalent, _i.e._, both are simultaneously necessary and sufficient. Note that this is just notational convenience for the double implication \((\mathbf{q}_{i},a_{i})\rightarrow(\mathbf{q}_{j},a_{j})\wedge(\mathbf{q}_{j},a_{j})\rightarrow(\mathbf{q}_{i},a_{i})\), and in the following derivations the double arrow will always be considered as two independent arrows. * Finally, we will write \((\mathbf{q}_{i},a_{i})-(\mathbf{q}_{j},a_{j})\) if the propositions \((\mathbf{q}_{i},a_{i})\) and \((\mathbf{q}_{j},a_{j})\) are not related. If a VQA model is asked questions \(\mathbf{q}_{i}\) and \(\mathbf{q}_{j}\) about an image \(\mathbf{x}\) and there exists a relation \((\mathbf{q}_{i},a_{i})\rightarrow(\mathbf{q}_{j},a_{j})\), the answers of the model will be inconsistent whenever it provides answers \(\hat{a}_{i}=a_{i}\) and \(\hat{a}_{j}\neq a_{j}\) (_i.e._, the model evaluates the first proposition as true and the second proposition as false). More generally, for a pair of necessary and sufficient conditions, the agent will be inconsistent if it evaluates the necessary condition as false and the sufficient condition as true [7]. In what follows, we exploit these ideas to quantify model inconsistencies in our experiments and to develop a new loss function that encourages logically consistent VQA models. ### Logical implication consistency loss The core aim of our method is to encourage the VQA model to avoid inconsistent answers. When training, assume that the model receives an image \(\mathbf{x}\) from \(\mathcal{D}\) and two associated propositions \((\mathbf{q}_{1},a_{1})\) and \((\mathbf{q}_{2},a_{2})\) that are related by a logical implication \((\mathbf{q}_{1},a_{1})\rightarrow(\mathbf{q}_{2},a_{2})\). We define, \[\pi_{i}=\pi\left((\mathbf{q}_{i},a_{i}),\mathbf{x}\right)=p(a_{i}\mid\mathbf{x},\mathbf{q}_{i},\theta), \tag{2}\] as the probability assigned by the VQA model that the proposition \((\mathbf{q}_{i},a_{i})\) is true for the image \(\mathbf{x}\). The model has a high probability of incurring an inconsistency if it simultaneously gives a high probability \(\pi_{1}\) to the sufficient condition and a low probability \(\pi_{2}\) to the necessary condition.
We thus define our consistency loss as a function, \[\begin{split}\mathcal{L}_{\text{cons}}(\mathbf{x},(\mathbf{q}_{1},a_{1}),(\mathbf{q}_{2},a_{2}))=-(1-\pi_{2})\log(1-\pi_{1})\\ -\pi_{1}\log(\pi_{2}),\end{split} \tag{3}\] that takes an image and a pair of sufficient and necessary propositions, and penalizes predictions with a high probability of inconsistency. As illustrated in Fig. 2, \(\mathcal{L}_{\text{cons}}\) is designed to produce maximum penalties when \(\pi_{1}=1\) and \(\pi_{2}<1\) (_i.e._, when the sufficient condition is absolutely certain but the necessary condition is not), and when \(\pi_{2}=0\) and \(\pi_{1}>0\) (_i.e._, when the necessary condition can never be true but the sufficient condition can be true). At the same time, \(\mathcal{L}_{\text{cons}}\) produces minimum penalties when either \(\pi_{1}=0\) or \(\pi_{2}=1\), as no inconsistency is possible when the sufficient condition is false or when the necessary condition is true. Interestingly, despite its resemblance, \(\mathcal{L}_{\text{cons}}\) is not a cross-entropy, as it is not an expectation over a probability distribution. Figure 2: Consistency loss \(\mathcal{L}_{\text{cons}}\) as a function of the estimated probabilities for the sufficient, \(\pi_{1}\), and necessary, \(\pi_{2}\), conditions. Note that the loss diverges to \(\infty\) when \(\pi_{1}=1,\pi_{2}<1\) and when \(\pi_{1}>0,\pi_{2}=0\). Our final loss is then a linear combination of the consistency loss and the cross-entropy loss \(\mathcal{L}_{\text{VQA}}\) typically used to train VQA models. Training with this loss then optimizes, \[\min_{\theta}\mathbb{E}_{\mathcal{D}}[\mathcal{L}_{\text{VQA}}]+\lambda\mathbb{E}_{((\mathbf{x}_{i},\mathbf{q}_{i},a_{i}),(\mathbf{x}_{j},\mathbf{q}_{j},a_{j}))\sim\mathcal{D}^{2}}[\mathcal{L}_{\text{cons}}], \tag{4}\] where the first expectation is taken over the elements of the training set \(\mathcal{D}\) and the second expectation is taken over all pairs of necessary and sufficient propositions from \(\mathcal{D}\) defined for the same image. In practice, we follow the sampling procedure described in [43, 47], where mini-batches contain pairs of related questions. The hyperparameter \(\lambda\) controls the relative strength between the VQA loss and the consistency term. ### Inferring logical implications By and large, VQA datasets do not include annotations with logical relations between question-answer pairs, which makes training a VQA model with \(\mathcal{L}_{\text{cons}}\) infeasible. To overcome this, we propose to train a language model to predict logical implications directly and use these predictions instead. We achieve this in two phases, illustrated in Fig. 3, and refer to our approach as the Logical-Implication model (LI-MOD). First, we pre-train BERT [13] on the task of Natural Language Inference, using the SNLI dataset [58], which consists of pairs of sentences with annotations of entailment, contradiction or neutrality. In this task, given two sentences, a language model must predict one of the mentioned categories. While these categories do not exactly match the logical implication relevant to our objective, it can be derived from the entailment category. To this end, given two propositions \((\mathbf{q}_{i},a_{i})\) and \((\mathbf{q}_{j},a_{j})\), we evaluate them using the finetuned NLI model in the order \((\mathbf{q}_{i},a_{i}),(\mathbf{q}_{j},a_{j})\), and then repeat the evaluation by inverting the order, to evaluate possible equivalences or inverted relations. If the relation is predicted as neutral in both passes, the pair is considered to be unrelated.
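For reference, the consistency term of Eq. (3) is a direct transcription into code; in this PyTorch sketch (ours), \(\pi_{1}\) and \(\pi_{2}\) are the probabilities of the sufficient and necessary propositions, and the small epsilon for numerical stability is our addition:

```python
import torch

def consistency_loss(pi1: torch.Tensor, pi2: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Eq. (3): -(1 - pi2) * log(1 - pi1) - pi1 * log(pi2).
    Large when the sufficient condition is near-certain (pi1 -> 1) while the
    necessary one is not, or when the necessary condition is near-impossible
    (pi2 -> 0) while the sufficient one has mass; near zero otherwise."""
    return -(1.0 - pi2) * torch.log(1.0 - pi1 + eps) - pi1 * torch.log(pi2 + eps)

# Sanity checks matching the behaviour described around Fig. 2:
print(consistency_loss(torch.tensor(0.99), torch.tensor(0.01)))  # large penalty
print(consistency_loss(torch.tensor(0.01), torch.tensor(0.99)))  # ~ 0
```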
Then, we finetune the NLI model on a subset of annotated pairs from the VQA dataset Introspect [43]. In practice, we use a subset of binary QA pairs that were manually annotated with logical implications. Even though the relations need not be limited to binary questions (_i.e_., yes/no questions), we chose to do so because the relation annotation is simpler than for open-ended questions. Since BERT expects sentences and not QA pairs, these were first converted into propositions using Parts-Of-Speech (POS) tagging [39] and simple rules that apply to binary questions (_e.g_., to convert "Is it winter?," "Yes" we invert the first two words of the question and remove the question mark). After finetuning the model, the relations were predicted for the remaining part of the dataset. Further implementation details on this are given in Sec. 4.3. ## 4 Experiments We evaluate our proposed consistency loss function on different datasets, using a variety of VQA models, and compare our performance to state-of-the-art methods. ### Datasets Introspect [43]: Contains perception questions (or sub-questions) created by annotators for a subset of the reasoning questions (or main questions) of the VQA v1.0 and VQA v2.0 datasets [4, 20]. It contains 27,441 reasoning questions with 79,905 sub-questions in its training set and 15,448 reasoning questions with 52,573 sub-questions for validation. For images that have the same sub-question repeated multiple times, we remove the duplicated sub-questions for every image in both the train and validation sets. DME Dataset [47]: This dataset consists of retinal fundus images for the task of Diabetic Macular Edema (DME) staging. It contains 9,779 QA pairs for training, 2,380 QA pairs for validation and 1,311 QA pairs for testing. There are three types of questions in the dataset: main, sub, and independent questions. Main questions ask about the diagnosis (_i.e_., the stage of the disease) and sub-questions ask about the presence and location of biomarkers. Sub-questions are further sub-divided into grade questions, questions about the whole image, questions about a region of the eye called the macula, and questions about random regions in the image. To deal with questions about image regions, we follow the procedure described in [47], whereby only the relevant region is shown to the model. ### Baseline methods and base models We consider 3 different consistency enhancement baseline methods. To ensure fair comparisons, all methods use the same VQA base models and only differ in the consistency method used. These consist of: * None: Indicating that no consistency-preserving method is used with the VQA model. This corresponds to the case where \(\lambda=0\). * SQuINT [43]: Optimizes consistency by maximizing the similarity between the attention maps of pairs of questions. As such, it requires a VQA model that uses guided attention. * CP-VQA [47]: Assumes entailment relations and uses a regularizer to improve consistency. VQA architectures: We show experiments using three VQA models depending on the dataset used. For experiments on Introspect, we make use of the BAN model [31], as its structure with guided attention allows the use of SQuINT. In addition, we evaluate the vision-language architecture LXMERT [46] on this dataset to see how our approach can help improve state-of-the-art, transformer-based VQA models too.
For experiments on the DME dataset, we use the base model described in [47], which we denote by MVQA. ### Implementation details LI-Model: We first pre-train BERT on SNLI for 5 epochs, until it reaches a maximum accuracy of 84.32% on that dataset. For this pre-training stage, we initialize BERT with the _bert-base-uncased_ weights and use a batch size of 16. We use a weight decay rate of 0.01 and the AdamW optimizer with learning rate \(2\cdot 10^{-5}\) without bias correction. The same setup was kept to finetune the model on a subset of 2'000 pairs of propositions from Introspect which were manually annotated (the distribution of labels being: \(\leftarrow 60\%,\leftrightarrow 17\%,-12\%,\to 11\%\)), and an additional 500 pairs were annotated for validation. Notice that LI-MOD is only necessary for the Introspect dataset, since for the DME dataset the implication annotations are available. Figure 3: LI-MOD: Approach to predict logical relations between pairs of propositions. A BERT-based NLP model is first pre-trained on the SNLI dataset [58] to solve a Natural Language Inference task and subsequently fine-tuned with annotated pairs from a subset of the Introspect dataset [43]. The resulting model is used to predict the relations of the remaining part of the dataset. VQA models: For our base models, we use the official and publicly available implementations (BAN [45], LXMERT [46] and MVQA [47]) with default configurations. We re-implemented SQuINT [43] and used the provided implementation of CP-VQA [47], reporting the best results, which were obtained with \(\lambda=0.1,\gamma=0.5\) for BAN and \(\lambda=0.5,\gamma=1\) for MVQA. These parameters refer to the corresponding symbols of the original implementations. For SQuINT, we set the gain of the attention map similarity term to 0.5 for BAN and to 1.0 for MVQA. For Introspect, we train 5 models with different seeds for each parameter set, and for DME we train 10 models with different seeds. To train LXMERT, BAN and MVQA, we use batch sizes of 32, 64 and 128, respectively. Regarding the VQA cross-entropy loss, we follow the original implementations and use soft scores for the answers in LXMERT, and categorical answers for BAN and MVQA. ### Quantifying consistency Given a test set \(\mathcal{T}=\{t_{n}\}_{n=1}^{|\mathcal{T}|}\), where \(t_{n}=(\mathbf{x},\mathbf{q},a)\) is a test sample triplet, we wish to measure the level of consistency of a VQA model. To this end, we define \(G(\mathcal{T})\subset\mathcal{T}^{2}\) as the set of all pairs of test samples \(((\mathbf{x}_{i},\mathbf{q}_{i},a_{i}),(\mathbf{x}_{j},\mathbf{q}_{j},a_{j}))\) for which \((\mathbf{q}_{i},a_{i})\rightarrow(\mathbf{q}_{j},a_{j})\) and \(\mathbf{x}_{i}=\mathbf{x}_{j}\). We count the inconsistencies produced by a VQA model \(p\) evaluated on \(\mathcal{T}\) as the number of times the model evaluates a sufficient condition as true and a necessary condition as false, \[I_{p}(\mathcal{T})=\sum_{(t_{i},t_{j})\in G(\mathcal{T})}\mathbb{1}[e_{p}((\mathbf{q}_{i},a_{i}),\mathbf{x})\wedge\neg e_{p}((\mathbf{q}_{j},a_{j}),\mathbf{x})]. \tag{5}\] The function \(e_{p}\) returns the truth value of the proposition \((\mathbf{q},a)\) for image \(\mathbf{x}\) evaluated by the VQA model \(p\), \[e_{p}((\mathbf{q},a),\mathbf{x})=\mathbb{1}[\hat{a}=a], \tag{6}\] where \(\hat{a}\) is the answer of maximum probability following Eq. (1). In other words, \(e_{p}\) returns whether the estimated answer for question \(\mathbf{q}\) matches the answer of the proposition \(a\).
Finally, the consistency ratio \(c\) for model \(p\) on the test set \(\mathcal{T}\) is the proportion of implications in \(G(\mathcal{T})\) that did not lead to an inconsistency, \[c_{p}(\mathcal{T})=1-\frac{I_{p}(\mathcal{T})}{|G(\mathcal{T})|}. \tag{7}\] ## 5 Results Performance comparison: For both datasets, we first compare the performance of our method against the baseline consistency methods in Tab. 1 and Tab. 2. In either case, we see that our method outperforms previous approaches by not only increasing overall prediction accuracy but also by increasing consistency. In Fig. 4 and Fig. 5, we show illustrative examples of our approach on the Introspect and DME datasets, respectively (see additional examples in the Supplementary materials). In Tab. 1 we also show the performance of the state-of-the-art LXMERT VQA model when combined with our proposed consistency method. In this case too, we see that our method provides increased performance via consistency improvements. Flipping baselines: Here we investigate the performance obtained when flipping the answer of one member of each inconsistent pair at test time. If implication labels are present, either from manual annotation or from LI-MOD, a trivial manner of correcting an inconsistent pair of binary answers is to flip, or negate, one of the answers. This is far simpler than our proposed method, as it permits training the VQA model with the standard VQA loss. Having obtained the answers from the model when \(\lambda=0\), we identify the inconsistent pairs using the relations predicted by our LI-MOD and then flip the answer (1) of a random member of the pair, (2) of the first QA, or (3) of the second QA. By including the flipping baselines, we confirm that the added complexity in training our method results in improved accuracy compared to merely correcting inconsistencies post-hoc. To explain why the consistency can increase while the accuracy decreases, consider the following: an inconsistent QA pair guarantees that one of the two answers is incorrect, but correcting the inconsistency does not necessarily fix the incorrect answer. By flipping the correct answer, the inconsistency is corrected, thereby increasing the consistency but decreasing the accuracy. \begin{table} \begin{tabular}{l l c c} \hline \hline Model & Cons. Method & Acc. & Cons. \\ \hline \multirow{4}{*}{BAN} & None & 67.14\(\pm\)0.10 & 69.45\(\pm\)0.17 \\ & SQuINT [43] & 67.27\(\pm\)0.19 & 69.87\(\pm\)0.45 \\ & CP-VQA [47] & 67.18\(\pm\)0.24 & 69.52\(\pm\)0.45 \\ & **Ours** (\(\lambda=0.01\)) & **67.36\(\pm\)0.19** & 70.38\(\pm\)0.39 \\ \hline \multirow{4}{*}{LXMERT} & None & 75.10\(\pm\)0.10 & 76.24\(\pm\)0.63 \\ & Random flip & 69.67\(\pm\)1.24 & 75.99\(\pm\)3.91 \\ \cline{1-1} & Flip first & 73.81\(\pm\)0.47 & 71.94\(\pm\)2.82 \\ \cline{1-1} & Flip second & 65.82\(\pm\)1.03 & 87.56\(\pm\)2.51 \\ \cline{1-1} & **Ours** & **75.17\(\pm\)0.08** & 78.75\(\pm\)0.21 \\ \hline \hline \end{tabular} \end{table} Table 1: Results of different consistency methods on the Introspect dataset using two different VQA models: (top) BAN and (bottom) LXMERT. In the case of LXMERT, we show the impact of randomly flipping the answer of either the first or the second question for pairs detected as inconsistent using the relations from LI-MOD. Similarly, _flip first_ and _flip second_ refer to flipping the answer of the first and second question in inconsistent pairs, respectively.
This phenomenon is particularly noticeable in the flipping baselines, as they fix inconsistencies without considering their correctness. In general, we observe that training LXMERT with our consistency loss provides performance gains. Indeed, while random flipping based on LI-MOD clearly deteriorates the performance of LXMERT, so does flipping the first or second answers. This implies that our proposed method indeed leverages the predictions of LI-MOD to make LXMERT more consistent, as it improves both model accuracy and consistency. Sensitivity of \(\lambda\): We now show the sensitivity of our method and its relation to \(\lambda\). We evaluate the performance of our method for different values of \(\lambda\) to understand the behaviour of the performance, both in terms of accuracy and consistency. Fig. 6 shows the accuracy and consistency of LXMERT and MVQA for different values of \(\lambda\). The difference in the ranges of the values is due to the relative magnitude of the loss function terms and depends on the loss functions used (_e.g._, binary and non-binary cross-entropy) and the ground-truth answer format (_i.e._, soft scores for LXMERT, as mentioned in Sec. 4.3). In general, we observe a very similar behavior for accuracy, which increases and then slowly decreases as \(\lambda\) increases. We maintain that the maximum value the accuracy can reach is determined by the number of related pairs that are still inconsistent after training with \(\lambda=0\). In other words, the limited number of such pairs imposes a limit on how much our method can improve the accuracy. For LXMERT on Introspect, for instance, our model corrected 4'553 (78.9%) of the 5'771 existing inconsistencies and introduced new inconsistencies by mistakenly altering 1'562 (3.5%) of the 44'111 consistent samples. Regarding consistency, we observe a constant increase as \(\lambda\) increases. The simultaneous decrease in accuracy as \(\lambda\) increases suggests that the relative weight of the consistency loss dominates, so that the model no longer focuses on optimizing the cross-entropy. Since it is possible to be consistent without answering correctly, the optimization process results in an increase in consistency at the expense of accuracy for higher values of \(\lambda\). \begin{table} \begin{tabular}{l l c c c c c c} \hline \hline \multirow{2}{*}{Model} & \multirow{2}{*}{Consis. Method} & \multicolumn{5}{c}{Accuracy} & \multirow{2}{*}{Consistency} \\ \cline{3-3} \cline{5-8} & & all & grade & whole & macula & region & \\ \hline \multirow{4}{*}{MVQA} & None & 81.15\(\pm\)0.49 & 78.17\(\pm\)2.07 & 83.44\(\pm\)1.87 & 87.25\(\pm\)1.20 & 80.38\(\pm\)2.02 & 89.95\(\pm\)3.20 \\ & SQuINT [43] & 80.58\(\pm\)0.78 & 77.48\(\pm\)0.40 & 82.82\(\pm\)0.74 & 85.34\(\pm\)0.87 & 80.02 & 89.39\(\pm\)2.12 \\ & CP-VQA [47] & 83.49\(\pm\)0.99 & **80.69\(\pm\)1.30** & 84.96\(\pm\)1.14 & 87.18\(\pm\)2.18 & **83.16\(\pm\)1.09** & 94.20\(\pm\)2.15 \\ & **Ours** (\(\lambda=0.25\)) & **83.59\(\pm\)0.69** & 80.15\(\pm\)0.95 & **86.22\(\pm\)1.67** & **88.18\(\pm\)1.07** & 82.62\(\pm\)1.02 & 95.78\(\pm\)1.19 \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison of methods on the DME dataset with a common MVQA backbone. Accuracy and consistency are reported for all questions, as well as for the medically relevant sub-question categories: grade, whole, macula and region. Figure 4: Qualitative examples from the Introspect dataset using BAN as the backbone. Red siren symbols indicate inconsistent cases.
However, it is clear from these results that there is a set of \(\lambda\) values for which both accuracy and consistency improve. LI-MOD performance: We report that the finetuning of BERT on the subset of annotated relations from Introspect produced \(78.67\%\) accuracy in the NLI task. We analyze the performance of this model for entailment and report an AUC value of 0.86, which indicates good generalization capability considering that only \(\approx 2\%\) of the dataset was annotated with relations. In addition, the amount of overlap in the QA pairs between the train and validation sets of the Introspect dataset is only 1.12% for binary questions. This shows that our LI-MOD is generalizing to variations in questions as well as to new combinations of QA pairs. Fig. 7 shows the ROC curve for entailment and examples of LI-MOD's predictions. Some of the observed sources of Figure 5: Examples from the DME dataset and comparison of methods. Red siren symbols indicate inconsistent cases. DME is a disease that is staged into grades (0, 1 or 2), which depend on the number of visual pathological features of the retina. _Top_ and _middle:_ Although all methods correctly predict the answer to the first question, some inconsistencies appear when a necessary condition is false. _Bottom_: Only the None baseline produces an inconsistency. Note that SQuINT and CP-VQA's answers do not produce inconsistent pairs because both questions were answered incorrectly, and those answers ("2" and "yes") respect all known relations. Figure 6: Behavior of the accuracy and consistency as a function of \(\lambda\) with 95% confidence intervals. _Left:_ LXMERT trained on the Introspect dataset (5 models with random seeds for each value of \(\lambda\)). _Right:_ MVQA trained on the DME dataset (10 models with random seeds for each \(\lambda\)).
2305.15380
Sentiment Analysis Using Aligned Word Embeddings for Uralic Languages
In this paper, we present an approach for translating word embeddings from a majority language into 4 minority languages: Erzya, Moksha, Udmurt and Komi-Zyrian. Furthermore, we align these word embeddings and present a novel neural network model that is trained on English data to conduct sentiment analysis and then applied on endangered language data through the aligned word embeddings. To test our model, we annotated a small sentiment analysis corpus for the 4 endangered languages and Finnish. Our method reached at least 56\% accuracy for each endangered language. The models and the sentiment corpus will be released together with this paper. Our research shows that state-of-the-art neural models can be used with endangered languages with the only requirement being a dictionary between the endangered language and a majority language.
Khalid Alnajjar, Mika Hämäläinen, Jack Rueter
2023-05-24T17:40:20Z
http://arxiv.org/abs/2305.15380v1
# Sentiment Analysis Using Aligned Word Embeddings for Uralic Languages ###### Abstract In this paper, we present an approach for translating word embeddings from a majority language into 4 minority languages: Erzya, Moksha, Udmurt and Komi-Zyrian. Furthermore, we align these word embeddings and present a novel neural network model that is trained on English data to conduct sentiment analysis and then applied on endangered language data through the aligned word embeddings. To test our model, we annotated a small sentiment analysis corpus for the 4 endangered languages and Finnish. Our method reached at least 56% accuracy for each endangered language. The models and the sentiment corpus will be released together with this paper. Our research shows that state-of-the-art neural models can be used with endangered languages with the only requirement being a dictionary between the endangered language and a majority language. ## 1 Introduction Most of the languages spoken in the world are endangered to one degree or another. Being endangered sets some limitations on how modern NLP research can be done with such languages, given that many endangered languages do not have vast textual resources available online. Even with the resources that are available, there is a question about the quality of the data, resulting from a variety of factors such as the fluency of the author, the soundness of spelling and, on the lowest level, inconsistencies in character encoding (see Hamalainen, 2021). This paper focuses on the following Uralic languages: Erzya (myv), Moksha (mdf), Komi-Zyrian (kpv) and Udmurt (udm). UNESCO classifies these languages as definitely endangered (Moseley, 2010). In terms of NLP, these languages have FSTs (Rueter et al., 2020, 2021), Universal Dependencies treebanks (Partanen et al., 2018; Rueter and Tyers, 2018) (excluding Udmurt) and constraint grammars available in the Giella repositories (Moshagen et al., 2014). For some of the languages, there have also been efforts in employing neural models for disambiguation (Ens et al., 2019) and morphological tasks (Hamalainen et al., 2021). Out of these languages, only Erzya has several neural models available, such as machine translation models (Dale, 2022), a wav2vec model and a Stanza model (Qi et al., 2020). In this paper, we present a method for translating word embedding models from larger languages into the endangered languages in question. Furthermore, we fine-tune the models with language-specific text data, align them, and show results in a sentiment analysis task where no training data is provided in any of the endangered languages. We have made our data and models publicly available on Zenodo1. Footnote 1: [https://zenodo.org/record/7866456](https://zenodo.org/record/7866456) ## 2 Related work Apart from the work described earlier in the context of the endangered languages in question, there has been a lot of previous work on multilingual NLP where a model is trained in one language for sentence classification and then applied in the context of other languages. In this section, we describe some of those approaches together with sentiment analysis approaches. A recent paper demonstrates sentiment analysis on 100 languages (Yilmaz et al., 2021). The authors use RoBERTa-XLM to extract feature vectors. These are then used in training a bi-directional LSTM-based classifier model.
Another line of work (Liu and Chen, 2015) compares several different multilabel classification methods on the task of sentiment analysis, showing that RAkEL (Tsoumakas et al., 2010) gave the best performance on raw token input. A recent paper (Hamalainen et al., 2022) demonstrated promising results in French sentiment analysis with a model that was trained in English, Italian, Spanish and German. The approach relied on a multilingual BERT (Devlin et al., 2019). Ohman (2021) suggests that lexicon-based approaches, while viable for endangered languages, are not particularly suitable for sentiment analysis. In the context of cross-lingual NLP, there is work on POS tagging. For instance, Kim et al. (2017) propose a new model that does not require parallel corpora or other resources. The model uses a common BLSTM for knowledge transfer and another BLSTM for language-specific representations. It is trained using language-adversarial training and bidirectional language modeling as auxiliary objectives to capture both language-general and language-specific information. Another line of work by Xu et al. (2018) focuses on cross-lingual transfer of word embeddings, which aims to create mappings between words in different languages by learning transformation functions over the corresponding word embedding spaces. The proposed algorithm simultaneously optimizes transformation functions in both directions by using distributional matching and minimizing back-translation losses. This approach uses a neural network implementation to calculate the Sinkhorn distance, a distributional similarity measure, and optimizes objectives through back-propagation. For machine translation, Chen et al. (2022) demonstrate the importance of both multilingual pretraining and fine-tuning for effective cross-lingual transfer in zero-shot translation using a neural machine translation (NMT) model. The paper presents SixT+, a many-to-English NMT model that supports 100 source languages but is trained on a parallel dataset in only six languages. SixT+ initializes the decoder embedding and the full encoder with XLM-R large (Conneau et al., 2020) and trains the encoder and decoder layers using a two-stage training strategy. ## 3 Data We use two books, Suomi eilen ja nyt (_Finland yesterday and now_) by Hakkinen (1997) and Pavlik Morozov by Gubarev (1948), both of which are available in Finnish, Erzya, Moksha, Komi-Zyrian and Udmurt. The sentences of the books have been aligned across all the languages at the Research Unit for Volgaic Languages at the University of Turku. The size of the corpus for each language can be seen in Table 1. Out of the entire corpus, we annotate 35 negative sentences and 33 positive sentences for evaluation for Finnish. We use the alignment information to project this annotation to the rest of the languages as well and verify manually that the sentences express the same sentiment in each language. This forms our test corpus for sentiment analysis, which consists of altogether 68 sentiment-annotated sentences. Furthermore, we lemmatize all the texts using the FSTs provided in UralicNLP (Hamalainen, 2019). The corpus is lemmatized because we intend to translate and align a lemmatized word embeddings model. This also makes the overall approach more robust, given that covering the entire morphology of a language would require much larger corpora. ## 4 Word embeddings Word embeddings capture the semantic and syntactic links between words by constructing vector representations of words.
These vectors can be utilized to measure the semantic similarity between words, find analogous concepts, cluster words (Hamalainen and Alnajjar, 2019; Stekel et al., 2021) and more. In this work, we use English and Finnish as the big languages that facilitate aligning and classifying words and sentences for the endangered languages. English has an abundance of linguistic resources, whether as raw text or labeled data, while the endangered languages that we are working with have translation dictionaries into Finnish. For this reason, we use Finnish as the intermediate language that bridges these endangered languages with English resources. The English model that we utilize is trained on the English Wikipedia dump of February 2017 and Gigaword 5th edition2 (Fares et al., 2017). \begin{table} \begin{tabular}{|l|l|l|} \hline & tokens & sentences \\ \hline Finnish & 43k & 3.1k \\ \hline Erzya & 50k & 3.6k \\ \hline Moksha & 51k & 3.4k \\ \hline Komi-Zyrian & 50k & 3.3k \\ \hline Udmurt & 53k & 3.6k \\ \hline \end{tabular} \end{table} Table 1: The corpus size for each language For Finnish, we used recent word embeddings trained by the Language Bank of Finland (2022). These embeddings have been trained on several Finnish newspapers. Both of these models have been trained on lemmatized text. Footnote 2: [http://vectors.nlpl.eu/repository/20/17.zip](http://vectors.nlpl.eu/repository/20/17.zip) The English word vectors have a dimension size of 300, while the Finnish word vectors have a dimension size of 100. In order to make the dimension sizes of the two sets of embeddings compatible, dimensionality reduction is applied to the English embeddings using principal component analysis (PCA) (Tipping and Bishop, 1999). This process reduces the dimensionality of the English embeddings to 100, allowing them to be compared and analyzed alongside the Finnish embeddings. ### Creation of embeddings We aim to create word embeddings for the endangered languages, which currently lack pre-existing embeddings. We use dictionaries from GiellaLT3, which we augment using graph-based methods to predict new translations through the Ve'rdd4 platform (Alnajjar et al., 2022, 2021). We present the number of dictionary translations from each endangered language to Finnish that we obtained from the base dictionaries and predictions in Table 2. Footnote 3: [https://github.com/giellalt](https://github.com/giellalt) Footnote 4: [https://akusanat.com/verdd/](https://akusanat.com/verdd/) To create embeddings for the endangered languages, we adopt a method of cloning the Finnish embeddings and substituting each Finnish lemma with its corresponding translation in the endangered language. Where translations were absent, we omitted the word vector. The resulting embeddings consist of 7,908, 10,338, 7,535, and 9,505 word vectors for kpv, mdf, myv, and udm, respectively. The lower word coverage can be attributed to multi-word expressions that are present in the dictionaries but not in the embeddings. In the next step of our study, we fine-tuned the word embeddings for both Finnish and the endangered languages by using the two books as additional data sources. This involved expanding the vocabulary of each embeddings model whenever a new word was encountered in the data. We also adjusted the embedding weights based on the co-occurrences of words in the text, using a window size of 5 and a minimum count of 5 for a word to be considered in the vocabulary. After completing this process, the vocabulary sizes of the endangered language embeddings were 10,396, 11,877, 9,030, and 11,080, in the same order as mentioned above.
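The cloning and dimensionality-reduction steps just described can be sketched compactly; the following is our own illustration using gensim and scikit-learn (the variable names and the exact filtering of multi-word entries are assumptions, not the authors' code):

```python
import numpy as np
from sklearn.decomposition import PCA
from gensim.models import KeyedVectors

def reduce_english_vectors(matrix: np.ndarray, dim: int = 100) -> np.ndarray:
    """PCA-reduce the 300-d English embedding matrix to 100 dimensions
    so it becomes comparable with the Finnish embeddings."""
    return PCA(n_components=dim).fit_transform(matrix)

def clone_embeddings(fin_kv: KeyedVectors, fin2end: dict) -> KeyedVectors:
    """Clone Finnish vectors onto their endangered-language translations,
    omitting lemmas without a translation and multi-word expressions."""
    words, vectors = [], []
    for fin_lemma, translation in fin2end.items():
        if fin_lemma in fin_kv.key_to_index and " " not in translation:
            words.append(translation)
            vectors.append(fin_kv[fin_lemma])
    cloned = KeyedVectors(vector_size=fin_kv.vector_size)
    cloned.add_vectors(words, np.asarray(vectors))
    return cloned
```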
\begin{table} \begin{tabular}{|l|l|l|l|} \hline & Translations & Predictions & Total \\ \hline kpv & 10983 & 14421 & 25404 \\ \hline mdf & 36235 & 3903 & 40138 \\ \hline myv & 18056 & 5018 & 23074 \\ \hline udm & 36502 & 6966 & 43468 \\ \hline \end{tabular} \end{table} Table 2: Number of translations and predictions from the source languages to Finnish ### Alignment of embeddings Our goal here is to align the Finnish word embeddings with the English ones, followed by aligning the embeddings of the endangered languages to the Finnish embeddings, in a supervised manner. This was achieved by creating alignment dictionaries and aligning the embedding spaces together, similarly to Alnajjar (2021). To align the Finnish embeddings with English, we used the Fin-Eng dictionary by Ylonen (2022), which is based on the March 2023 English Wiktionary dump. We also used the Finnish-English dictionaries provided by MUSE (Conneau et al., 2017). Regarding the endangered languages, we use the XML dictionaries to align them with Finnish. We set aside 20% of the Wiktionary and XML data for testing the alignments. One thing that we have noticed is the lack of the words "no" and "not" in the English embeddings due to stopword removal. To address this, we appended a translation from "not" to "nt" in the Finnish-English alignment data used in the training stage. Whenever the text contained these words, they were automatically mapped to "nt" in the following steps of our research. We followed the approach described by MUSE (Conneau et al., 2017) to align all the embeddings, with 20 iterations of refinement to align Finnish with English and 5 iterations to align all the other languages to Finnish. ## 5 Sentence embeddings Word embeddings represent the meaning of a single word, whereas sentence embeddings represent the meaning of an entire sentence or document. Sentence embeddings are capable of capturing more of the context and excel at tasks that call for comprehension of the meaning of a whole text, such as sentiment analysis. Hence, we build sentence embeddings for English that are based on the English word embeddings. The sentence embeddings are created by averaging the word embeddings of a given sentence and subsequently feeding them to two fully-connected feed-forward layers, thereby constructing a Deep Averaging Network (DAN). The sentence embeddings are trained on the STS Benchmark (Cer et al., 2017) using SBERT, a method for sentence embeddings proposed by Reimers and Gurevych (2019). ## 6 Sentiment analysis We create a sentiment classifier that takes in the sentence embeddings and predicts a sentiment polarity label. For training the sentiment analysis model, we use the Stanford Sentiment Treebank (Socher et al., 2013), the Amazon Reviews Dataset (McAuley and Leskovec, 2013) and the Yelp Dataset5. These datasets are available in English and we use their sentiment annotations (positive-negative) to train our model. Footnote 5: [https://www.yelp.com/dataset](https://www.yelp.com/dataset) The sentiment classifier is constructed as a three-layer fully-connected network, wherein the hidden layers are comprised of 300 neurons each. In order to mitigate overfitting, a dropout operation (Srivastava et al., 2014) is performed prior to the final classification layer.
The model consists of 121,202 trainable parameters in total, and is trained over the course of three epochs. ## 7 Results In this section, we show the results of the sentiment classification model on the in-domain, English-language train splits of the sentiment corpora we used to train the model. Furthermore, we show the results of the sentiment classification model when applied to our own annotated data for the 4 endangered Uralic languages in question and Finnish. These results can be seen in Table 3. All in all, our model performs relatively well. The accuracy for Finnish is almost as high as it is for English, despite the absence of any Finnish sentiment-annotated training data. This means that our approach can achieve rather good results when there is a lot of translation data available between the two languages. The results drop for the endangered languages, but we find the 69% accuracy for Erzya to be quite formidable; the 56% result for Komi-Zyrian, however, leaves some room for improvement. ## 8 Conclusions In this paper, we outlined a method for translating word embeddings from a majority language, Finnish, to four minority languages - Erzya, Moksha, Udmurt, and Komi-Zyrian. The word embeddings were aligned and a new neural network model was introduced. This model was trained using English data to carry out sentiment analysis and was then applied to data in the endangered languages using the aligned word embeddings. We built an aligned sentiment analysis corpus for the four endangered languages and Finnish and used it to test our model. The results were promising, and our study demonstrated that even the latest neural models can be utilized with endangered languages if a dictionary between the endangered language and a larger language is available. ## Acknowledgments This research is supported by FIN-CLARIN and the Academy of Finland (grant 345610 _Kielivarojen ja kieliteknologian tutkimusinfrastruktuuri_). \begin{table} \begin{tabular}{c c c c c c} \hline \hline **Language** & **Label** & **Precision** & **Recall** & **F1-Score** & **Accuracy** \\ \hline \multirow{2}{*}{eng} & neg & 0.77 & 0.76 & 0.76 & \multirow{2}{*}{0.76} \\ & pos & 0.75 & 0.76 & 0.76 & \\ \hline \multirow{2}{*}{fin} & neg & 0.77 & 0.75 & 0.76 & \multirow{2}{*}{0.75} \\ & pos & 0.73 & 0.75 & 0.74 & \\ \hline \multirow{2}{*}{kpv} & neg & 0.57 & 0.57 & 0.57 & \multirow{2}{*}{0.56} \\ & pos & 0.55 & 0.55 & 0.55 & \\ \hline \multirow{2}{*}{mdf} & neg & 0.63 & 0.65 & 0.64 & \multirow{2}{*}{0.63} \\ & pos & 0.64 & 0.62 & 0.63 & \\ \hline \multirow{2}{*}{myv} & neg & 0.71 & 0.69 & 0.70 & \multirow{2}{*}{0.69} \\ & pos & 0.67 & 0.69 & 0.68 & \\ \hline \multirow{2}{*}{udm} & neg & 0.69 & 0.63 & 0.66 & \multirow{2}{*}{0.63} \\ & pos & 0.58 & 0.63 & 0.60 & \\ \hline \hline \end{tabular} \end{table} Table 3: Precision, recall, F1-score and accuracy for each language and label
2302.11141
GASP -- A Genetic Algorithm for State Preparation
The efficient preparation of quantum states is an important step in the execution of many quantum algorithms. In the noisy intermediate-scale quantum (NISQ) computing era, this is a significant challenge given that quantum resources are scarce and typically only low-depth quantum circuits can be implemented on physical devices. We present a genetic algorithm for state preparation (GASP) which generates relatively low-depth quantum circuits for initialising a quantum computer in a specified quantum state. The method uses a basis set of R_x, R_y, R_z, and CNOT gates and a genetic algorithm to systematically generate circuits to synthesize the target state to the required fidelity. GASP can produce more efficient circuits of a given accuracy, with lower depth and gate counts, than other methods. This variability of the required accuracy facilitates overall higher accuracy on implementation, as error accumulation in high-depth circuits can be avoided. We directly compare the method to the state initialisation technique based on exact synthesis implemented in IBM Qiskit, both simulated with noise and implemented on physical IBM Quantum devices. Results achieved by GASP outperform Qiskit's exact general circuit synthesis method on a variety of states, such as Gaussian states and W-states, and consistently show that the method reduces the number of gates required for the quantum circuits to generate these quantum states to the required accuracy.
Floyd M. Creevey, Charles D. Hill, Lloyd C. L. Hollenberg
2023-02-22T04:41:01Z
http://arxiv.org/abs/2302.11141v1
# GASP - A Genetic Algorithm for State Preparation ###### Abstract The efficient preparation of quantum states is an important step in the execution of many quantum algorithms. In the noisy intermediate-scale quantum (NISQ) computing era, this is a significant challenge given that quantum resources are scarce and typically only low-depth quantum circuits can be implemented on physical devices. We present a genetic algorithm for state preparation (GASP) which generates relatively low-depth quantum circuits for initialising a quantum computer in a specified quantum state. The method uses a basis set of \(R_{x}\), \(R_{y}\), \(R_{z}\), and CNOT gates and a genetic algorithm to systematically generate circuits to synthesize the target state to the required fidelity. GASP can produce more efficient circuits of a given accuracy, with lower depth and gate counts, than other methods. This variability of the required accuracy facilitates overall higher accuracy on implementation, as error accumulation in high-depth circuits can be avoided. We directly compare the method to the state initialisation technique based on exact synthesis implemented in IBM Qiskit, both simulated with noise and implemented on physical IBM Quantum devices. Results achieved by GASP outperform Qiskit's exact general circuit synthesis method on a variety of states, such as Gaussian states and W-states, and consistently show that the method reduces the number of gates required for the quantum circuits to generate these quantum states to the required accuracy. quantum computing, genetic algorithm, state preparation ## I Introduction Quantum state preparation is key in many applications that require data input, such as finance [1; 2; 3], chemistry [4; 5], bioinformatics [6], machine learning [7], and optimisation [8]. The ability to generate arbitrary quantum states efficiently and effectively, with high accuracy and low depth and gate count, will therefore be important in the future applications of quantum computing. Existing state preparation techniques [9; 10] typically produce lengthy circuits with many qubit operations, or quantum gates, compromising their implementation on near-term hardware. In this paper, we present a genetic algorithm for state preparation (GASP) which creates circuits for state preparation to a specified accuracy and depth in an evolutionary framework. This work builds on a relatively old idea: the application of genetic algorithms to quantum circuit evolution [11]. For benchmarking purposes, of the methods that produce circuits for exact state preparation [9; 12; 13], we focus here on the method by Shende, Bullock, and Markov (SBM) [10], which has been encoded in the Qiskit library [14]. The SBM approach produces an \(n\)-qubit arbitrary state using a circuit containing no more than \(2^{n+1}-2n\) CNOT gates. We benchmark the GASP method against Qiskit [14] for Gaussian states and W-states in the context of simulation with gate noise and implementation on physical devices. Because the GASP circuits have much lower depth by design, the prospects for implementation on physical Noisy Intermediate-Scale Quantum (NISQ) devices are better, as we will show by explicit examples. The structure of the paper is as follows. Section II will give a brief summary of genetic algorithms. Section III will describe the presented method in detail. Section IV will present the results, and Section V our conclusions and potential future work.
## II Genetic Algorithms A genetic algorithm is a classical optimisation technique that aims to mimic the process of biological evolution [15]. The basic structure of a genetic algorithm is to first establish a population of individuals to evolve. This population can consist of many individuals or only a single individual. Generic genetic algorithms usually have many individuals and use crossover; however, if the population has only a single individual, it is asexual and only uses mutation. Each individual is an attempted solution to the given problem and is defined by a 'chromosome', where each parameter in the chromosome represents a 'gene'. This population is then subject to an iterative process where individuals are selected by some selection criterion (rank selection, roulette wheel selection, etc.), and the selected individuals are bred together and/or mutated. Each iteration of the genetic algorithm is referred to as a 'generation'. For each generation, the fitness of each individual in the population is evaluated by some objective fitness function, and the fittest individuals have the highest probability of being bred together and/or mutated [16]. These individuals then form the next generation of the algorithm, in an effort to iteratively increase the maximum fitness. Generally, the algorithm terminates when a specific fitness is achieved, or when a given number of iterations have run without achieving it. This method of optimisation allows solutions to be pulled out of local optima and moved towards global optima. The remainder of this section outlines the various aspects of a genetic algorithm and how these aspects may be applied to arbitrary quantum state synthesis. _Initial Population/Individual_: In a sexual genetic algorithm, the initial population is randomly generated with a specified population size parameter. Ideally, there is a considerable amount of divergence between different individuals, to allow a broad area of the problem search space to be covered. In an asexual genetic algorithm, the initial population is only a single individual. Each individual in the population is instantiated with a chromosome defining a set of genes; the representation of these is subject to the problem being solved. Examples of possible chromosome representations include binary representations and list representations. _Crossover_: Crossover is the process where two parents are bred together, producing a child that contains one half of its genomic information from one parent and one half from the other. There are various methods of crossover in genetic algorithms, the main two being single-point crossover and k-point crossover. The crossover point is the position in the chromosome 'list' at which the chromosome is split. Single-point crossover is where a single crossover point is chosen and the child's genomic information is taken from one parent before the crossover point and from the other parent after the crossover point. k-point crossover is a slightly more generalised version of this process, where there are k crossover points and the genomic information of the child ends up being \(k+1\) sections alternating between parents [17]. _Mutation_: Mutation is the process where individual genes in an individual are randomly mutated, with some probability \(p\), to increase genetic diversity in the population. In sexual genetic algorithms, the mutation rate is set low, as high mutation rates tend towards a primitive random search.
In asexual genetic algorithms, the mutation rate is generally higher, as it is the only means by which to introduce genetic diversity into the individual. The mutation operation gives the genetic algorithm increased genetic diversity, enlarging the search space and allowing the algorithm to potentially escape local minima. _Selection_: Selection is the process by which the fittest individuals are chosen for crossover. The procedure generally involves assigning a probability of selection to each individual based on its fitness. There are many selection methods; common methods include roulette wheel selection, rank selection, and tournament selection. In _roulette wheel selection_, each individual is given a probability of being selected dependent on its fitness. This allows the fittest individuals to have the highest probability of passing genes on to the next generation. It also allows lucky unfit individuals to pass their genes on to the next generation, increasing genetic diversity. The advantage of this method is that no genetic material is conserved. _Rank selection_ sorts the individuals by fitness and chooses the fittest individuals for crossover. The advantage of this method is that it can converge more quickly to an optimal solution by virtue of only taking the fittest individuals. _Tournament selection_ randomly pairs individuals and selects the individual with the higher fitness of the two for crossover. This is an intermediary between roulette wheel and rank selection, allowing potentially faster convergence while maintaining a potentially higher level of genetic diversity. ## III Genetic algorithm for state preparation (GASP) In the quantum circuit space, the introduction of genetic algorithms has focused largely on producing circuits to generate specific unitaries [11; 17; 18; 19; 20; 21]. The flowchart describing GASP is depicted in Figure 1. In this work, the genetic algorithm is designed to evolve quantum circuits for the task of state preparation. In this context, an individual is a quantum circuit, and a gene is a single quantum logic operation (gate) in the given quantum circuit. A gene is represented as \[\mathrm{gene}=[q_{\mathrm{t}}^{i},\,G^{i},q_{\mathrm{c}}^{i},\theta^{i}],\] and an individual is represented as \[P_{i}=[\mathrm{gene}_{1},\mathrm{gene}_{2},\ldots,\mathrm{gene}_{\mathrm{n}}],\] where each gene (\(i=0\ldots n\)) represents a gate application, \(q_{t}\) is the target qubit, \(G\) is the chosen gate type from \(\{R_{x},R_{y},R_{z},\mathrm{CNOT}\}\), \(q_{c}\) is the control qubit, if any, and \(\theta\) is the rotation angle of the chosen gate. For a single-qubit gate, \(q_{c}\) is set to None. For a two-qubit gate, \(q_{c}\) is set to another qubit in the circuit and \(\theta\) is set to None, allowing the classical optimisation to tune only the single-qubit rotation angles. The number of genes in an individual depends on how many gates the circuit contains. As such, a mutation in the genetic algorithm is the changing of certain gates with a given probability. _Crossover_: The crossover method used in GASP is a simple 1-point crossover with one half from each parent, applied to every generation (i.e., the crossover point is 50%). In the quantum circuit context, this results in a new circuit containing half the gates from the first individual and half the gates from the second individual. _Mutation_: Mutation in GASP is applied every generation.
The default probability for mutation is 5%; however, this parameter can be varied depending on the problem to be solved. The basic mutation of a quantum circuit changes certain genes in the individual to other genes, resulting in different gates for the quantum circuit and changing the resultant state vector. _Fitness_: Given the target state vector, \(|\psi_{\mathrm{target}}\rangle\), and the resultant state vector of the current individual, \(|\psi(\vec{\theta})_{P_{i}}\rangle\), GASP searches for the individual whose circuit produces the highest fitness. The fitness is calculated by the cost function \[f(|\psi(\vec{\theta})_{P_{i}}\rangle)=|\langle\psi_{\rm target}|\psi(\vec{\theta})_{P_{i}}\rangle|^{2},\] the norm squared of the inner product between the target state vector and the individual's state vector, which measures the similarity between the two. The fitness of an individual is therefore always a value between 0 and 1. _Selection_: The method of selection used in GASP is roulette wheel selection. This allows genetic diversity to be maintained through the generations of the algorithm, while also increasing fitness. This is done by initially summing the total fitness of the entire population of individuals, \(f_{T}\), then giving each individual a normalised fitness relative to the fitness of the entire population, \(p(|\psi(\vec{\theta})_{P}\rangle)\), i.e., \[f_{T}=\sum_{i}f(|\psi(\vec{\theta})_{P_{i}}\rangle),\] \[p(|\psi(\vec{\theta})_{P_{i}}\rangle)=\frac{f(|\psi(\vec{\theta})_{P_{i}}\rangle)}{f_{T}}.\] The individuals in the next generation are then selected based on their respective fitnesses. _Algorithm_: For a chosen \(|\psi_{\rm target}\rangle\), a population of individuals is produced, each with a certain number of 'genes' based on the entanglement of the target state. The individual is the quantum circuit. Each 'gene' is one of the four gates identified in the universal set listed in Section III, \(\{R_{x},R_{y},R_{z},\rm{CNOT}\}\). The fitness of each individual is then assessed with the fitness function. Crossover is then applied to the population to produce new individuals, doubling the population. Each individual in the population is then 'mutated' with a probability of 5%. SLSQP optimisation [22] is then run on each individual in the population to find the optimal \(\theta\)'s for that individual. The population is then subject to roulette wheel selection, to select the individuals for the next generation, halving the population back down to its original size. This process is then repeated, iteratively increasing the best fitness of the population. If the desired fitness is not achieved within _maxiter_ (1000 in the presented results) iterations of the last increase in fitness, the number of genes for each individual is increased by one, and the process restarts. Once a desired target state vector \(|\psi_{\rm target}\rangle\) (which also determines the number of qubits) is selected, GASP can be broken down into the following steps: 1. The initial population \(\{P\}\) of individuals, \(P_{i}\), is created, which dictates the trial state vectors \(|\psi(\vec{\theta})_{P_{i}}\rangle\), with the appropriate number of qubits and number of genes for the given state vector. 2. The fitness of the state vectors determined by each individual in the population, \(f(|\psi(\vec{\theta})_{P_{i}}\rangle)=|\langle\psi_{\rm target}|\psi(\vec{\theta})_{P_{i}}\rangle|^{2}\), is assessed. 3. Crossover is applied to the population, generating \(|\psi(\vec{\theta})_{\rm new}\rangle\). 4. The entire population is mutated with probability \(p=5\%\). 5. Classical optimisation is run on each mutated individual to obtain the optimal \(\vec{\theta}\) values between 0 and \(2\pi\), which achieve the highest fitness for its generated circuit. 6. Roulette wheel selection is applied to the population, to select the individuals for the next generation based on their assessed fitness. 7. Steps 2-6 are repeated until the desired fitness is achieved or \(maxiter\) (a parameter set prior to the start of the algorithm) iterations have passed since the last increase in fitness. 8. If \(maxiter\) iterations pass since the last increase in fitness, the number of genes is increased by 1 and the algorithm returns to step 1. Figure 1: Overview of the GASP approach, given a desired target state vector \(|\psi_{\rm target}\rangle\). **(i)** Create the initial population \(\{P\}\) of individuals, \(P_{i}\), which are each a quantum circuit that generates the population states \(|\psi(\vec{\theta})_{P}\rangle\), with the appropriate number of qubits and number of genes for the given state vector. **(ii)** Assess the fitness of the state vectors determined by each individual in the population: \(f(|\psi(\vec{\theta})_{P_{i}}\rangle)=|\langle\psi_{\rm target}|\psi(\vec{\theta})_{P_{i}}\rangle|^{2}\). **(iii)** Apply crossover to the population, producing \(|\psi(\vec{\theta})_{\rm new}\rangle\). **(iv)** Mutate the entire population with probability \(p=5\%\). **(v)** Run classical optimisation on each mutated individual to obtain the optimal \(\theta\) values between 0 and \(2\pi\), to achieve the highest fitness for their generated circuit. **(vi)** Apply roulette wheel selection to the population, to select the individuals for the next generation based on their assessed fitness. **(vii)** Repeat until the desired fitness is achieved or _maxiter_ iterations have passed since the last increase in fitness. **(viii)** If _maxiter_ iterations pass since the last increase in fitness, increase the number of genes by 1 and return to **(i)**. In GASP, the genetic algorithm is utilised for the coarse-grained optimisation of the quantum circuit structure, while a much better fine-grained algorithm, SLSQP, is used for determining the optimal angles for each circuit structure generated. This allows the optimal, or at least close to optimal, fitness for each generated circuit structure to be achieved. ## IV Results - GASP vs. Qiskit ### Gaussian States Gaussian states have relatively low entanglement and are defined as \[|\psi_{G}\rangle=\frac{1}{\sqrt{2^{n}}}\sum_{x=0}^{2^{n}-1}g(x)|x\rangle,\] where \(g(x)=\frac{1}{\sigma\sqrt{2\pi}}\exp{(-\frac{1}{2}\frac{(x-\mu)^{2}}{\sigma^{2}})}\), \(\mu\) is the mean, and \(\sigma\) is the standard deviation of the desired Gaussian. For the purposes of this paper, we let \(\mu=\frac{2^{n}}{2}\) and \(\sigma=\frac{2^{n}}{8}\). In Figure 2(a) we show an example GASP circuit, comparing the depth and gate count with that produced by Qiskit's initialise function. A comparison of the states produced by Qiskit's initialise function and GASP, in the absence of noise, is shown in Figure 2(b) for a 6 qubit Gaussian state.
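To make the gene encoding and fitness evaluation described in Section III concrete, the following sketch builds a circuit from an individual and scores it against a target state vector. It is a minimal illustration only: the function names and the toy two-qubit individual are our own, and only the gene layout \([q_{t},G,q_{c},\theta]\) and the fitness \(f=|\langle\psi_{\rm target}|\psi(\vec{\theta})\rangle|^{2}\) come from the paper.

```python
import numpy as np
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

def circuit_from_individual(individual, n_qubits):
    """Translate a list of genes [q_t, gate, q_c, theta] into a circuit."""
    qc = QuantumCircuit(n_qubits)
    for q_t, gate, q_c, theta in individual:
        if gate == "cx":                     # CNOT gene: q_c controls q_t
            qc.cx(q_c, q_t)
        else:                                # rotation gene: rx / ry / rz
            getattr(qc, gate)(theta, q_t)
    return qc

def fitness(individual, target, n_qubits):
    """f = |<psi_target|psi(theta)>|^2, the norm-squared overlap."""
    psi = Statevector.from_instruction(circuit_from_individual(individual, n_qubits))
    return np.abs(np.vdot(target, psi.data)) ** 2

# A toy 2-qubit individual: one ry rotation followed by a CNOT.
ind = [(0, "ry", None, 1.2), (1, "cx", 0, None)]
target = np.array([1, 0, 0, 1]) / np.sqrt(2)   # Bell-like target state
print(fitness(ind, target, 2))                 # ~0.97 for this individual
```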
In the zero-noise regime, the data shows that the Qiskit method produces the target exactly, as expected, and GASP is within the specified fidelity tolerance (99% in this case). For the same 6 qubit Gaussian, a comparison of the states produced by Qiskit's initialise function and GASP, simulated in the presence of noise (modelled from _ibmq_guadalupe_), is shown in Figure 2(c); 16384 shots were used. It can be seen that GASP performs much better in reproducing the desired target state vector in the presence of noise, though not perfectly. This shows that a slight reduction in the accuracy of the circuit, in exchange for a large reduction in circuit depth, improves the resulting output state in the presence of noise. Because GASP produces circuits that are shorter by orders of magnitude, more realistic circuits can be implemented on NISQ-era hardware. Figure 2(d) shows the comparison between GASP and Qiskit for Gaussian states, in terms of gate scaling vs. the number of qubits. Figure 2(e) shows the improved performance of the circuits generated by GASP relative to Qiskit when simulated with noise and run on IBM's _ibmq_guadalupe_ machine, averaged over 10 tests as the number of qubits varied from 2 to 10 (noting that this measure does not include the phase information). ### W-states W-states generally have higher entanglement and are defined as \[|\psi_{W}\rangle=\frac{1}{\sqrt{n}}(|100...0\rangle+|010...0\rangle+...+|000...1\rangle),\] where \(n\) is the number of qubits. A comparison of the states produced by Qiskit's initialise function and GASP, with no noise, is shown in Figure 3(b). Figure 3(c) shows the comparison between Qiskit and GASP (99% fidelity) resultant distributions in the presence of noise for a 6 qubit W-state; the noise model is that of IBM's _ibmq_guadalupe_ machine, and 16384 shots were used. The grey dashed line is the exact distribution without noise, the orange bars are GASP, and the blue bars are Qiskit. Figure 3(d) shows the comparison between Qiskit and GASP (99% fidelity) for W-states as the number of qubits varied from 2 to 10. Figure 3(e) shows the performance of the circuits generated by GASP and Qiskit when simulated with noise, averaged over 10 tests as the number of qubits varied from 2 to 10, noting that this measure does not include the phase information. As can be seen in the results shown here, GASP consistently outperforms Qiskit in the number of total gates and the number of CNOT gates required, by more than two orders of magnitude in the higher qubit tests. It seems that GASP has a lower polynomial complexity in the number of gates required compared to Qiskit's initialisation method. Figure 2: **(a)** Comparison between sample solution circuits generated by GASP and Qiskit for a 6 qubit Gaussian. GASP produced a circuit with a depth of 13 and 35 gates; Qiskit produced a circuit with a depth of 120 and 125 gates. **Note:** these are circuits before compilation on real hardware. **(b)** Comparison between Qiskit and GASP (99% fidelity) resultant distributions with no noise for a 6 qubit Gaussian state. The grey dashed line is the exact distribution, the orange bars are GASP, and the blue bars are Qiskit. **(c)** Comparison between Qiskit and GASP (99% fidelity) resultant distributions in the presence of noise for a 6 qubit Gaussian state. The noise model is that of IBM's _ibmq_guadalupe_ machine. 16384 shots were used. The grey dashed line is the exact distribution without noise, the orange bars are GASP, and the blue bars are Qiskit. **(d)** Gate comparison between Qiskit and GASP (99% fidelity) for Gaussian states as the number of qubits varied from 2 to 10. **(e)** Comparison between GASP (99% fidelity) and Qiskit for Gaussian states as the number of qubits varied from 2 to 10. Lines represent simulations with noise; crosses represent results from IBM's _ibmq_guadalupe_ machine. 16384 shots were used. The noise model is that of IBM's _ibmq_guadalupe_ machine. Figure 3: **(a)** Comparison between sample solution circuits generated by GASP and Qiskit for a 6 qubit W state. GASP produced a circuit with a depth of 22 and 59 gates, and Qiskit produced a circuit with a depth of 120 and 125 gates. **Note:** these are circuits before compilation on real hardware. **(b)** Comparison between Qiskit and GASP (99% fidelity) resultant distributions with no noise for a 6 qubit W state. The grey dashed line is the exact distribution, the orange bars are GASP, and the blue bars are Qiskit. **(c)** Comparison between Qiskit and GASP (99% fidelity) resultant distributions in the presence of noise for a 6 qubit W state. The noise model is that of IBM's _ibmq_guadalupe_ machine. 16384 shots were used. The grey dashed line is the exact distribution without noise, the orange bars are GASP, and the blue bars are Qiskit. **(d)** Comparison between Qiskit and GASP (99% fidelity) for W-states as the number of qubits varied from 2 to 10. **(e)** Comparison between GASP (99% fidelity) and Qiskit for W-states as the number of qubits varied from 2 to 10. Lines represent simulations with noise; crosses represent results from IBM's _ibmq_guadalupe_ machine. 16384 shots were used. The noise model is that of IBM's _ibmq_guadalupe_ machine. It should be noted that the number of gates required for the Gaussian state generation is fewer than that of the W-states. This is likely due to Gaussian states being less entangled than W-states. It can be seen that the circuits produced by GASP have higher noise robustness than circuits produced by Qiskit's initialisation. However, the circuits produced by both techniques at high numbers of qubits have so many gates that the noise overwhelms the ability of the circuit to produce the desired state. It should also be noted that at useful fidelities (those above 50%), GASP outperforms Qiskit's initialisation. ## V Conclusion In this paper, we have proposed and demonstrated a state preparation method based on a genetic evolutionary approach. Benchmarking GASP against Qiskit's initialisation method, the results show that in the noisy regime relevant to implementation on actual hardware, GASP significantly outperforms the exact approach through superior circuit compression, by more than an order of magnitude. As GASP is a stochastic algorithm, there is an increase in run time over the deterministic algorithms and an introduced uncertainty in the ability to produce a solution. However, the significant reduction in both the total gate count and the number of required CNOT gates may outweigh these drawbacks for the application of GASP to the initialisation of quantum states with circuit lengths feasible on NISQ-era hardware. Note: During the preparation of this work, a recent paper by Rindell et al. [23] studying state preparation using a genetic algorithm was posted on the arXiv.
Their method is similar to the GASP approach presented here, though it differs in that they used a Fast Non-dominated Sorting Genetic Algorithm implemented in the DEAP Python package [24], and a different gate set of \(\{R_{z}(\theta),X,\sqrt{X},\text{CNOT}\}\). They also do not use classical optimisation, instead opting to adjust \(\theta\) by adding a value drawn from a selected Gaussian distribution. Their method was applied to the production of Haar-random states of up to 5 qubits, demonstrating similar fidelity improvements in the presence of noise. ## Acknowledgements This research was supported by the University of Melbourne through the establishment of the IBM Quantum Network Hub at the University. FMC is supported by an Australian Government Research Training Program Scholarship. This research was supported by The University of Melbourne's Research Computing Services and the Petascale Campus Initiative.
2303.06714
BCSSN: Bi-direction Compact Spatial Separable Network for Collision Avoidance in Autonomous Driving
Autonomous driving has been an active area of research and development, with various strategies being explored for decision-making in autonomous vehicles. Rule-based systems, decision trees, Markov decision processes, and Bayesian networks have been some of the popular methods used to tackle the complexities of traffic conditions and avoid collisions. However, with the emergence of deep learning, many researchers have turned towards CNN-based methods to improve the performance of collision avoidance. Despite the promising results achieved by some CNN-based methods, the failure to establish correlations between sequential images often leads to more collisions. In this paper, we propose a CNN-based method that overcomes this limitation by establishing feature correlations between regions in sequential images using variants of attention. Our method combines the advantages of CNN in capturing regional features with a bi-directional LSTM to enhance the relationship between different local areas. Additionally, we use an encoder to improve computational efficiency. Our method takes "Bird's Eye View" graphs generated from camera and LiDAR sensors as input and simulates the position (x, y) and head offset angle (Yaw) to generate future trajectories. Experimental results demonstrate that our proposed method outperforms existing vision-based strategies, achieving an average of only 3.7 collisions per 1000 miles of driving distance on the L5kit test set. This significantly improves the success rate of collision avoidance and provides a promising solution for autonomous driving.
Haichuan Li, Liguo Zhou, Alois Knoll
2023-03-12T17:35:57Z
http://arxiv.org/abs/2303.06714v1
# BCSSN: Bi-direction Compact Spatial Separable Network for Collision Avoidance in Autonomous Driving ###### Abstract Autonomous driving has been an active area of research and development, with various strategies being explored for decision-making in autonomous vehicles. Rule-based systems, decision trees, Markov decision processes, and Bayesian networks have been some of the popular methods used to tackle the complexities of traffic conditions and avoid collisions. However, with the emergence of deep learning, many researchers have turned towards CNN-based methods to improve the performance of collision avoidance. Despite the promising results achieved by some CNN-based methods, the failure to establish correlations between sequential images often leads to more collisions. In this paper, we propose a CNN-based method that overcomes this limitation by establishing feature correlations between regions in sequential images using variants of attention. Our method combines the advantages of CNN in capturing regional features with a bi-directional LSTM to enhance the relationship between different local areas. Additionally, we use an encoder to improve computational efficiency. Our method takes "Bird's Eye View" graphs generated from camera and LiDAR sensors as input and simulates the position (x, y) and head offset angle (Yaw) to generate future trajectories. Experimental results demonstrate that our proposed method outperforms existing vision-based strategies, achieving an average of only 3.7 collisions per 1000 miles of driving distance on the L5kit test set. This significantly improves the success rate of collision avoidance and provides a promising solution for autonomous driving. ## 1 Introduction Trajectory prediction [1, 2, 3] is a critical task in autonomous driving for collision avoidance, as it requires the autonomous vehicle (AV) to accurately estimate the future motion of surrounding objects and plan its own motion accordingly, as shown in Fig. 1. This task is challenging due to the uncertainty of the surrounding environment and the complexity of the motion patterns of other vehicles, pedestrians, and cyclists. To improve the accuracy of trajectory prediction, various techniques have been proposed. For instance, multi-modal prediction [4, 5] and uncertainty estimation [6, 7] have been utilized to consider multiple possible future trajectories and their probabilities. Attention mechanisms [8, 9] and graph-based models [5, 10] have been proposed to capture the importance and relationships among different objects. Convolutional Neural Networks (CNN) have made outstanding contributions to vision tasks and have been widely applied to traffic scenes due to their excellent regional feature extraction capabilities. Based on this advantage, a CNN obtains the local feature information of sequentially related frames in our network. Additionally, since the motion trajectory is planned for the AV, each position point has a sequential relation (the later points depend on the former points), and it is necessary to establish the relationship between the local features of the image obtained by the CNN. To address this, some strategies use CNN plus RNN to deal with sequential graphs as input, such as STDN [11], CRNN [12], LSTM-CNN [13], and RCNN [14]. Figure 1: Different collision cases during driving
Although the above strategies have performed well in a large number of vision tasks, their performance is still far inferior to similarly sized convolutional neural network counterparts, such as EfficientNets [15] and RepVGG [16] in Fig. 3. We believe this is due to the following aspects. First, the large differences between the sequential tasks of NLP and the image tasks of CV are ignored. For example, when the local feature information acquired from a two-dimensional image is compressed into one-dimensional time-series information, achieving an accurate mapping becomes a difficult problem. Second, it is difficult to preserve the original input information, since after the RNN layers the dimensionality must be recovered from one back to three. Moreover, due to the several transformations between different dimensions, this process becomes even harder, especially since our input size is 224x224x5. Third, the computational and memory requirements of switching between layers are heavy, which also becomes a tricky point when running the algorithm; higher hardware requirements and longer running times arise when running the attention part. In this paper, we propose a new network structure based on CNN, Bi-LSTM, encoder, and attention for trajectory-generation tasks in autonomous driving. The new network structure, the Bi-direction Compact Spatial Separable Network (BCSSN), overcomes these problems. As shown in Fig. 2, the input Bird's Eye View (BEV) images are first divided into three sub-graphs. These then go through bi-directional frame-related (BIFR) blocks, which consist of a flatten layer, a Bi-LSTM layer, and fully connected layers. After that, the information is fed into the main stem, the convolution stem for fine-grained feature extraction, and then into a stack of SSN (Sequential Spatial Network) blocks (Fig. 4) for further processing. The Upsampling Convolutional Decreasing (UCD) blocks are introduced for the purpose of local information enhancement by deep convolution, and within the SSN blocks the features generated in the first stage suffer less loss of image resolution, which is crucial for the subsequent trajectory adjustment task. In addition, we adopt a staged architecture design using three convolutional layers with different kernel sizes and strides, gradually decreasing the resolution (sequence length) and flexibly increasing the dimensionality. Such a design helps to extract local features of different scales and, since the first stage retains high resolution, our design can effectively reduce the resolution of the output information in the first layer at each convolutional layer, thus reducing the computational effort of subsequent layers. The Reinforcement Region Unit (RRU) and the Fast Multi-Head Self-Attention (FMHSA) in the SSN block help obtain global and local structural information within the intermediate features and improve the normalization capability of the network. Finally, average pooling is used to obtain better trajectory tuning. Extensive experiments on the L5kit dataset [17] demonstrate the superiority of our BCSSN network in terms of accuracy. In addition to image classification, the SSN block can be easily transferred to other vision tasks and serve as a versatile backbone. Figure 2: Overview of Bi-direction Compact Spatial Separable Network ## 2 Related Works Rule-based systems [18, 19, 20]: Rule-based systems are a type of artificial intelligence that uses a set of rules to make decisions.
These rules are typically expressed as if-then statements and are used to guide the system's behavior. Rule-based systems have been used in a variety of applications, including expert systems, decision support systems, and automated planning systems. Decision trees [21, 22, 23]: Decision trees are a type of machine learning algorithm used for classification and regression analysis. They are built using a tree structure, where each internal node represents a decision based on a feature, and each leaf node represents a prediction. Decision trees are widely used in various fields, including business, medicine, and engineering. Markov Decision Processes [24, 25, 26, 27] (MDPs): Markov Decision Processes are a mathematical framework for modeling sequential decision-making problems, where an agent interacts with an environment in discrete time steps. The framework involves a set of states, actions, rewards, and transition probabilities that govern the agent's behavior. MDPs have been used in a wide range of applications, such as robotics, finance, healthcare, and transportation. Bayesian networks [28, 29, 30]: A Bayesian network is a probabilistic graphical model that represents a set of random variables and their conditional dependencies using a directed acyclic graph. It is a powerful tool for probabilistic inference, learning from data, and decision-making under uncertainty. Beyond the four methods mentioned above, there are other popular strategies as well. Over the past decade, autonomous driving has flourished in the wave of deep learning, where a large number of solution strategies are based on computer vision algorithms, using images as the primary input. Prevailing visual neural networks are typically built on top of a basic block in which a series of convolutional layers are stacked sequentially to capture local information in intermediate features. However, the limited receptive field of the small convolution kernels makes it difficult to obtain global information, which hinders the high performance of the network on highly feature-dependent tasks (such as trajectory prediction and planning). In view of this dilemma, many researchers have begun to study self-attention-based [31] networks with the ability to capture long-distance information. Here, we briefly review traditional CNNs and recently proposed visual networks. Convolutional neural networks. The first standard CNN was proposed by LeCun et al. [32] and was used for handwritten character recognition. On this foundation, a large number of visual models have achieved cross-generational success in a variety of tasks with images as the main input. Google Inception Net [33] and DenseNet [34] showed that deep neural networks consisting of convolutional and pooling layers can yield adequate results in recognition. ResNet [35], shown in Fig. 3, is a classic structure with better generalization ability obtained by adding shortcut connections to the underlying network. To alleviate the limited receptive field in previous studies, some works used the attention mechanism as an adaptive operator. Besides, several novel visual networks have been proposed recently, which have achieved remarkable performance in various computer vision tasks. For example, the Mask R-CNN [36] proposed by He et al. extends the Faster R-CNN framework with an additional mask branch to perform instance segmentation. SENet [37] and MobileNetV3 [38] demonstrate the effectiveness of multiple paths within a basic block.
The DenseNet [34] proposed by Huang et al. introduced dense connections between layers to improve feature reuse and alleviate the vanishing-gradient problem. Moreover, Transformer-based networks, such as ViT [39] in Fig. 3 proposed by Dosovitskiy et al. and DETR [40] proposed by Carion et al., have achieved state-of-the-art performance on image classification and object detection tasks, respectively, by leveraging the self-attention mechanism. These novel visual networks have shown promising results and have opened up new research directions in the field of computer vision. Figure 3: Three popular network structures in vision areas. The structure of ResNet-50 is shown in (a). The structure of RepVGG is shown in (b). The structure of ViT is shown in (c). ## 3 Method ### Trajectory Prediction Trajectory prediction is an essential task in autonomous driving for collision avoidance. In this task, the goal is to predict the future trajectory of the vehicle based on its current and past states. This prediction allows the autonomous vehicle to plan its future path and avoid potential collisions with other objects in the environment. What we obtain from our model and dataset are the X and Y axis positions and the yaw. Since the frame time slot is known, we can easily derive the velocity that the AV needs. Besides, after combining the friction between the ground and the wheels, wind resistance, and other physical parameters, we can calculate the acceleration that the AV needs, so that we can control the motor force just as a human driver does via the accelerator. Moreover, the yaw can provide a steering adjustment as well. These processes compose the basic requirements for autonomous driving. Mathematical models such as kinematic or dynamic models can describe these processes. The kinematic model assumes that the vehicle moves with constant acceleration and a constant yaw rate. The equations of motion for the kinematic model can be represented as: \[\begin{split}x(t)&=x_{0}+v_{0}\cos(\theta_{0})t+\frac{1}{2}a_{x}t^{2},\\ y(t)&=y_{0}+v_{0}\sin(\theta_{0})t+\frac{1}{2}a_{y}t^{2},\\ \theta(t)&=\theta_{0}+\omega t,\end{split} \tag{1}\] where \(x(t)\) and \(y(t)\) are the position of the vehicle at time \(t\) in the \(x\) and \(y\) axes, respectively, \(v_{0}\) is the initial velocity, \(\theta_{0}\) is the initial orientation, \(a_{x}\) and \(a_{y}\) are the accelerations in the \(x\) and \(y\) axes, respectively, and \(\omega\) is the angular velocity. On the other hand, the dynamic model takes into account the forces acting on the vehicle, such as friction, air resistance, and gravity. Wind resistance is an important factor to consider when predicting the trajectory of an autonomous vehicle. The force of wind resistance can be calculated using the following formula: \[F_{wind}=\frac{1}{2}\rho v^{2}C_{d}A, \tag{2}\] where \(F_{wind}\) is the force of wind resistance, \(\rho\) is the density of air, \(v\) is the velocity of the vehicle, \(C_{d}\) is the drag coefficient, and \(A\) is the cross-sectional area of the vehicle. The drag coefficient and cross-sectional area can be experimentally determined for a specific vehicle. To incorporate wind resistance into the trajectory prediction model, we can use the above formula to calculate the additional force that the vehicle must overcome. This can then be used to adjust the predicted trajectory accordingly.
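To illustrate Eqs. (1) and (2), a minimal numerical sketch is given below; the air density, drag coefficient, and frontal area are generic illustrative values, not parameters taken from the paper.

```python
import numpy as np

def kinematic_step(x, y, theta, v0, ax, ay, omega, t):
    """Propagate the pose with Eq. (1): constant acceleration and yaw rate."""
    xt = x + v0 * np.cos(theta) * t + 0.5 * ax * t**2
    yt = y + v0 * np.sin(theta) * t + 0.5 * ay * t**2
    return xt, yt, theta + omega * t

def wind_resistance(v, rho=1.225, cd=0.30, area=2.2):
    """Drag force from Eq. (2); rho, cd, and area are illustrative values."""
    return 0.5 * rho * v**2 * cd * area

# One-second prediction for a vehicle at 15 m/s heading 10 degrees.
pose = kinematic_step(0.0, 0.0, np.deg2rad(10), 15.0, 0.5, 0.0, 0.02, 1.0)
drag = wind_resistance(15.0)   # ~91 N opposing the motion
print(pose, drag)
```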
The equations of motion for the total dynamic model can be represented as: \[\begin{split} m\frac{d^{2}x}{dt^{2}}&=F_{x}-F_{friction}-F_{wind},\\ m\frac{d^{2}y}{dt^{2}}&=F_{y}-F_{g}-F_{friction}-F_{wind},\end{split} \tag{3}\] where \(m\) is the mass of the vehicle, \(F_{x}\) and \(F_{y}\) are the forces acting on the vehicle in the \(x\) and \(y\) axes, respectively, \(F_{friction}\) and \(F_{wind}\) are the forces due to friction and wind resistance, respectively, and \(F_{g}\) is the force due to gravity. To predict the future trajectory of an autonomous vehicle, we need to estimate the parameters of the kinematic or dynamic model based on the current and past states of the vehicle. This can be done using machine learning techniques such as regression or neural networks. A popular neural network model for trajectory prediction in autonomous driving is the Long Short-Term Memory (LSTM) network. LSTM networks are a type of recurrent neural network that can capture long-term dependencies in sequential data. The LSTM network can take as input the current and past states of the vehicle and predict its future trajectory. Given a sequence of input states \(s_{1},s_{2},...,s_{T}\), where \(s_{t}\) represents the state of the vehicle at time \(t\), an LSTM network can predict the future states \(\hat{s}_{t+1},\hat{s}_{t+2},\ldots,\hat{s}_{t+k}\), where \(k\) is the number of future time steps to predict. The LSTM network consists of an input layer, a hidden layer, and an output layer. The input layer takes the sequence of input states as input, and the hidden layer contains recurrent connections that allow the network to maintain a memory of past states. The output layer produces the predicted future states. The LSTM network can be trained using backpropagation through time to minimize the prediction error. Several studies have proposed different variations of the LSTM network for trajectory prediction in autonomous driving, such as the Social LSTM [41], the ConvLSTM [42], and the TrajGRU [43]. These models have shown promising results in predicting the future trajectory of the vehicle and avoiding potential collisions with other objects in the environment. Recent works have also explored the use of self-attention mechanisms to capture both local and global information for trajectory prediction in autonomous driving [44]. These models, such as the proposed SSN architecture, combine CNN and self-attention to capture spatiotemporal features from the input data and improve the handling of sequentially related inputs. Figure 4: Structure of Sequential Spatial Network (SSN) In summary, trajectory prediction for collision avoidance in autonomous driving is a crucial task that can be addressed using recurrent neural networks such as LSTM or hybrid models that incorporate self-attention mechanisms to capture both local and global information. ### BIFR Block To address this sequential relation problem, we propose a bi-directional frame-related (BIFR) block to build up relations between sequential frames, as shown in Fig. 2. This block consists of one flatten layer, one bi-directional LSTM, and two fully connected layers. By utilizing bi-directional LSTMs, the information of both past and future frames can be incorporated into the current frame, which effectively addresses the reachability and safety problems of the predicted trajectory for the AV. The trajectory planning process is based on several waypoints, and these waypoints have a sequential relationship.
The position of the later waypoints depends on the former waypoints, and the position of the former waypoints should take the later waypoints into account as well. The potential danger of not considering the sequential relationship between waypoints can be observed from the examples shown in Fig. 5. The red circles represent the actual positions of the vehicle, and the blue circles represent the predicted positions based on the current waypoints. As can be seen from the figures, the former waypoints are reachable in the predicted trajectory but the latter waypoints are not, or vice versa. Therefore, it is essential to incorporate the sequential relationship between waypoints into the trajectory planning process to ensure the safety and feasibility of the trajectory. Figure 5: Two different cases in which predicted points cannot be reached ### Network Structure Overview Our strategy is to take advantage of CNN, encoder, Bi-LSTM, and attention by building a hybrid network. The CNN has excellent regional feature extraction capabilities, allowing the network to accurately and quickly capture important visual features within an image: \[\mathbf{h}_{i}=f\Big(\sum_{j=1}^{N}\mathbf{w}_{j}*\mathbf{x}_{i-j}+b\Big). \tag{4}\] The encoder can efficiently compress and encode the input data, reducing the computational complexity of the neural network: \[\mathrm{Encoder}=f_{enc}(x_{1},x_{2},...,x_{T}). \tag{5}\] The LSTM can learn and model long-term dependencies within sequential data, making it particularly effective in modeling and predicting temporal information: \[\begin{split}f_{t}&=\sigma_{g}(W_{f}x_{t}+U_{f}h_{t-1}+b_{f}),\\ i_{t}&=\sigma_{g}(W_{i}x_{t}+U_{i}h_{t-1}+b_{i}),\\ \tilde{C}_{t}&=\sigma_{c}(W_{c}x_{t}+U_{c}h_{t-1}+b_{c}),\\ C_{t}&=f_{t}\odot C_{t-1}+i_{t}\odot\tilde{C}_{t},\\ o_{t}&=\sigma_{g}(W_{o}x_{t}+U_{o}h_{t-1}+b_{o}),\\ h_{t}&=o_{t}\odot\sigma_{h}(C_{t}).\end{split} \tag{6}\] Attention mechanisms can dynamically weight different regions of an image or sequence, allowing the network to selectively focus on the most important information and improving overall performance: \[\begin{split}h_{t}&=\sum_{i=1}^{T_{x}}\alpha_{t,i}\cdot h_{i},\\ \alpha_{t,i}&=\frac{\exp(e_{t,i})}{\sum_{j=1}^{T_{x}}\exp(e_{t,j})},\\ e_{t,i}&=f(s_{t-1},h_{i}).\end{split} \tag{7}\] The combination of these techniques allows the network to effectively model complex and dynamic relationships within a scene, enabling more accurate and robust predictions. In particular, the use of attention mechanisms can help the network better handle variable and unpredictable inputs, allowing it to adapt to a wider range of scenarios and improve generalization performance. The bi-directional frame-related (BIFR) block is one of the fundamental building blocks of our network and aims especially to capture sequential information in image data. The first step in the BIFR is to flatten the input sub-graphs, which converts the input data to a one-dimensional format that can be processed by the following layers. Next, the Bi-LSTM layer plays a central role in capturing local sequential information. By recording the temporal dependencies between the input data, the Bi-LSTM layer enables the network to learn the long-term dependencies between the input and the output. To enhance the computational efficiency of the network, only the output value is kept, and the hidden layer parameters are ignored. This allows the subsequent layers to process the output value more efficiently.
Finally, two fully connected layers are used to map the output value produced by the Bi-LSTM layer to inputs for the later blocks, which further process the information to extract higher-level features. Overall, the BIFR is a powerful tool for capturing sequential information in image data and has the potential to be widely used in many state-of-the-art computer vision networks, especially for sequentially related inputs. ### SSN Block ResNet-50 is a powerful architecture that has proven to be effective for various image classification tasks. Its design comprises five stages, with the initial stage0 consisting of a convolutional layer, a batch normalization layer, and a max-pooling layer. The bottleneck blocks used in stages 1-4 allow for efficient feature extraction and are followed by a fully connected layer for classification. One of the advantages of this design is that it enables efficient classification between different images. However, stage0 only uses one convolutional layer with a large kernel size, which can limit the capture of local information. To address this limitation, the main input block is introduced, which is composed of three convolutional layers with different kernel sizes and strides. The main objective of the input block is to extract input information quickly when the output size is large and then use the smaller kernel sizes to capture local information as the input size decreases. This approach helps to improve the overall performance of our model. The proposed SSN block consists of a Reinforcement Region Unit (RRU), a Fast Multi-Head Self-Attention (FMHSA) module, and an Information Refinement Unit (IRU), as shown in Fig. 4. We will describe these three components in the following. After taking into account the fast processing and local information processing of the main input block, the input information is transferred to the subsequent blocks for further processing. Furthermore, between each block, we add a UCD layer, which consists of a convolutional layer with a 1x1 kernel size and a down-sampling layer with a sampling ratio of 0.5. The UCD layer allows us to speed up the network without reducing the amount of information in the input while maintaining the ratio between the information, and the size of the input is reduced to half of the original size after the UCD layer. Afterward, feature extraction is performed by a network layer composed of different numbers of SSN blocks, while maintaining the same resolution of the input. Due to the self-attention mechanism, SSN can capture the correlation of different local information features, achieving mutual dependence between local information and global information. Finally, the results are output through an average pooling layer, a projection layer, and a classifier layer. The Reinforcement Region Unit (RRU) is proposed as a data augmentation technique for our network. This technique can generate high-quality augmented samples that maintain the spatial correlation and semantic information of the original image by selectively amplifying the informative regions of the input images. It can improve the robustness and generalization of the model, and it has been applied to various computer vision tasks such as object detection, semantic segmentation, and image classification. Moreover, RRU can be easily integrated with existing models and does not require additional computational resources, making it an efficient and practical data augmentation method for deep learning models.
Moreover, RRU can be easily integrated with existing models and does not require additional computational resources, making it an efficient and practical data augmentation method for deep learning models. In other words, a good model should maintain effective operating output for similar but variant data as well so that the model has better input acceptability. However, the absolute position encoding used in the common attention was originally designed to exploit the order of the tokens, but it breaks the input acceptability because each patch adds a unique position encoding to it [45]. Furthermore, the concatenation between the local information obtained by the information capture module at the beginning of the model and the structural information inside the patch [46] is ignored. In order to maintain input acceptability, the Reinforcement Region Unit (RRU) is designed to extract the local information from the input to the "SSN" module, defined as: \[RRU(X)=Conv(Conv(X)). \tag{8}\] The FMHSA module in our model is designed to enhance the connection between local information obtained from the RRU. With the use of a convolutional layer, a linear layer, and multi-head self-attention, we can effectively capture the correlation between different local features. This is particularly useful for the collision avoidance task, where the trajectory prediction is based on continuously predicted positions that are sequentially related. The FMHSA module allows for the transfer of local information between different areas, making it suitable for solving this problem. By improving the connection between local information, our model is able to achieve outstanding results in this task. Additionally, the sequential relationship between predicted positions is taken into account, ensuring that the autonomous driving vehicle arrives at each target position before moving on to the next one. The Information Refinement Unit (IRU) is used to effi ciently extract the local information obtained by FMHSA, and after processing by this unit, the extracted local information is fed into the pooling and classifier layers. The original FFN proposed in ViT consists of two linear layers separated by the GELU activation [47]. First, expand the input dimension by 4 times, and then scale down the expanded dimension: \[FFN(X)=GELU(XW1+b1)W2+b2. \tag{9}\] This has the advantage of using a linear function for forward propagation before using the GELU() function, which greatly improves the efficiency of the model operation. The proposed design concept is to address the performance sacrifice issue of using the linear function for forward propagation in the FFN of ViT. The solution is to use a combination of convolutional layers with different kernel sizes and a linear function layer to achieve both operational efficiency and model performance. First, a larger convolutional kernel is used to capture the input information's characteristics with a large field of view. Then, the linear function layer is applied for fast propagation of the input information. Finally, a smaller convolutional kernel is used to refine the obtained information. By using this approach, the model can efficiently process the input information while maintaining high performance during fast propagation. This strategy is particularly useful for vision tasks such as image classification and object detection, where the model's performance and operational efficiency are crucial. \[IRU(X)=Conv(L(Conv(X))), \tag{10}\] where L(X)=WX+b. 
After designing the above three unit modules, the SSN block can be formulated as: \[A =RRU(X), \tag{11}\] \[B =FMHSA(A),\] \[C =IRU(B)+B.\] In the experiments, we demonstrate the efficiency of the SSN network. In addition to its ability to capture the temporal dependencies of sequential data, SSN is also versatile in handling different downstream tasks. Specifically, SSN employs a self-attention mechanism to capture the relationships between the different temporal feature maps. This mechanism can effectively encode the dependencies between different timesteps and extract the most relevant information for the downstream tasks. Furthermore, SSN is designed to handle the sparsity and irregularity of sequential data, which is common in real-world driving scenarios. These features make SSN a promising candidate for various applications in autonomous driving, such as trajectory prediction, behavior planning, and decision-making. ## 4 Experiment The autonomous driving obstacle avoidance task is crucial to ensuring safe driving in real-life scenarios. To evaluate the effectiveness of the BCSSN architecture, experiments were conducted using a driving map as the main input. We use early stopping with a patience of 100 epochs to find the best parameter values. In addition, we apply a cosine dynamic learning-rate schedule [48] to adjust the learning rate at each iteration: \[\eta_{t}=\eta_{max}\frac{1+\cos(\frac{t\pi}{T})}{2} \tag{12}\] where \(\eta_{max}\) is the maximum learning rate, \(t\) is the current iteration, and \(T\) is the total number of iterations. The early-stopping function monitors the validation loss and stops the training process when the validation loss does not improve for a certain number of epochs. Specifically, the patience parameter determines how many epochs to wait for the validation loss to improve before stopping the training process: \[\text{EarlyStopping}(V_{\text{loss}},\text{patience}) \tag{13}\] where \(V_{\text{loss}}\) is the validation loss and patience is the number of epochs to wait for the validation loss to improve. In the experiments, the proposed BCSSN architecture was compared with other popular models, and the results in Table 1 were analyzed to draw conclusions on its performance. To assess the performance of the BCSSN architecture, collisions were divided into three categories: front collision, rear collision, and side collision. These collisions are caused by different unsuitable physical parameters, and they are depicted in Fig. 1. The experiments aimed to test the ability of the BCSSN architecture to detect and avoid these collisions effectively. The results were analyzed, and the performance of the BCSSN architecture was compared with other popular models to determine its effectiveness on the obstacle avoidance task. The analytical conclusions drawn from these experiments provide valuable insights into the efficiency and efficacy of the proposed BCSSN architecture. ### Dataset and Description The L5Kit dataset is a vast collection of data, which serves as the primary source for this study. It contains over 1,000 hours of data collected over a four-month period by a fleet of 20 autonomous vehicles that followed a fixed route in Palo Alto, California.
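As a brief aside before describing the dataset structure, the training schedule above admits a compact sketch; it implements Eq. (12) and a plain patience-based stopping rule, with illustrative hyperparameter values and a placeholder validation loss (this is not the paper's training code).

```python
import math

def cosine_lr(t, T, eta_max):
    """Eq. (12): cosine-annealed learning rate at iteration t of T."""
    return eta_max * (1 + math.cos(t * math.pi / T)) / 2

class EarlyStopping:
    """Stop when the validation loss has not improved for `patience` epochs."""
    def __init__(self, patience=100):
        self.patience, self.best, self.bad_epochs = patience, float("inf"), 0

    def step(self, val_loss):
        if val_loss < self.best:
            self.best, self.bad_epochs = val_loss, 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience  # True -> stop training

stopper, T = EarlyStopping(patience=100), 10000
for t in range(T):
    lr = cosine_lr(t, T, eta_max=1e-3)
    val_loss = 1.0 / (t + 1)   # placeholder for a real validation pass
    if stopper.step(val_loss):
        break
```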
The dataset comprises 170,000 scenes, each lasting 25 seconds. Each scene captures the perception output of the self-driving system, which provides precise positions and motions of nearby vehicles, cyclists, and pedestrians over time. This information is invaluable for training and testing autonomous driving models. In addition to the scene data, the L5Kit dataset contains a high-definition semantic map with 15,242 labeled elements, which includes information about lane markings, traffic signs, and other relevant features. This semantic map helps the autonomous vehicle understand the environment and make informed decisions. Furthermore, the dataset includes a high-definition aerial view of the area, which provides a top-down perspective of the environment. Overall, the L5Kit dataset is an invaluable resource for researchers and developers working in the field of autonomous driving. With its vast collection of data, including the perception output of the self-driving system, the semantic map, and the aerial view, the dataset provides a comprehensive understanding of the driving environment, which can be used to train and test autonomous driving models. ### Data Preprocessing The L5Kit dataset has a well-defined structure consisting of three main concepts: scenes, frames, and agents. Scenes are identified by the host vehicle that collected them and by a start and end time. Each scene consists of multiple frames, which are snapshots captured at discretized time intervals. The frames are stored in an array, and the scene datatype stores a reference to its corresponding frames as a start and end index within the frames array: all frames between these indices belong to the scene, including the start index but excluding the end index. Each frame captures all the information observed at a particular time, including the timestamp it describes. The frame also includes data about the ego vehicle itself, such as its position and rotation. Additionally, the frame contains a reference to the other agents (such as vehicles, cyclists, and pedestrians) detected by the ego's sensors, as well as a reference to all traffic light faces for all visible lanes. An agent in the L5Kit dataset is an observation made by the autonomous vehicle of another detected object. Each agent entry describes the object in terms of attributes such as position and velocity, assigns it a tracking number to follow it over multiple frames (but only within the same scene), and gives its most probable label. The input of this dataset is images of the ego car, which is one of the properties of the EgoDataset; the output of the model is the position and yaw, which are also properties of the EgoDataset. In this way, it is possible to simulate vehicle driving as human driving actions: during human driving, drivers control the accelerator and the steering wheel to move the vehicle, where the accelerator governs velocity and the steering wheel governs yaw. Similarly, the output of the model is the velocity and yaw, which can be used to simulate vehicle trajectories and avoid collisions during driving. ### Result We conducted experiments using four different network structures and present their results in Table 1. We compared our model's performance with other transformer-based and convolution-based models and found that our model achieved better accuracy and faster processing speed.
The results highlight the significance of the proposed BCSSN for capturing both local and global information, which results in a substantial reduction of front collision errors. Specifically, as reported in Table 1, our model produced 0.6 front collisions per 1000 miles, roughly 27 times fewer than RepVGG (16.2), 14 times fewer than ViT (8.4), and 25 times fewer than ResNet-50 (15.2). The remaining comparisons can be read directly from Table 1 as well. These results demonstrate the effectiveness of the BCSSN in capturing both local and global information, leading to significant improvements in the model's accuracy. BCSSN consistently outperformed other models by a large margin across all the test scenarios. These findings indicate that the proposed BCSSN block is a promising technique for improving the performance of autonomous driving systems by effectively capturing both local and global information. \begin{table} \begin{tabular}{c c c c c} \hline \hline Method & Front & Side & Rear & Total \\ \hline BC [49] & 79 & 395 & 997 & 1471 \\ BC-perturb [49] & 14 & 73 & 678 & 765 \\ MS Prediction [49] & 18 & 55 & 125 & 198 \\ Urban Driver [49] & 15 & 46 & 101 & 162 \\ ML Planner [50] & - & - & - & 91.5 \\ L5Kit (ResNet-50) [51] & 15.2 & 20.7 & 8.3 & 44.2 \\ RepVGG [16] & 16.2 & 11.7 & 10.6 & 38.5 \\ ViT [39] & 8.4 & **7.7** & 9.2 & 25.3 \\ SafetyNet [50] & - & - & - & 4.6 \\ BCSSN (Ours) & **0.6** & 1.1 & **2.0** & 3.7 \\ \hline \hline \end{tabular} \end{table} Table 1: Average collision times per 1000 miles. ## 5 Future Work To integrate our algorithm model with the Unity virtual world, we need to establish communication between them. We accomplish this by using Unity's built-in API and message-passing techniques; a first-person view and a bird's-eye view are shown in Fig. 6. The communication protocol is established such that the algorithm sends the input information to the virtual world, and the virtual world returns the output information to the algorithm. To ensure accurate physical modeling and simulation, we need to perform a coordinate transformation between image coordinates and world coordinates. The process involves mapping the image coordinates to the virtual world coordinates using the camera parameters and the intrinsic and extrinsic calibration matrices. The resulting coordinates are then transformed to physical world coordinates, where the physical model can compute the necessary forces and physical parameters for the autonomous driving system. This transformation can be mathematically represented as: \[\begin{bmatrix}X_{world}\\ Y_{world}\\ Z_{world}\end{bmatrix}=R\begin{bmatrix}X_{camera}\\ Y_{camera}\\ Z_{camera}\end{bmatrix}+T \tag{14}\] where \(\mathbf{R}\) is the rotation matrix, \(\mathbf{T}\) is the translation vector, and \([X_{camera},Y_{camera},Z_{camera}]\) are the coordinates in the camera frame. To obtain accurate values for \(\mathbf{R}\) and \(\mathbf{T}\), we need to perform calibration using methods such as checkerboard patterns or laser scanning techniques [52]. The resulting calibration parameters can then be used to transform image coordinates to world coordinates for accurate physical modeling and simulation. ## 6 Conclusion In this paper, we introduce a novel hybrid architecture named BCSSN that is designed to improve vision-based autonomous driving tasks as well as other vision tasks.
The BCSSN architecture combines the strengths of several different techniques, including CNNs, Bi-LSTM, encoders, and self-attention, to capture both local and global information from sequentially related inputs. The CNNs are used to extract local information from the input images, while the Bi-LSTM models the temporal dependency between frames. The encoder converts the input images into a high-level feature representation, which is then passed to the self-attention mechanism to capture long-range dependencies and context information. To evaluate the effectiveness of the proposed BCSSN architecture, we conducted extensive experiments on the L5Kit dataset, which is a challenging dataset for vision-based autonomous driving tasks. The results of these experiments demonstrate the superiority of the proposed BCSSN architecture compared to other state-of-the-art models. Specifically, the BCSSN architecture achieved better performance in terms of accuracy, speed, and generalization ability. Overall, the BCSSN architecture offers a promising solution for vision-based autonomous driving tasks and other vision tasks that require capturing both local and global information from sequentially related inputs. Its hybrid design allows it to take advantage of the strengths of several different techniques, resulting in improved performance and generalization ability.
2305.05193
A Chip-Firing Game for Biocrust Reverse Succession
Experimental work suggests that biological soil crusts, dominant primary producers in drylands and tundra, are particularly vulnerable to disturbances that cause reverse ecological succession. To model successional transitions in biocrust communities, we propose a resource-firing game that captures succession dynamics without specifying detailed functional forms. The model is evaluated in idealized terrestrial ecosystems, where disturbances are modeled as a reduction in available resources that triggers inter-species competition. The resource-firing game is executed on a finite graph with nodes representing species in the community and a sink node that becomes active when every species is depleted of resources. First, we discuss the theoretical basis of the resource-firing game, evaluate it in the light of existing literature, and consider the characteristics of a biocrust community that has evolved to equilibrium. We then examine the dependence of resource-firing and game stability on species richness, showing that high species richness increases the probability of very short and long avalanches, but not those of intermediate length. Indeed, this result suggests that the response of the community to disturbance is both directional and episodic, proceeding towards reverse succession in bursts of variable length. Finally, we incorporate the spatial structure of the biocrust community into a Cayley Tree and derive a formula for the probability that a disturbance, modeled as a random attack, initiates a large species-death event.
Shloka V. Janapaty
2023-05-09T06:08:42Z
http://arxiv.org/abs/2305.05193v2
# A Chip-Firing Game for Biocrust Reverse Succession ###### Abstract Experimental work suggests that biological soil crusts, dominant primary producers in drylands and tundra, are particularly vulnerable to disturbances that cause reverse ecological succession. To model successional transitions in biocrust communities, we propose a resource-firing game that captures succession dynamics without specifying detailed functional forms. The game is evaluated in idealized terrestrial ecosystems, where disturbances are modeled as a reduction in available resources that triggers inter-species competition. The resource-firing game is executed on a finite graph with nodes representing species in the community and a sink node that becomes active when every species is depleted of resources. First, we discuss the theoretical basis of the resource-firing game, evaluate it in the light of existing literature, and consider the characteristics of a biocrust community that has evolved to equilibrium. We then examine the dependence of resource-firing and game stability on species richness, showing that high species richness increases the probability of very short and long avalanches, but not those of intermediate length. Indeed, this result suggests that the community's response to disturbance is both directional and episodic, proceeding towards reverse succession in bursts of variable length. Finally, we incorporate the spatial structure of the biocrust community into a Cayley Tree and derive a formula for the probability that a disturbance, modeled as a random attack, initiates a large species-death event. **Keywords:** biocrust, succession, chip-firing, terrestrial ## Introduction Global change threatens to destabilize terrestrial ecosystems by triggering ecological state transitions, i.e., significant shifts in community structure and function. These shifts may occur in response to diminished resource availability or climate-induced disturbances, including increased frequency and intensity of extreme events, altered precipitation patterns, and rising temperatures. In some cases, changes in disturbance regimes can cause productive ecosystems to transition to less productive ones [1, 2, 3]. Understanding these dynamics is crucial to inform conservation strategies and accurately assess terrestrial ecosystem health. However, predicting state transitions can be difficult--communities of primary producers, which help drive ecosystem dynamics, often exhibit non-deterministic, non-linear, and non-monotonic transition behaviors [4, 5, 6]. Biological soil crusts are communities of cyanobacteria, lichen, and bryophytes that appear ubiquitously in terrestrial ecosystems. As dominant primary producers in drylands and tundra, biocrusts account for 7 percent of terrestrial primary productivity and over 50 percent of biological nitrogen fixation [7]. Biocrusts also form assemblages of extracellular secretions and biomass filaments that regulate local hydrological cycling and channel resources from nitrogen-enriched, hydrated biocrusts to depleted plants [8, 9, 10, 11]. These products rehabilitate damaged soil, forming the first stage of colonization in ecosystems recovering from disaster. Indeed, biocrusts are expected to regulate climate impacts on up to 70 percent of the global dryland surface [12]. Given these ecosystem services and extent of cover, biocrust state transitions are of key consequence.
Ecological succession is the development of community structure over time, with each state characterized by different dominant organisms [13]. In the initial stage, bare soil is colonized by pioneer cyanobacteria, which bind filamentous sheaths together to stabilize the soil microstructure [14]. Over time, early-stage cyanobacteria are replaced by a more diverse community of late-stage biocrusts, including lichen, cyanolichen, and bryophytes in order of succession. Lichen form from a symbiotic relationship between a photosynthetic partner and fungal components, while mosses perform photosynthesis using stem leaves. Both penetrate soil surfaces through hyphal extension [15, 16]. Biocrusts perform different ecosystem services depending on their dominant succession stage. For example, moss-dominated biocrusts sequester carbon and dissimilate nitrogen more effectively, while lichen-dominated biocrusts are better suited to metabolize aromatic compounds and exhibit higher nitrogen uptake [17]. Previous research has established that biocrusts are vulnerable to a range of disturbances, including trampling, wildfire, flooding, vehicle traffic, and livestock grazing. Global change may also exert differential effects on biocrust communities based on succession stage, compromising vital ecosystem processes [18]. Several studies have observed the effects of these disturbances on biocrusts. Chronic physical disturbance and natural disaster have been shown to alter community structure for decades, notably accelerating erosion and reducing C and N concentrations in soil [19]. More recently, climate manipulation studies have observed reduced biocrust metabolic activity and diminished network connectivity in response to experimental warming [20]. Some work suggests that all types of disturbance trigger biocrust reverse ecological succession, a state transition from late to early stages. For instance, Ferrenberg _et al._ (2014) reported shifts from moss cover to cyanobacteria cover in response to both experimental warming and human trampling [12]. Other studies have shown that when subjected to climate manipulation and grazing, early-stage cyanobacteria dominate the disturbance gradient, while late-stage cyanobacteria dominate the recovery gradient [21, 22]. Prior to reverse succession, disturbed biocrusts may also undergo dramatic declines in taxonomic diversity [23]. Though disturbances may vary in the magnitude and rate of reverse succession they trigger, these results suggest that biocrusts exhibit a general, disturbance-agnostic response to suboptimal environmental conditions. The consequences of these induced reverse state transitions can be severe, including deficits in global carbon and nitrogen budgets. In this study, we develop a time-independent resource-firing game that captures succession dynamics without specifying detailed functional forms. Our goals were, first, to propose a general model of biocrust succession transitions based on unified theory and, second, to examine dynamics in idealized terrestrial ecosystems. For this purpose, we model disturbances as a reduction in available resources, triggering competition between species within the community. The resource-firing game is evaluated on (1) its ability to predict reverse succession in response to a generic disturbance, (2) the episodic and directional character of changes in community structure, and (3) the stability of equilibria.
First, we discuss the theoretical basis of the resource-firing game and show how the model may be implemented computationally. Then, we turn to the characteristics of a biocrust community that has evolved to an equilibrium state, the various dependencies of the model, and attacks on the network. ## Model ### Overview and Theory Shifts in succession often exhibit the same profile when subjected to very different environmental disturbances. We compared the effect of climate manipulation treatments, physical disturbance, and disaster events on the relative cover of four biotic groups that dominate biocrusts: early-stage cyanobacteria, late-stage cyanobacteria, lichen, and bryophytes. Relative cover and geographic data from the literature are summarized in Appendix Table 1. In 47 of the 50 studies in the dataset, biocrusts exhibit a reverse succession response to disturbance or reduced species diversity, irrespective of the type of disturbance. As a result, a key criterion for this model is its ability to predict disturbance-agnostic reverse succession. In each of the 4 studies that did not conform to this trend, moss cover increased in response to warming, physical disturbance, or resource limitation. Notably, this phenomenon appeared in temperate semi-arid regions, including Southeast Spain and the Colorado Plateau [24, 25]. After a disturbance, the order of reverse succession in biocrusts follows a predictable pattern: moss, lichen, late-stage cyanobacteria, and early-stage cyanobacteria. However, when biocrusts recover from disturbance, the order of recolonization is often reversed [26, 27]. We suggest that while different disturbances may vary in the size and frequency of their effects on community structure, succession can arise from the internal dynamics of community-wide resource allocation rather than from the triggering stressor. The resource-firing game proposed in this paper captures these disturbance-agnostic internal dynamics. Each study in Appendix Table 1 observes biocrust cover both prior to disturbance and after disturbance, where \(t_{obs}\) is the elapsed observation period. However, few have captured intermediate succession dynamics. The available literature is sufficient to supply initial conditions and suggest equilibria, but cannot robustly constrain model trajectories at \(t<t_{obs}\). Therefore, we preliminarily assume that state transition behaviors proceed monotonically and call for field work to address this gap in the existing literature. To build our model assumptions from this dataset, we do not specify functional forms of relative cover change, instead focusing on macroscopic changes in succession stage. The strategy of biocrust ecological succession is modeled as the attempt to optimally distribute finite resources throughout an interconnected biomass system. This objective is achieved by considering the entire biomass as a nexus of species and attempting to allocate resources based on their fitness [28]. ### Order Theory Biocrust ecological succession is the development of its community structure over time, with each succession stage characterized by different dominant organisms. In the initial stage, bare soil is colonized by pioneer cyanobacteria. Over time, early-stage cyanobacteria are replaced by late-stage biocrusts, including lichen, cyanolichen, and bryophytes in order of succession. We formalize this successional sequence as the partially ordered set \(S\).
Let \(S\) be the chain \[\textit{cyanobacteria}\longrightarrow\textit{lichen}\longrightarrow\textit{bryophytes} \tag{1}\] Pioneer early succession populations require fewer resources and thrive in resource-limited conditions, whereas late succession populations outcompete the former in resource-abundant conditions [26]. Let \(\phi\) be a function that maps a dominant succession stage to the resources it requires to sustain itself. The notion of resources, while highly abstract, might be captured by applying the species-area relationship, generalized fertility indices, or net primary productivity, the details of which are beyond the scope of this study. \[\phi(s)=\begin{cases}r_{f}&\text{if $s$ = fungi}\\ r_{c}&\text{if $s$ = cyanobacteria}\\ r_{l}&\text{if $s$ = lichen}\\ r_{b}&\text{if $s$ = bryophytes}\end{cases} \tag{2}\] such that \(r_{f}\leq r_{c}\leq r_{l}\leq r_{b}\). We now turn to the order structure of our game. Explicit \(r_{f}\), \(r_{c}\), \(r_{l}\), and \(r_{b}\) values depend on (1) the total resources \(R\) available to a community and (2) the number of species of each succession stage \(s_{f},s_{c},s_{l},s_{b}\). The equation for resource allocation is given by \[\left[\begin{array}{c}s_{f}\\ s_{c}\\ s_{l}\\ s_{b}\end{array}\right]^{T}\left[\begin{array}{c}r_{f}\\ r_{c}\\ r_{l}\\ r_{b}\end{array}\right]=R \tag{3}\] **Definition 0.1**.: _Formally, two partially ordered sets \(S\) and \(Q\) are **order-isomorphic** (\(S\cong Q\)) if there exists a map \(\phi\) from \(S\) onto \(Q\) such that \(x\leq y\) in \(S\iff\phi(x)\leq\phi(y)\) in \(Q\)[29]._ That is, \(\phi\) must faithfully mirror the order structure of \(S\). Let \(Q=\phi(S)\) be the partially ordered set of allocated resources, for \(\phi\) as defined in Equation 2. Then, \(S\) and \(Q\) are order-isomorphic, allowing us to directly map the successional composition of a community to its resource allocation. \[\phi(cyanobacteria)\longrightarrow\phi(lichen)\longrightarrow\phi(bryophytes) \tag{4}\] ### Representing Biocrust Networks A biocrust community is modeled as an interconnected network that distributes resources to several populations of species competing to maximize ecological fitness. The strategy of biocrust ecological succession is modeled as the attempt to optimally distribute finite resources throughout an interconnected biomass [28]. **Definition 0.2**.: _A **complete graph** is a simple undirected graph \(G\) in which every pair of nodes is connected by a unique edge._ We represent a generic biocrust network as a complete graph. A complete graph with \(n\) nodes has \(\frac{n(n-1)}{2}\) edges. Each node represents a species in the community, and an edge represents competition for shared resources. In other words, each species in the biocrust community is assumed to be in competition with every other species. Each node has two fields. The first field is its designated succession stage, \(s\): fungus, early cyanobacteria, late cyanobacteria, lichen, or bryophyte. The second field is the concomitant number of resources, \(\phi(s)\). The succession stage with a plurality of designated nodes is said to dominate the community. Figure 1 shows two complete graphs, representing a fungi- and late cyanobacteria-dominated community with 8 species and a late cyanobacteria-dominated community with 12 species. The following section describes the rules of the resource-firing game on such graphs.
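As an illustration of this construction, a small Python sketch (using networkx) builds a complete biocrust graph with the stage and resource fields and evaluates the allocation constraint of Equation 3; the stage assignment and resource values are arbitrary choices for demonstration, not data from the study.

```python
import networkx as nx

# Resource requirement per succession stage; arbitrary values with r_f <= r_c <= r_l <= r_b
phi = {"fungi": 1, "cyanobacteria": 2, "lichen": 3, "bryophyte": 4}

def biocrust_graph(stages):
    """Complete graph: one node per species, with fields for stage and resources phi(s)."""
    G = nx.complete_graph(len(stages))
    for node, stage in zip(G.nodes, stages):
        G.nodes[node]["stage"] = stage
        G.nodes[node]["resources"] = phi[stage]
    return G

stages = ["fungi"] * 2 + ["cyanobacteria"] * 3 + ["lichen"] * 2 + ["bryophyte"]
G = biocrust_graph(stages)

# Equation 3: total resources R equals the stage counts dotted with (r_f, r_c, r_l, r_b)
R = sum(G.nodes[v]["resources"] for v in G.nodes)
print(G.number_of_edges(), R)   # 8 species -> 8*7/2 = 28 edges; R = 2*1 + 3*2 + 2*3 + 4 = 18
```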
### Resource-Firing Game The chip-firing game models the loss of energy in networks of individuals whose states depend on the states of their neighbors. In this paper, we define the **resource-firing game**, a portmanteau of chip-firing game for resource allocation. The game is executed on a finite graph \(G=(V,E)\) with \(|V|=n\) **nodes**, in addition to a **sink node**, which becomes active when every species in the community is depleted of resources [30]. Given any biocrust network, we can play the resource allocation game as follows. First, we add a sink node to \(G\). For \(G\) to remain complete, the sink node must have edges to every species in the network. Thus, every community \(G\) has \(n\) nodes, 1 sink node and \(n-1\) species, retaining the property of completeness. Next, a non-negative number of resources is distributed to each node, with the exception of the sink node. Recall that the distribution of resources to each species as a function of its succession stage is given by Equation 3. We write the resource configuration as follows: \[\mathbf{r}=\{r_{1},r_{2},...,r_{k}\}\in\mathds{N}_{0} \tag{5}\] where \(r_{k}\) is the number of resources on node \(n_{k}\). The basic step of this game is firing initiated by a generic disturbance, the dynamics of which are detailed in Algorithm 1. Firing represents a transfer of resources from one stressed species to its non-stressed neighbors. Accordingly, the node with the highest \(r_{n}\), equivalently the lowest fitness, fires. A node \(n_{k}\) can be fired if \(r_{k}\geq deg(n_{k})\), where \(deg(n_{k})\) is the number of neighbors \(n_{k}\) has. For example, if \(n_{1}\) has edges to \(n_{2},n_{4}\), and \(n_{5}\), those nodes are considered its neighbors and \(deg(n_{1})=3\). The set of neighbor pairs is denoted \(E(G)\). When node \(n_{k}\) fires, it sends one resource (chip) to each of its neighbors, losing \(deg(n_{k})\) resources. Biologically, this indicates that the species occupying \(n_{k}\) no longer has sufficient resources to sustain itself. The released resources now become available to competing species. If \(n_{1}\) is selected to fire and initially has 5 resources, it sends 1 resource to each of its 3 neighbors and retains 2 resources. As a result, the succession stage of the species occupying \(n_{1}\) regresses to one that can be supported by 2 resources, a process which we will call reverse succession. Dually, \(n_{2},n_{4}\), and \(n_{5}\), \(n_{1}\)'s neighbors, gain resources and can support species with more advanced succession stages, a process which we will call forward succession. The mechanisms of forward and reverse succession are beyond the scope of the present manuscript, and we refer readers to the work of others. Formally, the piece-wise function \(r^{\prime}(i)\) denotes the successor configuration of any node \(i\) in the graph after a given node \(j\) fires: \[\mathbf{r^{\prime}(i)}=\begin{cases}r_{j}-deg(n_{j})&\text{if }i=j\\ r_{i}+1&\text{if }(n_{i},n_{j})\in E(G)\\ r_{i}&\text{if }(n_{i},n_{j})\notin E(G)\end{cases} \tag{6}\] Redistribution of resources after firing may enable new non-sink nodes to fire. The game ends when the community arrives at a **stable configuration** in which no nodes are eligible to fire. At this point, the sink vertex releases its chips to each of its neighbors and the game restarts. The length of the game is defined as the number of firings until a stable configuration is reached.
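A direct Python transcription of these rules (compare Algorithm 1 below) may help make the update rule of Eq. (6) concrete; it is a minimal sketch on a complete graph, with the sink handled as node 0, and the function names are ours rather than part of the model.

```python
import networkx as nx

def fire(G, r, j):
    """Eq. (6): node j sends one resource to each neighbor and loses deg(j)."""
    r[j] -= G.degree(j)
    for i in G.neighbors(j):
        r[i] += 1

def stabilize(G, r, sink=0):
    """Fire the highest-loaded eligible non-sink node until no node can fire.
    Returns the number of firings, i.e., the length of the game."""
    firings = 0
    while True:
        candidates = [v for v in G.nodes if v != sink and r[v] >= G.degree(v)]
        if not candidates:
            return firings
        j = max(candidates, key=lambda v: r[v])  # lowest-fitness species fires
        fire(G, r, j)
        firings += 1

G = nx.complete_graph(5)                 # node 0 acts as the sink; nodes 1-4 are species
r = {0: 0, 1: 4, 2: 3, 3: 2, 4: 1}       # an initial configuration with 10 resources
print(stabilize(G, r), r)                # 4 firings; total resources are conserved
```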
Figure 1: **Biocrust networks of varying scale.** Both graphs are complete, with 8 and 12 species respectively (Color Required)

```
Require: \(s\) is the sink node; \(V\neq\emptyset\) contains all nodes except \(s\)
Ensure: \(t_{limit}\) iterations of resource-firing on a graph with \(n\) nodes and \(r\) resources
while \(t<t_{limit}\) do
    \(n_{max}=\{n_{i}\in V\ |\ \forall n_{j}\in V\ \ r_{j}\leq r_{i}\}\)
    if \(r_{max}\geq deg(n_{max})\) then
        \(r_{max}=r_{max}-deg(n_{max})\)
        \(\forall(n_{i},n_{j})\in E(G)\ \ r_{j}=r_{j}+1\)
    else
        \(r_{sink}=0\)
        \(\forall n_{j}\in V\ \ r_{j}=r_{j}+1\)
    end if
end while
```
**Algorithm 1** Resource-Firing Algorithm

Because chip-firing exhibits terminating behavior, the stable configuration is always unique. Chip-firing also displays local confluence, meaning nodes can be fired in any order without affecting the length or final configuration of the game [31]. ## Dynamics in Terrestrial Ecosystems ### Example: \(n=5\) Figure 2 shows a simple firing sequence on a biocrust community with \(n=5\) nodes (4 species) and \(r=10\) resources. As the sink node accumulates resources and the number of resources available to the community decreases, the community undergoes reverse succession. Conversely, the release of resources from \(s\) triggers forward succession, yielding the initial configuration. The resulting stable configuration appears at the bottom. Mathematically, a stable configuration is a configuration of resources in which each node holds fewer chips than its degree; furthermore, there is no subset of nodes that can fire. The resource-firing game always proceeds to a stable configuration and remains there, irrespective of how nodes fire. In other words, the stable configuration represents an equilibrium, or fixed point. Now we turn to the stability of the configurations in our \(n=5\) example. Let \(c_{0}\) represent the fixed point of \(f^{\prime}(r^{\prime}(i))=(r^{\prime}(1),r^{\prime}(2),r^{\prime}(3),r^{\prime}(4))\), where \(i\) is the node index. Given that \(f^{(n)}\to c_{0}\), we conclude that \(c_{0}\) is a stable fixed point. For the given initial configuration, \(f\) has a four-cycle, \[f^{(4)}(c_{0})=c_{0},f^{\prime}(c_{0})=f^{\prime\prime}(c_{0})=f^{(3)}(c_{0})\neq c_{0}. \tag{7}\] We might also attempt to determine the stability of \(c_{0}\) by using the Tutte polynomial. **Definition 0.3**.: _For an undirected graph \(G=(V,E)\), the **Tutte polynomial** is \(T_{G}(x,y)=\sum_{A\subseteq E}(x-1)^{k(A)-k(E)}(y-1)^{k(A)+|A|-|V|}\), where \(k(A)\) denotes the number of connected components of the graph \((V,A)\)[32]._ **Definition 0.4**.: _A **minimum spanning tree** is a tree that includes every node of a given graph and the minimal set of edges needed to connect each node without cycles [33]._ Evaluating the Tutte polynomial of a graph \(T_{G}\) at \(T_{G}(1,1)\) yields the number of spanning trees, which provides information about stable configurations [34]. The Tutte polynomial of \(G_{5}\) in Figure 2 is given in Figure 3. Note that the coefficients are non-negative. ### Species Richness Dependence In the following two sections, we consider additive factors to the basic model introduced above. We begin with a first approximation of the species richness of global terrestrial ecosystems, compiled from available literature data in Appendix Table 1. First, we examine how the dependence of the resource-firing game on the number of species (\(n\)) affects game stability.
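One way to probe this dependence computationally is to run the game repeatedly and record the lengths of the resulting firing cascades (the avalanches defined formally below); a hedged sketch, reusing the `stabilize` routine from the earlier listing with random initial configurations (the richness and resource values are illustrative):

```python
import random
from collections import Counter
import networkx as nx

def avalanche_lengths(n, resources, trials=1000, seed=0):
    """Game lengths for a complete community of n nodes (node 0 is the sink),
    with `resources` chips placed uniformly at random on the species.
    Assumes the stabilize() routine from the previous listing is in scope."""
    rng = random.Random(seed)
    lengths = Counter()
    for _ in range(trials):
        G = nx.complete_graph(n)
        r = {v: 0 for v in G.nodes}
        for _ in range(resources):
            r[rng.randrange(1, n)] += 1   # resources go to species, never the sink
        lengths[stabilize(G, r)] += 1
    return lengths

for n in (15, 40, 75):   # idealized desert, steppe, and temperate forest richness
    dist = avalanche_lengths(n, resources=5 * n)
    print(n, sorted(dist.items())[:5])   # first few (length, count) pairs
```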
Figure 2: **Sequence of chip firings** on a biocrust community with \(n=5\) nodes and 10 resources. The sink nodes in each configuration are denoted \(s_{0},s_{1},s_{2},s_{3},s_{4}\), respectively. Graph 5 represents the stable configuration.

Figure 3: **Tutte Polynomial** of the \(n=5\) game shown in Figure 2.

A common approach derived from topological graph theory to measure chip-firing stability is to compute the genus of the graph. **Definition 0.5**.: _The **genus** of a graph is the minimum integer \(k\) such that the graph can be topologically embedded onto a sphere with \(k\) handles without crossing itself [35]._ If the graph embedding of \(G\) is topologically simple, \(G\) has a low genus and the game likely requires fewer resources to be available at the outset. Dually, if \(G\) has a high genus, the game may require a larger number of resources [35, 36]. In Figure 4, we compute the genus of resource-firing games for \(0<n<100\) for idealized desert, steppe, and temperate forest biocrust communities. One consequence of this analysis is that the genus increases quadratically with respect to \(n\). Ergo, the resource stability threshold of idealized forest communities may be significantly higher than those of desert and semi-desert communities. Next, we consider how species richness impacts the avalanche distribution of the resource-firing game. When a node fires, a stressed species releases resources to non-stressed neighbors. This reallocation of resources may enable other species to release resources, resulting in a percolation of firings across the graph. An avalanche is precisely this trickling sequence of firings. The length of the avalanche is the length of the sequence, that is, the number of firings that occur until firing must be induced for the game to proceed. The avalanche distribution is the probability distribution of the lengths of avalanches that occur, given some initial configuration of chips. For idealized desert, steppe, and temperate forest communities with \(n=15\), \(n=40\), and \(n=75\) species respectively, the resource-firing game was initiated 10,000 times to generate the avalanche distribution in Figure 5. The distribution suggests that reverse succession is both episodic and directional, proceeding towards reverse succession in bursts of variable length. In all ecosystems, avalanches are frequently short or long, but not often medium-sized. As \(n\) increases, two effects are observed: the probability of medium-sized avalanches tends towards zero, and the average length of avalanches increases. ### Attacks on Biocrust Communities The resource-firing game assumes a uniformly competitive interaction between species. In reality, species compete most aggressively with species in their proximity for local pools of resources. For a complete graph \(G\) representing a biocrust community, we define an accompanying path graph--a Cayley tree--that approximates the spatial structure of the community. By Cayley's formula, for \(G\) with \(n\) nodes, \(n^{n-2}\) unique spanning trees exist [37]. To build our Biocrust Cayley Tree (BCT), we replicate all \(n\) nodes of \(G\), then place edges on them such that a desired spanning tree is generated. The structure of the BCT is as follows. There is a single node \(v\) located at the root of the tree with edges to \(k\) neighbors. These neighbors comprise the first shell of the tree. Each node in the first shell has \(k-1\) neighbors, forming the second shell.
Each node in every subsequent shell also has \(k-1\) neighbors [38]. A random attack on the BCT compromises community structure by arbitrarily removing one node at a time. Attacks represent critical disturbances, such as climate change, trampling, or altered precipitation. Three types of nodes are defined: alive, incapacitated, and dead. If a node is attacked, the species it represents dies, and as the attack percolates through the community, its descendants are critically incapacitated. The species represented by all other nodes are considered alive. Let \(p_{A}=\frac{n_{A}}{n}\) be the fraction of alive species in the community, where \(n\) is the total number of nodes in the BCT. We define \(A\) as the event that the species selected for attack is alive and \(D\) as the event that it is already dead. Given \(L\) layers and \(k\) neighbors per node, Equation 8 approximates the conditional probability of \(A\) given \(\bar{D}\) [38]. Note that a very large \(L\) is assumed.

Figure 4: Genus by Ecosystem (Color Required)

Figure 5: Avalanche Distribution by Ecosystem (Color Required)

Figure 6: Cayley Tree with \(n=45\) and \(k=3\)

\[P(A|\bar{D})=\frac{(k-2)(1-p_{A})^{L+1}}{(k-2)-(k-1)p_{A}} \tag{8}\] Based on their degree of severity, critical disturbances can lead to species-death events of various sizes. We denote such events \(E\). We estimate the probability of \(E\) with a death toll of at least \(k\) as \[P(E)=\frac{L(k-1)}{k-2}\cdot P(L_{D}) \tag{9}\] where \(P(L_{D})\) is the probability that the disturbance is severe enough to attack the BCT, that is, to cause species death [38]. ## Conclusions We have presented the resource-firing game as a model of resource allocation in complex networks, with a particular focus on its application to biocrust communities. First, the model was motivated by experimental results. Then, we discussed its theoretical basis and the characteristics of a biocrust community that has evolved to an equilibrium state. We note here that the initial formulation of the resource-firing game hinges on some simplifying assumptions. For example, it assumes species are equally likely to compete with one another. In real-world ecosystems, species interaction also depends on differences in behavior or spatial proximity. Additionally, the model assumes that interactions are either present or absent, though they may exist on a spectrum of strength or frequency. However, the simple rules of our formulation adequately predict reverse succession in response to a generic disturbance, the episodic and directional character of changes in community structure, and the stability of equilibria. One key result of our analysis is that changes in species richness have nonlinear effects on network structure and on the avalanche distribution of the resource-firing game. In particular, the resource stability threshold for idealized forest communities is significantly higher than that of desert and semi-desert communities, and high species richness increases the probability of very short and long avalanches, but not those of intermediate length. These results suggest that the community's response to disturbance is both directional and episodic, proceeding towards reverse succession in bursts of variable length. We also introduce the BCT as an approximation of the spatial structure of a biocrust community and derive probabilities of critical disturbances that can compromise community structure.
2310.01143
Preliminary Performance Evaluation of a Satellite-to-HAP Communication Link
The emergence of Fifth-Generation (5G) communication networks has brought forth unprecedented connectivity with ultra-low latency, high data rates, and pervasive coverage. However, meeting the increasing demands of applications for seamless and high-quality communication, especially in rural areas, requires exploring innovative solutions that expand 5G beyond traditional terrestrial networks. Within the context of Non-Terrestrial Networks (NTNs), two promising technologies with vast potential are High Altitude Platforms (HAPs) and satellites. The combination of these two platforms is able to provide wide coverage and reliable communication in remote and inaccessible areas, and/or where terrestrial infrastructure is unavailable. This study evaluates the performance of the communication link between a Geostationary Equatorial Orbit (GEO) satellite and a HAP using the Internet of Drones Simulator (IoD-Sim), implemented in ns-3 and incorporating the 3GPP TR 38.811 channel model. The code base of IoD-Sim is extended to simulate HAPs, accounting for the Earth's curvature in various geographic coordinate systems, and considering realistic mobility patterns. A simulation campaign is conducted to evaluate the GEO-to-HAP communication link in terms of Signal-to-Noise Ratio (SNR) in two different scenarios, considering the mobility of the HAP, and as a function of the frequency and the distance.
Giovanni Grieco, Giovanni Iacovelli, Mattia Sandri, Marco Giordani, Michele Zorzi, Luigi Alfredo Grieco
2023-10-02T12:31:33Z
http://arxiv.org/abs/2310.01143v1
# Preliminary Performance Evaluation of a Satellite-to-HAP Communication Link ###### Abstract The emergence of Fifth-Generation (5G) communication networks has brought forth unprecedented connectivity with ultra-low latency, high data rates, and pervasive coverage. However, meeting the increasing demands of applications for seamless and high-quality communication, especially in rural areas, requires exploring innovative solutions that expand 5G beyond traditional terrestrial networks. Within the context of Non-Terrestrial Networks (NTNs), two promising technologies with vast potential are High Altitude Platforms (HAPs) and satellites. The combination of these two platforms is able to provide wide coverage and reliable communication in remote and inaccessible areas, and/or where terrestrial infrastructure is unavailable. This study evaluates the performance of the communication link between a Geostationary Equatorial Orbit (GEO) satellite and a HAP using the Internet of Drones Simulator (IoD-Sim), implemented in ns-3 and incorporating the 3GPP TR 38.811 channel model. The code base of IoD-Sim is extended to simulate HAPs, accounting for the Earth's curvature in various geographic coordinate systems, and considering realistic mobility patterns. A simulation campaign is conducted to evaluate the GEO-to-HAP communication link in terms of Signal-to-Noise Ratio (SNR) in two different scenarios, considering the mobility of the HAP, and as a function of the frequency and the distance. 6G; Non-Terrestrial Network (NTN); satellite communication; High Altitude Platform (HAP); ns-3. ## I Introduction Fifth-Generation (5G) wireless networks [1] started a new era in terms of connectivity by promising ultra-low latency, high data rates, and ubiquitous coverage. Still, as the demands for seamless, pervasive, and high-quality communications grow, innovative solutions are being explored to extend 5G beyond traditional terrestrial networks. Notably, Non-Terrestrial Networks (NTNs) [2] are capable of bridging geographical divides, and provide broadband standalone connectivity even in the absence of terrestrial infrastructures (e.g., in rural or remote areas) or when terrestrial infrastructures are unavailable (e.g., in case of emergency). In the context of NTN, two key technologies that hold very high potential are High Altitude Platforms (HAPs) and satellites. HAPs, also known as stratospheric platforms, are Unmanned Aerial Vehicles (UAVs) soaring in the stratosphere at altitudes ranging from \(20\) to \(50\,\mathrm{km}\). These platforms can be equipped with propulsion systems, typically based on propellers and electric motors, to move to different locations [3]. This capability allows them to be deployed as needed, providing coverage to specific areas or addressing changing communication demands. Moreover, HAPs can establish wireless links with satellites, other HAPs, low-altitude UAVs such as drones, and/or terrestrial networks. Indeed, flying at high altitudes, HAPs can offer wide coverage and strong line-of-sight connections, and establish reliable communication links in previously inaccessible regions. In turn, satellites have been used for decades, primarily for navigation, meteorology, or television broadcasting. However, with the advent of 5G, satellites are now considered an integral part of the communication infrastructure, supporting cost-effective, high-capacity, wide-coverage connectivity on the ground [4, 5].
The integration of HAPs and satellites into the 5G ecosystem brings several advantages. First, these aerial and space platforms can effectively bridge the digital divide by bringing high-speed connectivity to remote areas where ground infrastructure is limited or absent [6]. Moreover, HAPs and satellites can play a key role in disaster response and recovery scenarios, providing emergency communication networks when terrestrial infrastructure is (temporarily or permanently) disrupted or unavailable. Furthermore, as the demands of data-hungry applications increase, HAPs and satellites can supplement existing terrestrial networks, and relieve congestion by offloading traffic. Additionally, they can support critical applications requiring ubiquitous and uninterrupted connectivity, such as for autonomous vehicles, smart cities, and Internet of Things (IoT) devices that operate in remote or mobile environments. However, we claim that this potential can be maximized if HAPs and satellites work together as a multi-layered integrated network [7], rather than as standalone solutions. For example, the HAP layer can act as a wireless relay to improve the link quality of an upstream satellite. At the same time, the satellite layer can offer the HAP a ready-to-use link for the backhaul, as well as easy access to the core network. However, it is still unclear whether satellite-to-HAP communication is feasible and, if so, how it can be realized, which motivates our study. In this context, the Internet of Drones Simulator (IoD-Sim) [8] is a comprehensive simulation platform for the Internet of Drones (IoD) [9], which extends the Network Simulator 3 (ns-3) code base with additional features to simulate IoD networking elements (e.g., drones, network access points), entities, and mobility models. The scenario configuration can be easily defined in JavaScript Object Notation (JSON) by the user, and does not require particularly advanced coding expertise. Given its flexible and modular structure, it represents a valuable tool to design and evaluate NTN scenarios. In light of the above, this work evaluates the PHY-layer performance of the communication link between a HAP and a Geostationary Equatorial Orbit (GEO) satellite. With this aim, IoD-Sim has been extended to incorporate: (i) the channel model, standardized in 3GPP TR 38.811 [10] and implemented in the ns3-ntn module [11], to simulate HAP-to-satellite communication; (ii) HAP-specific mobility models that take into account the impact of the Earth's curvature; and (iii) new coordinate systems, i.e., geographic, geocentric, topocentric, and projected, to facilitate object placement and mobility. To assess this preliminary implementation, IoD-Sim and its extensions are tested in terms of Signal-to-Noise Ratio (SNR) as a function of the distance between the HAP and the GEO satellite and of the frequency. We consider two different scenarios with a real satellite position and (i) a HAP that moves from northern Europe to central Africa, and (ii) a HAP that hovers below the satellite. Notice that, even though the current version of IoD-Sim primarily focuses on channel and physical layer aspects, it can be readily integrated with the rest of the ns-3 protocol stack, and therefore represents a crucial tool to support more advanced end-to-end protocol design and evaluations in the context of NTN. The remainder of the paper is organized as follows. Sec. II describes our system and channel models, Sec.
III presents the simulation campaign and numerical results, and Sec. IV concludes the paper with final suggestions for future work. ## II System and Channel Model Implementation In this section we describe our system (Sec. II-A) and channel (Sec. II-B) models, and their implementation in IoD-Sim according to the structure in Figure 1.

Fig. 1: Class diagram of the recent additions introduced in IoD-Sim to simulate HAP-to-satellite communication.

### _System Model Implementation_ In this paper, GEO-to-HAP communication is referred to as a "mission" of \(T\) seconds, discretized into \(K\) time slots of equal duration \(\delta\). As a consequence, the HAP pursues a trajectory embodied by a set of discrete points, each expressed in terms of latitude, longitude, and altitude by the vector \(\textbf{q}_{k}\in\mathbb{R}^{3}\), with \(k=1,\ldots,K\). Similarly, the GEO satellite is located at \(\textbf{w}\in\mathbb{R}^{3}\), which obviously does not change over time. While latitude and longitude are expressed in radians, altitude is expressed in meters. For the sake of practicality, the trajectory curve of the HAP is generated leveraging a revised version of the original Bezier equation. It is defined by a set of \(N\) Points of Interest (PoIs), denoted as \(\textbf{P}=\left\{\textbf{p}_{0},\textbf{p}_{1},\ldots,\textbf{p}_{N-1}\right\}\), with \(\textbf{p}_{i}\in\mathbb{R}^{3}\) expressed in geographic coordinates. These PoIs are then projected onto a Cartesian space, known as the EPSG:3857 WGS84/Pseudo-Mercator projection [12], which is defined as \[x =\frac{2^{\alpha}}{2\pi}(\lambda+\pi), \tag{1}\] \[y =\frac{2^{\beta}}{2\pi}\left(\pi-\ln\left(\tan\left(\frac{\pi}{4} +\frac{\varphi}{2}\right)\right)\right),\] (2) \[z =z, \tag{3}\] where \(\lambda\) is the longitude, \(\varphi\) is the latitude, \(\alpha=25.059\), and \(\beta=24.665\). The latter two constants are used to normalize the Cartesian space's unit of measurement to meters. Each PoI is assigned a level of interest in \(\textbf{l}=\left\{l_{0},l_{1},\ldots,l_{N-1}\right\}\). As the level of interest of a certain PoI grows, the resulting trajectory of the HAP is configured to pass closer to the PoI itself, and can be expressed as \[\overline{\textbf{q}}_{k}=\sum_{i=0}^{N-1}\overline{\textbf{p}}_{i}\sum_{j=0} ^{l_{i}-1}\binom{\Lambda}{L_{i}+j}(1-t)^{\Lambda-L_{i}-j}t^{L_{i}+j}, \tag{4}\] with \(t=k/K\), while \(\overline{\textbf{q}}_{k},\forall k\), and \(\overline{\textbf{p}}_{i},\forall i\), represent the Mercator-projected coordinates [12]. Moreover, \(\Lambda=\left(\sum_{i=0}^{N-1}l_{i}\right)-1\) and \(L_{i}=\sum_{h=0}^{i-1}l_{h}\) are used as auxiliary variables. Notably, multiple Bezier curves can be defined to enforce this behavior, which only requires the geographical coordinates of the PoIs. In IoD-Sim, this mobility model is implemented in the ns3::ParametricSpeedDroneMobilityModel class, which was extended from the original code base in [8] with the boolean attribute UseGeographicSystem, and in the ns3::ConstantAccelerationDroneMobilityModel class. To evaluate the distance between two nodes at different heights, another Cartesian system should be used, i.e., the geocentric one. This reference system has its origin at the center of the Earth. To this end, the geographic coordinates \(\textbf{q}_{k},\forall k\), and **w** have been transformed using the WGS84 ellipsoid.
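Before detailing that transformation, the projection of Eqs. (1)-(3) can be sketched in a few lines of Python; the constants follow the text, while the helper name is ours and the example coordinates are the first PoI from Table I.

```python
import math

ALPHA, BETA = 25.059, 24.665   # normalization constants from Eqs. (1)-(2)

def pseudo_mercator(lon_rad, lat_rad, alt_m):
    """EPSG:3857-style projection of Eqs. (1)-(3); angles in radians, altitude in meters."""
    x = (2 ** ALPHA) / (2 * math.pi) * (lon_rad + math.pi)
    y = (2 ** BETA) / (2 * math.pi) * (math.pi - math.log(math.tan(math.pi / 4 + lat_rad / 2)))
    return x, y, alt_m

# First PoI (takeoff/landing): 78.244789 deg N, 15.4843571 deg E, 20 km altitude
x, y, z = pseudo_mercator(math.radians(15.4843571), math.radians(78.244789), 20e3)
```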
First, we compute the polar radius as \[r=\frac{a}{\sqrt{1-e^{2}\sin^{2}(\lambda)}}, \tag{5}\] where \(a=6378137\) and \(e=0.0818191908426215\) are the Earth's semi-major axis and its eccentricity, respectively. Then, the points in geocentric coordinates are expressed as: \[x^{\prime} =(r+z)\cos(\lambda)\cos(\varphi), \tag{6}\] \[y^{\prime} =(r+z)\cos(\lambda)\sin(\varphi),\] (7) \[z^{\prime} =((1-e^{2})r+z)\sin(\lambda). \tag{8}\] Therefore, the HAP-satellite distance, which accounts for the curvature of the Earth in each time slot, can be expressed as \(d_{k}=\|\textbf{q}^{\prime}_{k}-\textbf{w}^{\prime}\|\). As far as the system model is concerned, PHY-layer parameters can be set in IoD-Sim prior to the simulation via ns3::ThreeGppPhySimulationHelper, which is configured in the JSON file by ns3::ThreeGppLayerConfiguration, as illustrated in Figure 1 and as thoroughly described in [8, Sec. V-F]. The same logic is applied at the MAC layer via ns3::NullNtnDemoMacLayerSimulationHelper, where ns3::NullNtnDemoMacLayerConfiguration is also designed to verify that the PHY layer acts according to the reference standard. ### _Channel Model Implementation_ The HAP-to-satellite communication link is modeled according to the 3GPP TR 38.811 specifications [10], which are in turn based on the cellular channel model presented in 3GPP TR 38.901 [13]. A first characterization of the above has also been implemented in ns-3 in the ns3-ntn module [11], and eventually extended into the current version of IoD-Sim in the ns3::ThreeGppNTN[...]ChannelConditionModel and ns3::ThreeGppNTN[...]PropagationLossModel classes, as represented in Figure 1. Specifically, the simulator supports different 3GPP channel environments, i.e., dense urban, urban, suburban, and rural. The channel model accounts for several attenuation factors: basic path loss, atmospheric absorption, and scintillation. _Basic path loss:_ It is characterized by three main components, and can be written as \[PL_{b}=FSPL+SF+CL, \tag{9}\] where \(FSPL\) is the free-space path loss, \(SF\) is the shadow fading, and \(CL\) represents the clutter loss. The free-space path loss for the NTN scenario is defined as \[FSPL=32.45+20\log_{10}(f_{c})+20\log_{10}(d_{k}), \tag{10}\] where \(f_{c}\) is the carrier frequency in GHz, and \(d_{k}\) is the distance in meters between the HAP, at the generic point \(k\) of its trajectory, and the satellite. The shadow fading is modeled as a log-normal random variable, i.e., \[SF\sim N\left(0,\sigma_{SF}^{2}\right), \tag{11}\] and depends on the elevation angle, the visibility condition (i.e., line-of-sight or not), and the carrier frequency. The characterization of the clutter loss follows a similar model, even though it can usually be neglected in line-of-sight conditions. _Atmospheric absorption:_ Unlike in the terrestrial channel, in the NTN scenario the propagation of the signal undergoes an additional attenuation due to penetration through the atmosphere. While the complete model is available in the International Telecommunication Union (ITU) documents [14], the 3GPP TR 38.811 specifications adopt a simplified model considering only the annual mean values of absolute humidity, water-vapor density, water-vapor partial pressure, and dry air pressure for the atmosphere. Therefore, atmospheric absorption is given by \[PL_{A}=\frac{A_{\mathrm{zenith}}}{\sin(\xi)}, \tag{12}\] where \(A_{\mathrm{zenith}}\) is the absorption loss in dB at the zenith angle at a given carrier frequency, and \(\xi\) is the actual elevation angle.
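Before turning to the remaining attenuation terms, the geometric part of the link budget, Eqs. (5)-(8) and (10), can be sketched as follows; the satellite altitude and HAP position are those used later in Sec. III, and the function names are ours.

```python
import math

A_EARTH = 6378137.0               # semi-major axis [m]
E_EARTH = 0.0818191908426215      # eccentricity

def geocentric(lat_rad, lon_rad, alt_m):
    """Eqs. (5)-(8): WGS84 geographic -> geocentric Cartesian coordinates."""
    r = A_EARTH / math.sqrt(1 - (E_EARTH * math.sin(lat_rad)) ** 2)
    x = (r + alt_m) * math.cos(lat_rad) * math.cos(lon_rad)
    y = (r + alt_m) * math.cos(lat_rad) * math.sin(lon_rad)
    z = ((1 - E_EARTH ** 2) * r + alt_m) * math.sin(lat_rad)
    return x, y, z

def fspl_db(fc_ghz, d_m):
    """Eq. (10): free-space path loss, with fc in GHz and distance in meters."""
    return 32.45 + 20 * math.log10(fc_ghz) + 20 * math.log10(d_m)

hap = geocentric(math.radians(0.04), math.radians(-4.95), 20e3)
sat = geocentric(math.radians(0.04), math.radians(-4.95), 35_770_880.0)
d = math.dist(hap, sat)           # HAP hovering directly below the GEO satellite
print(f"d = {d/1e3:.0f} km, FSPL = {fspl_db(20.0, d):.1f} dB")   # FSPL ~ 209.5 dB
```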
The value of \(A_{\mathrm{zenith}}\) is given in [14]. Atmospheric absorption is relevant only for frequencies above \(10\,\mathrm{GHz}\), or in the case of an elevation angle lower than 10 degrees for all frequencies. An important absorption effect is due to the presence of oxygen, which produces a very significant attenuation at frequencies around \(60\,\mathrm{GHz}\)[14, Annex 1.1]. _Scintillation:_ It determines the rapid fluctuations of the phase and the amplitude of the signal, caused by small-scale changes in the structure of the atmosphere. Specifically, scintillation is due to two different contributions: tropospheric scintillation and ionospheric scintillation. The former is particularly significant for frequencies above 10 GHz and at low elevation angles due to the longer path of the signal. It is modeled as the 99th percentile of the attenuation level observed in Toulouse (France) at \(20\,\mathrm{GHz}\), as reported in [10, Figure 6.6.2.1-1]. Ionospheric scintillation, instead, is relevant only for latitudes below 20 degrees, or for frequencies below 6 GHz. It is expressed as \[PL_{IS}=\left(\frac{f_{c}}{4}\right)^{-1.5}\frac{P_{\mathrm{fluc}}\left(4\ \mathrm{GHz}\right)}{\sqrt{2}}, \tag{13}\] where \(P_{\mathrm{fluc}}\left(4\ \mathrm{GHz}\right)\) represents the ionospheric attenuation level observed in Hong Kong between March 1977 and March 1978 at a frequency of 4 GHz [10, Figure 6.6.6.1.4-1]. ## III Simulation Campaign In this section, we evaluate via simulation the channel link between a GEO satellite and a HAP. The satellite is located at \([0.04^{\circ},-4.95^{\circ},35\,770.88\,\mathrm{km}]\), which corresponds to the actual position of Eutelsat 5 West B. The HAP follows a curvilinear trajectory generated with 4 PoIs. Moreover, the HAP adopts the mobility model described in Sec. II-A, with a constant speed of \(24\,\mathrm{m/s}\). This leads to a total mission duration of \(T=1\,023\,160\,\mathrm{s}\simeq 284\) h. A comprehensive overview of the described scenario and mobility pattern is illustrated in Figure 2, while the simulation parameters are listed in Table I. The HAP and the GEO satellite are equipped with a circular aperture antenna operating at \(20\,\mathrm{GHz}\). This antenna, also known as a reflector, is modeled based on the ns3::CircularApertureAntennaModel class in the ns3-ntn module [11].
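As a sanity check on the numbers reported below, a back-of-the-envelope SNR computation with the parameters of Table I can be sketched as follows; it reuses `fspl_db` from the previous listing, ignores atmospheric and scintillation losses, and is an approximation rather than the IoD-Sim computation.

```python
import math

def snr_db(ptx_dbm, gtx_dbi, grx_dbi, fspl, nf_db=1.2, bw_hz=400e6):
    """Link budget: SNR = Ptx + Gtx + Grx - FSPL - N, with thermal noise
    N = -174 dBm/Hz + noise figure + 10*log10(B)."""
    noise_dbm = -174 + nf_db + 10 * math.log10(bw_hz)
    return ptx_dbm + gtx_dbi + grx_dbi - fspl - noise_dbm

# Values from Table I / Sec. III: Ptx = 37.5 dBm, GEO gain 58.5 dBi, HAP gain 39.7 dBi
print(f"SNR = {snr_db(37.5, 58.5, 39.7, fspl_db(20.0, 35_750_880.0)):.1f} dB")
# ~13 dB, consistent with the maximum SNR reported in Figure 3
```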
The HAP (GEO satellite) antenna has a maximum gain of \(39.7\,\mathrm{dBi}\) (\(58.5\,\mathrm{dBi}\)), a diameter of \(0.6\,\mathrm{m}\) (\(5\,\mathrm{m}\)), and an inclination angle of \(0^{\circ}\) (\(180^{\circ}\)).

\begin{table} \begin{tabular}{|l|l|} \hline Parameter & Value \\ \hline Mission duration (\(T\)) & \(284\,\mathrm{[h]}\) \\ 3GPP Environment & NTN Rural [10, Sec. 6.1.2] \\ Update period & \(1\,\mathrm{[s]}\) \\ Frequency (\(f_{c}\)) & \(20\,\mathrm{[GHz]}\) \\ Shadowing & Disabled \\ Time resolution & \(1000\,\mathrm{[s^{-1}]}\) \\ Bandwidth & \(400\,\mathrm{[MHz]}\) \\ EIRP density & \(40\,\mathrm{[dBW/MHz]}\) \\ Antenna noise figure & \(1.2\,\mathrm{[dB]}\) \\ HAP speed & \(24\,\mathrm{[m/s]}\) \\ GEO antenna gain & \(58.5\,\mathrm{[dBi]}\) \\ HAP antenna gain & \(39.7\,\mathrm{[dBi]}\) \\ GEO antenna radius & \(2.5\,\mathrm{[m]}\) \\ HAP antenna radius & \(0.3\,\mathrm{[m]}\) \\ GEO antenna inclination & \(180.0\,\mathrm{[deg]}\) \\ HAP antenna inclination & \(0\,\mathrm{[deg]}\) \\ 1st PoI (Takeoff/Landing) & \([78.244789^{\circ},\,15.4843571^{\circ},\,20\,\mathrm{km}]\) \\ 2nd PoI (Iran PoI) & \([35.7074505^{\circ},\,51.1498211^{\circ},\,20\,\mathrm{km}]\) \\ 3rd PoI (GEO Satellite) & \([0.04^{\circ},\,-4.95^{\circ},\,20\,\mathrm{km}]\) \\ 4th PoI (Iceland PoI) & \([64.133542^{\circ},\,-21.9348416^{\circ},\,20\,\mathrm{km}]\) \\ \hline \end{tabular} \end{table} TABLE I: Simulation parameters and settings.

Fig. 2: An overview of the trajectory of the HAP, its PoIs, and the satellite position over the Earth.

We focus on downlink communication, where signals are sent from the satellite to the HAP with a transmission power of \(37.5\,\mathrm{dBm}\) and a bandwidth of \(400\,\mathrm{MHz}\). We consider the channel model described in Sec. II-B, and a rural environment [10, Sec. 6.1.2] with the assumption of line-of-sight visibility. Given that the HAP flies in the stratosphere, i.e., at a fixed altitude of \(20\,\mathrm{km}\), we assume that the impact of shadowing as well as of tropospheric scintillation is negligible. Moreover, we consider the impact of atmospheric absorption through all the layers of the atmosphere, even though the HAP flies in the stratosphere, so as to obtain worst-case results. Considering the above setup, in Figure 3 we illustrate the evolution of the SNR over time of the link between the GEO satellite and the HAP, during the mission.

Fig. 3: The evolution of the SNR during the mission.

The markers refer to those in Figure 2, and indicate when the HAP reaches a certain PoI according to the given level of interest. As expected, as the HAP approaches the geographical position of the GEO satellite (i.e., the starred marker, corresponding to the area in the Gulf of Guinea), the SNR increases, thus reaching a maximum value of \(13.0584\,\mathrm{dB}\). With a bandwidth of 400 MHz as per the 3GPP TR 38.811 specifications, this corresponds to a PHY-layer capacity of approximately \(1.78\,\mathrm{Gbps}\), which is enough to realize HAP-to-satellite communication. However, the SNR drops below \(0\,\mathrm{dB}\) as the HAP moves farther away from the GEO satellite, i.e., as the length of the link between the two endpoints increases. For additional insights, Figure 4 shows the SNR as a function of the distance between the HAP and the GEO satellite projected over the Earth.

Fig. 4: SNR vs. the distance between the HAP and the GEO satellite, projected on the Earth.

As expected, the SNR is positive only for distances lower than \(\sim\)\(100\,\mathrm{km}\), which roughly corresponds to the service area of the HAP, and then drops below \(0\,\mathrm{dB}\) everywhere else. This is due to (i) the high directivity of reflector antennas, which poses a limit to the coverage radius of the HAP, and (ii) the higher path loss as the distance between the HAP and the GEO satellite increases, and the elevation angle between the two decreases accordingly.
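As a sanity check on the reported peak SNR, one can assemble a back-of-the-envelope link budget from the parameters of Table I. The sketch below assumes a conventional 290 K noise reference temperature (our assumption, not a documented IoD-Sim setting) and lands within a fraction of a dB of the reported maximum.

```python
import numpy as np

BOLTZMANN = 1.380649e-23  # [J/K]

def snr_db(eirp_dbw, g_rx_dbi, path_loss_db, bandwidth_hz, noise_figure_db, temp_k=290.0):
    """Received SNR = EIRP + G_rx - PL - (kTB + NF), with every term in dB."""
    noise_dbw = 10 * np.log10(BOLTZMANN * temp_k * bandwidth_hz) + noise_figure_db
    return eirp_dbw + g_rx_dbi - path_loss_db - noise_dbw

# Table I: an EIRP density of 40 dBW/MHz over 400 MHz gives a total EIRP of
# 66 dBW, matching 37.5 dBm Tx power plus the 58.5 dBi GEO antenna gain.
eirp_dbw = 40.0 + 10 * np.log10(400)
d_min = 35_770_880.0 - 20_000.0  # HAP hovering directly below the GEO satellite [m]
fspl = 32.45 + 20 * np.log10(20.0) + 20 * np.log10(d_min)  # Eq. (10)
print(f"peak SNR ~ {snr_db(eirp_dbw, 39.7, fspl, 400e6, 1.2):.1f} dB")  # ~13 dB
```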
Finally, we analyze a scenario in which the HAP hovers below the GEO satellite in the PoI of maximum link gain (i.e., the starred marker in Figure 2), and the frequency varies from \(20\) to \(100\,\mathrm{GHz}\). We can see in Figure 5 that the SNR decreases as the frequency increases, since the \(FSPL\) in Eq. (10) increases, with a significant drop around \(60\,\mathrm{GHz}\) due to the impact of oxygen absorption in the atmosphere (in the order of \(15\,\mathrm{dB/km}\)).

Fig. 5: SNR in the PoI of maximum link gain vs. the frequency.

Still, the SNR is consistently above \(0\,\mathrm{dB}\) for \(f_{c}\leq 50\) GHz, where the very large bandwidth at these frequencies can support high-rate transmissions. In conclusion, the above results demonstrate that NTN communication between a GEO satellite and a HAP can be effectively established, at least from a PHY-layer standpoint, and simulated using IoD-Sim.

## IV Conclusions and Future Work

This work presents a preliminary implementation in IoD-Sim of the channel model between a HAP and a GEO satellite, as per the 3GPP TR 38.811 specifications. To do so, IoD-Sim has been extended to simulate new mobility models for the HAP, and now also supports geocentric, geographic, topocentric, and projected coordinates. Simulation results show that high-capacity GEO-HAP communication is feasible, and we provide indications on the optimal set of frequencies and distances for maximum performance. Several aspects are not yet addressed in the simulator. Specifically, future implementation efforts will be focused on the modeling of (i) non-stationary satellite orbits, (ii) HAP and satellite power consumption, (iii) MAC-layer protocols that take into account NTN propagation delays, and (iv) an Integrated-T/NTN end-to-end communication stack for a comprehensive 6G simulation platform.

## Acknowledgments

This work was partially supported by the European Union under the Italian National Recovery and Resilience Plan (NRRP) of NextGenerationEU, with particular reference to the partnership on "Telecommunications of the Future" (PE00000001 - program "RESTART").
2306.15808
Classification of Infant Sleep/Wake States: Cross-Attention among Large Scale Pretrained Transformer Networks using Audio, ECG, and IMU Data
Infant sleep is critical to brain and behavioral development. Prior studies on infant sleep/wake classification have been largely limited to reliance on expensive and burdensome polysomnography (PSG) tests in the laboratory or wearable devices that collect single-modality data. To facilitate data collection and accuracy of detection, we aimed to advance this field of study by using a multi-modal wearable device, LittleBeats (LB), to collect audio, electrocardiogram (ECG), and inertial measurement unit (IMU) data among a cohort of 28 infants. We employed a 3-branch (audio/ECG/IMU) large scale transformer-based neural network (NN) to demonstrate the potential of such multi-modal data. We pretrained each branch independently with its respective modality, then finetuned the model by fusing the pretrained transformer layers with cross-attention. We show that multi-modal data significantly improves sleep/wake classification (accuracy = 0.880), compared with use of a single modality (accuracy = 0.732). Our approach to multi-modal mid-level fusion may be adaptable to a diverse range of architectures and tasks, expanding future directions of infant behavioral research.
Kai Chieh Chang, Mark Hasegawa-Johnson, Nancy L. McElwain, Bashima Islam
2023-06-27T21:44:05Z
http://arxiv.org/abs/2306.15808v1
Classification of Infant Sleep/Wake States: Cross-Attention among Large Scale Pretrained Transformer Networks using Audio, ECG, and IMU Data ###### Abstract Infant sleep is critical to brain and behavioral development. Prior studies on infant sleep/wake classification have been largely limited to reliance on expensive and burdensome polysomnography (PSG) tests in the laboratory or wearable devices that collect single-modality data. To facilitate data collection and accuracy of detection, we aimed to advance this field of study by using a multi-modal wearable device, LittleBeats (LB), to collect audio, electrocardiogram (ECG), and inertial measurement unit (IMU) data among a cohort of 28 infants. We employed a 3-branch (audio/ECG/IMU) large scale transformer-based neural network (NN) to demonstrate the potential of such multi-modal data. We pretrained each branch independently with its respective modality, then finetuned the model by fusing the pretrained transformer layers with cross-attention. We show that multi-modal data significantly improves sleep/wake classification (accuracy = 0.880), compared with use of a single modality (accuracy = 0.732). Our approach to multi-modal mid-level fusion may be adaptable to a diverse range of architectures and tasks, expanding future directions of infant behavioral research. ## I Introduction Sleep is a physiological and behavioral process central to brain development during infancy [1, 2, 3, 4]. During the first six months of life, infants spend more time sleeping than awake [5]. Further, indices of infant sleep quality and quantity are associated with subsequent cognitive and language development [6, 7, 8, 9, 10, 11], attention regulation [12, 13, 14], social-emotional functioning [10, 12], and physical health [15, 16]. Given the importance of infant sleep to development, automated and unobtrusive monitoring of sleep is critical. Thus, we focus on the task of sleep/wake classification in this study. Laboratory-based polysomnography (PSG) is the gold-standard sleep assessment for adults and infants alike [18, 19]. However, given the high level of burden and expense of laboratory PSG, researchers have used wearable devices for sleep monitoring [20]. Notably, past studies of sleep/wake classification using wearable devices typically extract sleep features from single-modality data. For example, with actigraphy data, [21] carried out extensive studies on adult sleep/wake classification using traditional signal processing methods, classic machine learning methods, and several deep learning methods. Electrocardiogram (ECG) signals are also often used because a stable QRS complex and a longer R-peak period can indicate sleep stages [22]. Studies such as [23] used ECG data to classify adult sleep/wake with a 5-layer convolutional neural network (CNN) concatenated with 2 fully connected (FC) layers. A signal-processing-based approach was suggested by [24], where audio recordings of adults' respiratory sounds were used to estimate sleep/wake likelihood from signal-level features such as autocorrelation. All published studies of automatic sleep/wake classification use different datasets, therefore their accuracies are not strictly comparable, but the accuracies reported in all four of these studies are between 81% and 84% (see Table I for details). A few studies have used bimodal data to further improve classification performance.
Ref. [25] concatenated the frequency bins from respiratory signals and ECG to feed into an FC-based network, achieving a classification accuracy of 85.3%. Ref. [26] used a random forest to classify sleep/wake from acceleration and heart rate data collected by the Apple Watch, achieving 87.3% accuracy. Recent work tends to focus on a more involved sleep classification with five stages. Ref. [27] extracted infant vocalizations from audio recordings and physical motions from video recordings, and classified sleep stages using a random forest, resulting in an average accuracy near 85%. Supratak et al.'s DeepSleepNet [28] and their subsequent state-of-the-art (SOTA) TinySleepNet [29] used only electroencephalogram (EEG) to reach 87.5% in accuracy. To achieve further improvements in accuracy, we leveraged trimodal data collected by an infant wearable device, LittleBeats (LB), as shown in Fig. 1.

Fig. 1: LittleBeats (center) is capable of collecting audio, ECG, and IMU data. Reproduced with permission from [17].

LB can synchronously record infant sounds (via microphone [audio]), physiology (via 3-lead ECG), and motion (via inertial measurement unit [IMU]) for extended periods of time (8-10 hours) in the home context. To perform sleep/wake classification using LB multi-modal data, we developed an ensemble of three large scale transformer networks pretrained on unimodal audio, ECG, and IMU datasets, and fine-tuned using trimodal data. This system has the potential to combine all the important feature extraction done by the research mentioned above, including the extraction of vocalization and breathing patterns from audio recordings, QRS complex features from ECG, and position and motion from IMU. To the best of our knowledge, our study is the first to assess these three modalities jointly for sleep/wake classification. We show superior classification performance using all three modalities compared with any single modality or pairs of modalities combined. In contributing to neural network (NN) architectures, we explore the benefits of pretraining large scale transformer networks on unlabeled audio, ECG, and IMU data, then finetuning a cross-attention based fusion architecture on a small LB labeled dataset. Unsupervised pretraining involves training an NN on a large, diverse dataset, which enables it to learn generic latent features that can be applied to a wide range of tasks. There is a wide range of pretraining schemes for audio, ECG, and IMU data. Specifically, wav2vec 2.0 (W2V2) [30] showed the benefits of pretraining audio using contrastive loss, achieving state-of-the-art performance in tasks such as automatic speech recognition. Ref. [31] combined W2V2 and contrastive multi-segment coding to pretrain on ECG signals, achieving outstanding performance on cardiac arrhythmia classification and patient identification. LIMU-BERT [32] is derived from bidirectional encoder representations from transformers (BERT) [33], and is used to pretrain transformers with unlabeled IMU data. Inspired by the above architectures, we pretrained three branches of large scale transformer networks, one for each modality, and were able to extract latent features useful for sleep/wake classification. To utilize features from all three modalities, we implemented a fusion strategy that relies on cross-attention among pretrained transformer layers. Multimodal fusion is often done by early fusion, where the input from each modality is concatenated and fed into the network as a single input, or late fusion, where each modality is processed separately and the outputs are combined at the final stage.
We define early fusion as fusing right after feature extraction and not fusion of raw data. Much work has been done on these 2 fusion techniques. Ref. [34] employed early fusion by cascading learned features from ECG and IMU directly, and passed them through a dense network. Ref. [35] described a simple late fusion of different branches by summing or averaging the logit outputs. Recently, intermediate fusion has become more common. This type of feature fusion happens in the middle of the NN, and uses a variety of architectures depending on the tasks. Ref. [36] extensively used so-called "cross-stitch" modules for soft parameter fusion, taking multiple linear weighted sums of each feature extraction layer as input for the next layer. Cross-attention based fusion techniques were explored in recent papers such as [37][38], where the attention layers take concatenated features from different modalities as input, and [39], where a single cross-attention layer is used to share information between branches. Our approach is innovative, first, in that it relies on the pretrained transformer layers from each branch, rather than training a feature sharing mechanism from scratch. Second, we fuse the three transformer networks by alternating self and cross-attention at different layers, which both preserves each branch's transformer features and incorporates attention from other modalities. In sum, in addition to being the first study to combine audio, ECG, and IMU for classification of infant sleep/wake states, we develop an innovative cross-attention based fusion for large scale pretrained transformer networks to combine three modalities. We not only reiterate the well known benefit of pretraining, but also demonstrate the ability of pretrained unimodal transformers to be fine-tuned using cross-attention-based multi-modal fusion to improve accuracy. We believe our work lays the groundwork for improved accuracy in a wide variety of signal processing tasks by the use of multimodal wearable devices such as LB. ## II Method Our architecture is an ensemble of three similarly structured transformer networks, with each branch targeting audio, ECG, or IMU data respectively, as shown in Fig. 2. For the audio branch, we utilize a standard 12-layer W2V2 [30] network because W2V2 repeatedly shows superb performance in various speech-related tasks such as automatic speech recognition and phoneme recognition. A recent study [40] showed an oracle W2V2 pretrained on 4300 hours of unlabeled infant-family audio data collected by LB or the Language ENvironment Analysis (LENA) system [41] increased performance of speech diarization and vocalization type classifications. As infant vocalization features may well include cues of sleep, we adopt pretrained weights from [40] for the audio W2V2 branch. As for ECG data, we were inspired by the similarity between speech and ECG. As early as 1940, researchers [42] were actively using impulse trains to imitate glottal excitation for speech generation. R-peaks in ECG data have a similar time domain structure, so a speech-centric NN should be able to learn ECG features. Speech and ECG differ in one respect: speech contains periodicity at many frequencies (pitch and formants), while ECG is dominated by the inter-beat interval, therefore ECG features may be learnable using less pretraining than speech. We pretrain a standard 12-layer W2V2 using 574 hours of unlabeled ECG data to utilize our observations above and to match the structure of the audio branch. 
IMU data does not share a similar structure with audio or ECG: an IMU signal is composed of parallel 3-axis accelerometer and 3-axis gyroscope signals, whose correlations are important for signal interpretation. To take full advantage of the information contained in our IMU data, we used a pretraining paradigm based on LIMU-BERT [32]. In place of W2V2's CNN feature extraction layers, LIMU-BERT begins with a single fully-connected feature projection layer per input sample, followed by several layers of transformers. This network is pretrained with Masked Language Model objectives and a Span Masking mechanism. We pretrained a modified LIMU-BERT architecture on 574 hours of IMU data collected from LB, where we lifted the restriction that all transformers share weights, extended the number of layers to 12, and matched the transformer implementation to the one found in W2V2. To maximize the model's ability to learn across modalities, or to extend the transformer attention beyond a single modality, we utilized linear transformation and cross-attention every 4 transformer layers, as inspired by [43] and their work using cross-attention across audio channels. An example of the cross-attention mechanism for the audio branch is shown in Fig. 3.

Fig. 3: Cross-attention for the audio branch. Multiheaded attention layer (MHA) takes the audio embedding as query. IMU and ECG outputs from the other two branches first get linearly projected and concatenated into a 2-channel embedding. The embedding is reduced to 1-channel by a 1-d convolution layer and passed into MHA as key and value.

A cross-attention layer for the audio branch consists of multiple cross-attention heads, each of which takes its query from the previous layer's audio embedding, and its values and keys from a fusion of the previous layer's ECG and IMU branches. ECG and IMU embeddings are first passed through two different linear projections producing an output with the frame rate and embedding dimension of the audio, then concatenated to create a 2-channel embedding. A channel-wise 1-d convolutional layer is then used to create the vector that serves as key and value of the audio cross-attention layer. The multiple audio cross-attention heads are then concatenated, added to a residual connection (from the previous audio layer), normalized, and passed through a feedforward layer. The ECG and IMU cross-attention layers have the same architecture, with each modality receiving query inputs from its own previous layer, while each modality's key and value inputs come from the other two modalities. Mathematically, we can denote each hidden input embedding to the transformer layer as \(\mathbf{H}_{k}\in\mathbb{R}^{N\times L_{e}}\), where the indices \((k,l,m)\) are any permutation of \(\{\text{audio, ECG, IMU}\}\), \(N\) is the number of samples in the input, and \(L_{e}\) is the architecture embedding size. Let us define the query, key, and value projections of each branch to be \(\mathbf{W}^{\{Q,K,V\}}\in\mathbb{R}^{L_{e}\times L_{e}}\), and constrain the discussion of attention to one head of one branch where \(k=\) audio. The attention layers can be self-attention or cross-attention depending on the layer index \(i\). We found using cross-attention at layers 1, 5, and 9 (i.e., \(i\%4=1\) for layers \(0\leq i\leq 11\)) gives the best performance. \[\text{SelfAtt}_{i,k}=\text{softmax}\left(\frac{\mathbf{H}_{k}\mathbf{W}_{i}^{Q}\left(\mathbf{H}_{k}\mathbf{W}_{i}^{K}\right)^{T}}{\sqrt{L_{e}}}\right)\mathbf{H}_{k}\mathbf{W}_{i}^{V}\] \[\text{CrossAttn}_{i,k}=\text{softmax}\left(\frac{\mathbf{H}_{k}\mathbf{W}_{i}^{Q}\left(\mathbf{H}_{\text{cross}}\mathbf{W}_{i}^{K}\right)^{T}}{\sqrt{L_{e}}}\right)\mathbf{H}_{\text{cross}}\mathbf{W}_{i}^{V}\] \[\mathbf{H}_{\text{cross}}=\mathbf{A}\begin{bmatrix}\text{proj}(\mathbf{H}_{l}^{T})\\ \text{proj}(\mathbf{H}_{m}^{T})\end{bmatrix}\] where \(\mathbf{A}\in\mathbb{R}^{L_{e}\times 2}\) is a trainable mixing weight, and the key/value embedding \(\mathbf{H}_{\text{cross}}\) is built from the other two branches \(l\) and \(m\). This discussion is applicable to the other heads and other branches.

Fig. 2: The LittleBeats Model Architecture. Each of the audio (top), ECG (center), and IMU (bottom) branches consists of feature extraction (convolutional and/or feedforward projection), positional encoding, and 12 transformer layers. Each branch is pretrained individually on unlabeled data. During labeled finetuning, the three branches are combined via cross-attention at the feature level. Their outputs are then concatenated and fed into dense layers to output logits.

The three branches' outputs are concatenated and passed through three dense layers with ReLU activations to produce binary logits for sleep and wake. With the pretrained weights loaded onto each branch and the ensemble in place, we use frame-wise cross entropy loss to finetune the network.
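The fusion mechanism above can be sketched in a few lines of PyTorch. This is an illustrative re-implementation, not the authors' released code: the module name is ours, frame-rate matching is done by plain interpolation, and the per-dimension mixing weight \(\mathbf{A}\) is approximated by a kernel-size-1 convolution over the two channels.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AudioCrossAttention(nn.Module):
    """One cross-attention layer of the audio branch (cf. Fig. 3): the audio
    embedding provides the query; a fused ECG/IMU embedding provides key/value."""

    def __init__(self, d_audio=768, d_ecg=768, d_imu=72, n_heads=16):
        super().__init__()
        self.proj_ecg = nn.Linear(d_ecg, d_audio)  # match the audio embedding size
        self.proj_imu = nn.Linear(d_imu, d_audio)
        self.mix = nn.Conv1d(2, 1, kernel_size=1)  # 2-channel -> 1-channel fusion
        self.attn = nn.MultiheadAttention(d_audio, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_audio)
        self.ff = nn.Sequential(nn.Linear(d_audio, 4 * d_audio), nn.GELU(),
                                nn.Linear(4 * d_audio, d_audio))

    def forward(self, h_audio, h_ecg, h_imu):
        b, t, d = h_audio.shape
        # Project ECG/IMU to the audio embedding size, then resample both
        # streams to the audio frame rate.
        ecg = F.interpolate(self.proj_ecg(h_ecg).transpose(1, 2), size=t).transpose(1, 2)
        imu = F.interpolate(self.proj_imu(h_imu).transpose(1, 2), size=t).transpose(1, 2)
        # Concatenate as a 2-channel embedding and mix down with a 1-d conv.
        h_cross = self.mix(torch.stack([ecg, imu], 1).reshape(b, 2, t * d)).reshape(b, t, d)
        out, _ = self.attn(h_audio, h_cross, h_cross)  # query = audio, key/value = fused
        out = self.norm(out + h_audio)                 # residual connection + layer norm
        return out + self.ff(out)                      # feedforward with residual

# 30 s inputs: audio/ECG at W2V2's ~50 frames/s, IMU at its native 150 Hz.
layer = AudioCrossAttention()
y = layer(torch.randn(2, 1499, 768), torch.randn(2, 1499, 768), torch.randn(2, 4500, 72))
```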
## III Experiment

### _Dataset_

Participants were recruited via study flyers distributed to local community organizations as well as online listservs serving families of young children. In the context of larger studies of child behavioral development, study coordinators provided participating families with an LB or LENA kit, along with instructions for conducting the recordings in the home. Families were asked to complete 2 to 3 daylong (8+ hours) recordings over a 2-week period. All study procedures were approved by the Institutional Review Board at the University of Illinois Urbana-Champaign. Unlabeled data included (a) 4300hr of audio home recordings (1000hr collected by LB and 3200hr by LENA) from 245 families with children under 5 years of age (see [40] for further details), as well as (b) 574hr of ECG data and (c) 574hr of IMU data collected by LB from 28 families with children (54% female) under 5 years of age (Mean = 26 months, Range: 3-65 months). For the labeled data, we gathered 68.5hr of synchronized audio, ECG, and IMU data from the same 28 families. Four trained human annotators labeled infant sleep and wake states from 1.5hr segments of LB audio files. Annotation of sleep states was facilitated by referring to (a) parental reports of their child's activities while wearing the LB device in the home, (b) visual inspection of the waveform of the audio file, and (c) listening to audio recordings for indicators of infant sleep (e.g., slowed steady breathing, no vocalizations or movement). We randomly divided the labeled data into a training set (50hr, 25 families), a validation set (4.5hr, 1 family), and a testing set (14hr, 2 families). Data from families/infants in the training, validation, and testing datasets did not overlap.

### _Data Preprocessing_

As a first step, we synchronized data across modalities. LB records each modality in slightly different time segments. For each segment, LB records the timestamps in absolute UTC start time and end time, and the associated start and end sample index. Formally, let us define \[\mathbf{T}_{k}^{\text{start}},\mathbf{T}_{k}^{\text{end}},\mathbf{S}_{k}^{\text{start}},\mathbf{S}_{k}^{\text{end}}\in\mathbb{R}^{L_{k}}\] where, for each modality, each element in \(\mathbf{T}\) is the absolute UTC start or end time, each element in \(\mathbf{S}\) is the starting or ending data sample index, and \(L\) is the number of data chunks. LB is designed to function continuously despite variability in the number of nonzero IMU samples; therefore, it sometimes fails to record all samples in a given audio or ECG segment. Such missed samples can be detected by comparing the known ground truth duration of a segment, \(F_{s,k}*(\mathbf{T}_{k}^{\text{end}}[i]-\mathbf{T}_{k}^{\text{start}}[i])\), to the number of samples it contains, \((\mathbf{S}_{k}^{\text{end}}[i]-\mathbf{S}_{k}^{\text{start}}[i])\), where \(F_{s,k}\) is the sampling frequency for modality \(k\). When an incomplete segment is detected, we fill in the missing data with zeros. We obtain \(\mathbf{\hat{Z}}_{k}\), where each of the vectors has length \(F_{s,k}*(\mathbf{T}_{k}^{\text{end}}[i]-\mathbf{T}_{k}^{\text{start}}[i])\). This is shown in Fig. 4.

Fig. 4: Zeroing data according to timestamp. For each data segment, the recorded UTC duration multiplied by the sampling frequency (\(F_{s,k}*(\mathbf{T}_{k}^{\text{end}}[i]-\mathbf{T}_{k}^{\text{start}}[i])\)) can exceed the recorded number of samples (\(\mathbf{S}_{k}^{\text{end}}[i]-\mathbf{S}_{k}^{\text{start}}[i]\)). The missing data are simply filled with zeros.

Timestamps of different modalities are approximately but not precisely aligned, because the writing to the SD card on LB is asynchronous. In order to make multimodal processing possible, we simply discard the data that did not overlap according to the timestamp and truncate \(\mathbf{\hat{Z}}_{k}\) to \(\mathbf{Z}_{k}\in\mathbb{R}^{L}\), where \(L\) is the same across all modalities. \(\mathbf{Z}_{k}\) is then segmented into 30s segments \(\mathbf{X}_{k}\). Human annotators mark the beginning and ending of each period of sleep. These times are used to assign, to each 30s segment \(\mathbf{X}_{k}\), a binary label \(\mathbf{Y}_{k}\in\{0,1\}\), where 0 is wake and 1 is sleep. If both sleep and wake labels are present in segment \(k\), \(\mathbf{Y}_{k}\) is set equal to the label with the longer duration.
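The zero-filling and majority-duration labeling just described are straightforward to express in code. Below is a schematic NumPy version; the function names and the sleep_spans format (annotated [start, end] times in seconds) are our own stand-ins for the actual LittleBeats pipeline.

```python
import numpy as np

def fill_missing(chunk, t_start, t_end, fs):
    """Zero-pad a recorded chunk when samples were dropped: the ground-truth
    length is fs * (t_end - t_start), cf. Fig. 4."""
    expected = int(round(fs * (t_end - t_start)))
    padded = np.zeros(expected, dtype=chunk.dtype)
    n = min(len(chunk), expected)
    padded[:n] = chunk[:n]
    return padded

def segment_and_label(z, fs, sleep_spans, seg_s=30):
    """Cut a synchronized stream into 30 s segments and assign each segment the
    majority-duration label (1 = sleep, 0 = wake)."""
    segments, labels = [], []
    for i in range(len(z) // (seg_s * fs)):
        segments.append(z[i * seg_s * fs:(i + 1) * seg_s * fs])
        t0, t1 = i * seg_s, (i + 1) * seg_s
        sleep_sec = sum(max(0, min(t1, e) - max(t0, s)) for s, e in sleep_spans)
        labels.append(1 if sleep_sec > seg_s / 2 else 0)
    return segments, labels

# Example: a 3-minute ECG stream with one annotated sleep period [40 s, 130 s].
ecg = fill_missing(np.random.randn(420_000), t_start=0.0, t_end=180.0, fs=2381)
segs, labs = segment_and_label(ecg, fs=2381, sleep_spans=[(40.0, 130.0)])
print(labs)  # -> [0, 1, 1, 1, 0, 0]
```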
### _Metric_

Because this is a binary classification task, we evaluated standard accuracy, precision, recall, F1-score, and Cohen's kappa \(\kappa\).

### _Implementation Details_

Audio sample rate is downsampled from 24000Hz to 16000Hz and ECG sample rate is upsampled from 2381Hz to 16000Hz, to match the default sampling rate of W2V2. IMU data are kept at a sampling rate of 150Hz because upsampling to 16000Hz might introduce too many artifacts. We did not experiment with data interpolation to fill out the zeros during time synchronization, nor with data augmentation to diversify the data. For the audio and ECG W2V2 branches, we used almost the same baseline structure as described in [30], with 12 transformer layers, hidden size 768, intermediate size 3072, and 16 attention heads. The convolutional feature extraction structure remains the same, with hidden size 512, kernel sizes [10,3,3,3,3,2,2], and strides [5,2,2,2,2,2,2]. For the IMU branch, the feature extraction is done by a linear layer, a normalization layer, and adding a position embedding. The decoding part has 12 transformer layers, hidden size 72, intermediate size 144, and 4 attention heads. Outputs of the three branches are concatenated into a vector of size 1608, passed through a dense structure as follows: a linear layer from size 1608 to size 1608, ReLU, a linear layer from size 1608 to 804, dropout of rate 0.1, and a linear layer from 804 to 2. Each experiment was run for 2 epochs with batch size 16, with the standard Adam optimizer. This was done on a single RTX A6000 GPU. Training code was based on Huggingface Transformers and the code is available for inspection and further development. 1 Footnote 1: This footnote will be replaced by a github URL containing an open-source implementation upon acceptance of the manuscript.

### _Baseline Implementation_

The majority of sleep/wake classification baselines did not release code, so we implemented some of their algorithms with modifications based on their descriptions in the papers. Specifically, for [25] we replaced the respiratory input with LB audio data, and trained with gradient descent instead of the Levenberg-Marquardt back-propagation algorithm. For [23], we used increasing strides ([1, 2, 3, 4, 5, 6]) for the CNN layers instead of using [2, 2, 2, 2, 2, 2], because each LB ECG segment has more samples than theirs. We also modified the dropout value to 0.1 based on empirical testing. [26] used 3-axis acceleration and heart rate data collected on the Apple Watch as their dataset. We calculated heart rate at each recorded segment from LB ECG data, using the python package HeartPy. We then input our timestamp, acceleration data, and calculated heart rate to their released code. Finally, the state-of-the-art sleep stage classifier [29] released their code online. We used ECG as input to the model instead of the EEG in the original paper.

## IV Results

### _Benefits of Three Modalities_

The first section in Table I shows how using all modalities together compares to using each of the modalities alone for the purpose of sleep/wake classification. The single-modality architecture takes each branch in Fig. 2, with pretrained weights, and adds the final three FC layers for classification. The proposed architecture is the entire model pretrained on unlabeled data and finetuned with the proposed cross-attention at every 4 layers. We see that, by quite a large margin, using all three modalities together as proposed, the network is able to learn more about infants' sleep patterns and achieve better performance across the board than using just a single modality. Note that while recall for the ECG branch outperforms the proposed network by 5%, its precision is lower by 20%, suggesting that the unimodal ECG network is a worse classifier since it too frequently classifies segments as "sleep." The second section in Table I presents results of key baseline implementations on the LB dataset, as described in Section III-E. Because [25] and [23] have not released their code and [26] and [29] are not designed to work strictly on ECG data, results need to be interpreted with caution. However, we can still observe that our proposed model using all three modalities outperforms all key baselines, which only use one or two modalities. Numbers in the last section in Table I are results reported in other papers, using other datasets, and are therefore not strictly comparable to the numbers in the first two sections. The classifiers summarized by these results are sleep/wake classifiers using audio only [24], ECG only [23], ECG and respiratory signals [25], motion [21], and ECG and IMU [26].
While all datasets are different and comparison of the accuracies is therefore not theoretically justifiable, such a comparison nevertheless supports the conclusion displayed in the top half of the table, viz., that sleep/wake classification performed using three modalities is more accurate than sleep/wake classification performed using only one or two modalities. ### _Benefits of Pretraining_ Table II shows how pretraining affects the performance of the proposed architecture. Here we configured the model to use all three branches, but only load the pretrained weights for specific modalities. Cross-attention is applied at every \(4^{\text{th}}\) layer for the 12 layers of transformers. We see that every time we add pretrained weights for a particular branch, we have a performance increase across all metrics. This reinforces our understanding of pretrained networks. By pretraining each branch using the unlabeled data, then fine-tuning the entire system together against a small number of labeled data, we are able to get better performance. ### _Fusion Techniques_ Our discussion is incomplete without comparison to early and late fusion techniques discussed in the literature [35]. Table III presents performance of 2 fusion methods against the proposed cross-attention-based intermediate fusion technique. Early fusion is inspired by [34], where convolutional features extracted from ECG and IMU data are concatenated with anthropometric data as input to a downstream neural network. In our variation, we concatenate the three outputs from the feature extractor for each branch as shown in Fig. 2, skipping the transformer layers, and pass through the dense network for fine-tuning. As for late fusion, we leave in the transformer layers, triplicate the FC layers to generate logits for each modality, and average the three logits for evaluation. The proposed cross-attention fusion achieved better performance in almost every metric except that it has a lower recall than late fusion. Similar to the trend found in Table I, this high recall for late fusion results from a disproportional confusion matrix, as shown by late fusion's low precision score. Therefore, the proposed fusion architecture is still favorable in general. ### _Ablation Study_ Table IV shows the effect of using cross-attention at different layers. All models were fine-tuned on labeled multimodal data after loading pretrained weights for all three modalities separately. "-" means the architecture did not converge. We empirically found that cross-modal attention once every 4 layers (\(i\%4=1\)) gives the best performance across the board. Many models with cross-modal attention fail to converge: convergence failure occurs with cross-modal attention every second layer, and with cross-modal attention in the middle four layers (\(4\leq i<8\)) or the last four layers (\(8\leq i<12\)). Cross-modal attention in the first four layers (\(0\leq i<4\)) converged to a system with accuracy lower than that of the system with no cross-modal attention. The sensitivity of cross-modal attention to architectural configuration was an unexpected result. We defer this to future studies. ## V Conclusion With the development of multi-modal wearable devices, we are able to gather synchronized audio, ECG, and IMU data for the task of infant sleep/wake classification. We demonstrated the best classification performance when using all three modalities compared with our own single modality network or single/double modality network found in the literature. 
In addition, we developed an ensemble of large scale pretrained transformer networks, fusing the pretrained transformer layers with cross-attention at every 4 layers. This fusion method is not limited to the task of wake/sleep classification, but seems likely to generalize successfully to any multi-modal network in which all modalities have the same number of pretrained transformer layers. Our work presents exciting directions for multi-modal studies of infant and child development. ## Acknowledgment We would like to thank the families who participated in this research, as well as Jordan Bodway and Jenny Baldwin for their assistance with data collection and processing. This work was supported by funding from the National Institute of Mental Health (R21MH112578), National Institute of Drug Abuse (R34DA050256), the National Institute of Food and Agriculture (ILLU-793-368), and the Personalized Nutrition Initiative and Center for Social and Behavioral Science at the University of Illinois at Urbana-Champaign through the Seed Grant program.
2303.16507
Improving Object Detection in Medical Image Analysis through Multiple Expert Annotators: An Empirical Investigation
The work discusses the use of machine learning algorithms for anomaly detection in medical image analysis and how the performance of these algorithms depends on the number of annotators and the quality of labels. To address the issue of subjectivity in labeling with a single annotator, we introduce a simple and effective approach that aggregates annotations from multiple annotators with varying levels of expertise. We then aim to improve the efficiency of predictive models in abnormal detection tasks by estimating hidden labels from multiple annotations and using a re-weighted loss function to improve detection performance. Our method is evaluated on a real-world medical imaging dataset and outperforms relevant baselines that do not consider disagreements among annotators.
Hieu H. Pham, Khiem H. Le, Tuan V. Tran, Ha Q. Nguyen
2023-03-29T07:34:20Z
http://arxiv.org/abs/2303.16507v1
Improving Object Detection in Medical Image Analysis through Multiple Expert Annotators: An Empirical Investigation ###### Abstract The work discusses the use of machine learning algorithms for anomaly detection in medical image analysis and how the performance of these algorithms depends on the number of annotators and the quality of labels. To address the issue of subjectivity in labeling with a single annotator, we introduce a simple and effective approach that aggregates annotations from multiple annotators with varying levels of expertise. We then aim to improve the efficiency of predictive models in abnormal detection tasks by estimating hidden labels from multiple annotations and using a re-weighted loss function to improve detection performance. Our method is evaluated on a real-world medical imaging dataset and outperforms relevant baselines that do not consider disagreements among annotators. This work1 has been accepted for publication by IEEE Access and its full form can be found in [7]. We also released the dataset and the code used in this study2. Footnote 1: This is a short version submitted to the Midwest Machine Learning Symposium (MMLS 2023), Chicago, IL, USA. Footnote 2: [https://vindr.ai/datasets/cxr](https://vindr.ai/datasets/cxr).

## 2 Learning from multiple annotators

Existing works that are highly related to ours include learning from multiple annotators [20, 21, 22, 24, 25, 27, 29, 31, 33, 34, 35] and weighted training techniques [1, 2, 3, 9, 23, 26, 32]. Unlike the approaches above, we aggregate annotations from multiple annotators and propose a re-weighted loss function that assigns more weight to the more confident examples, as determined by the consensus of multiple annotators. Figure 1 shows an overview of our method.

Figure 1: Illustration of the proposed approach.

Specifically, we first estimate the actual labels using the WBF algorithm [28]. They are then used to train a typical object detector with a re-weighted loss function. Note that a general form of the loss function for those detectors can be written as \[\mathcal{L}\left(p,p^{*},t,t^{*}\right) =\mathcal{L}_{cls}\left(p,p^{*}\right)+\beta I(t)\mathcal{L}_{loc}\left(t,t^{*}\right) \tag{1}\] \[I(t) =\left\{\begin{array}{ll}1&\text{if}\,\mathrm{IoU}\left\{a,a^{*}\right\}>\eta\\ 0&\text{otherwise,}\end{array}\right.\] where \(t\) and \(t^{*}\) are the predicted and ground truth box coordinates, and \(p\) and \(p^{*}\) are the class category probabilities, respectively; \(\mathrm{IoU}\left\{a,a^{*}\right\}\) denotes the Intersection over Union (IoU) between the anchor \(a\) and its ground truth \(a^{*}\); \(\eta\) is an IoU threshold for objectness, i.e., the confidence score of whether there is an object or not; \(\beta\) is a constant for balancing the two loss terms \(\mathcal{L}_{cls}\) and \(\mathcal{L}_{loc}\) [36]. We use the fused-box confidence scores \(c_{k}^{i}\) obtained from the WBF algorithm [28] to obtain a re-weighted loss function that emphasizes boxes with high annotator agreement. The new loss function can now be written as \[\mathcal{L}\left(p,p^{*},t,t^{*}\right)=c\mathcal{L}_{cls}\left(p,p^{*}\right)+c\beta I(t)\mathcal{L}_{loc}\left(t,t^{*}\right). \tag{2}\]
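To make the aggregation and re-weighting concrete, the sketch below pairs the WBF reference implementation (the ensemble-boxes package released with [28]) with the loss of Eq. (2). Giving every annotator box a unit score makes the fused confidence \(c\) an agreement measure; the threshold values here are illustrative, not the paper's settings.

```python
import torch
from ensemble_boxes import weighted_boxes_fusion  # WBF implementation from [28]

# Three annotators mark boxes on one image (normalized [x1, y1, x2, y2]).
# Boxes drawn by all three keep c close to 1; singleton boxes get c ~ 1/3.
boxes_list = [[[0.10, 0.10, 0.50, 0.50]],
              [[0.12, 0.11, 0.52, 0.48]],
              [[0.11, 0.09, 0.51, 0.49], [0.60, 0.60, 0.90, 0.90]]]
scores_list = [[1.0], [1.0], [1.0, 1.0]]
labels_list = [[0], [0], [0, 0]]

fused_boxes, c, fused_labels = weighted_boxes_fusion(
    boxes_list, scores_list, labels_list, iou_thr=0.5, skip_box_thr=0.0)

def reweighted_loss(cls_loss, loc_loss, c, beta=1.0):
    """Eq. (2): scale both loss terms by the agreement confidence c, so that
    well-agreed boxes dominate training."""
    return c * cls_loss + c * beta * loc_loss

# Example with placeholder per-box losses from any detector head.
c_t = torch.as_tensor(c, dtype=torch.float32)
loss = reweighted_loss(torch.rand(len(c_t)), torch.rand(len(c_t)), c_t).mean()
```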
## 3 Experiments and Results

We validated our method on VinDr-CXR [11], a real-world chest X-ray dataset with labels provided by multiple radiologists, for a typical medical imaging task [10, 12, 13, 14, 15, 16, 17, 30]. It consists of 18,000 chest X-ray scans, with 15,000 in the training set and 3,000 in the testing set. We compared the performance of the proposed method against (1) a baseline using all annotations as the ground truth; (2) models trained on annotations provided by each annotator independently (Annotators #1, #2, #3); and (3) an ensemble of independent models trained on separate radiologists' annotation sets. Table 1 reports the experimental results in terms of the mAP@0.4 score. Our method outperforms the baselines, the individual models, and even the ensemble of the individual experts' models. The experimental results validated the effectiveness of the proposed method.

## 4 Discussions and Conclusion

The proposed method is the first effort to train an image detector from labels provided by multiple annotators. We empirically showed a notable improvement in terms of mAP scores by estimating the true labels and then integrating the implicit annotators' agreement into the loss function to emphasize the accurate bounding boxes over the imprecise ones. The idea is simple but effective, allowing the overall framework to be applied in training any machine-learning-based detector. However, the overall architecture is not end-to-end. It may not fully exploit the benefits of combining truth inference and training the desired image detector.
2304.09480
A revisit on the hydrogen atom induced by a uniform static electric field
In this paper, we revisit the Stark effect of the hydrogen atom induced by a uniform static electric field. In particular, a general formula for the integral of associated Laguerre polynomials was derived by applying the method for Hermite polynomials of degree n proposed in the work [Anh-Tai T.D. et al., 2021 AIP Advances \textbf{11} 085310]. The quadratic Stark effect is obtained by applying this formula and the time-independent non-degenerate perturbation theory to hydrogen. Using the Siegert State method, numerical calculations are performed and serve as data for benchmarking. The comparisons are then illustrated for the ground and some highly excited states to provide an insightful look at the applicable limit and precision of the quadratic Stark effect formula for other atoms with comparable properties.
Tran Duong Anh-Tai, Le Minh Khang, Nguyen Duy Vy, Thu D. H. Truong, Vinh N. T. Pham
2023-04-19T08:04:07Z
http://arxiv.org/abs/2304.09480v3
# A pedagogical revisit on the hydrogen atom induced by a uniform static electric field ###### Abstract In this article, we pedagogically revisit the Stark effect of hydrogen atom induced by a uniform static electric field. In particular, a general formula for the integral of associated Laguerre polynomials was derived by applying the method for Hermite polynomials of degree n proposed in the work [Anh-Tai T.D. et al., 2021 AIP Advances **11** 085310]. The quadratic Stark effect is obtained by applying this formula and the time-independent non-degenerate perturbation theory to hydrogen. Using the Siegert State method, numerical calculations are performed and serve as data for benchmarking. The comparisons are then illustrated for the ground and some highly excited states to provide an insightful look at the applicable limit and precision of the quadratic Stark effect formula for other atoms with comparable properties. + Footnote †: : _Eur. J. Phys._ _Keywords_: hydrogen atom, Stark effect, perturbation theory. + ## 1 Introduction Atomic/molecular ionization is one of the most fundamental quantum mechanical phenomena and a comprehensive understanding of its mechanism could have implications for a variety of physics topics. In the last three decades, strong-electric-field ionization of atoms and molecules has attracted a great deal of experimental and theoretical interest due to its abundance of non-linear physical processes, such as high-harmonic generation (HHG) [1, 2, 3] and non-sequential double ionization (NSDI) [4, 5, 6, 7, 8]. In the presence of a weak electric field, the spectral levels of atomic bound states are typically divided and shifted, resulting in the well-known Stark effect--the effect named after Johannes Stark, who discovered it in 1913 [9, 10, 11]. P. Epstein introduced the first satisfactory theoretical explanation of this effect using the old quantum mechanics [12]. Subsequently, E. Schrodinger also provided an explanation for the hydrogen Stark effect as a part of the formalism of wave mechanics [13]. P. Epstein independently revisited the problem using wave mechanics [14]. It is noteworthy to note that Epstein solved the problem using parabolic coordinates, which have more advantages than spherical coordinates and are widely employed in the study of strong-field ionization of atoms and molecules [15, 16, 17, 18, 19]. Various approaches, including the WKB [20, 21], group theory [22], hypervirial relations [23], numerical methods [24, 25, 26, 27], Bender-Wu asymptotic formula [28], classical viewpoint [29], and a generalization to arbitrarily high-order perturbed corrections [30, 31, 32], have revisited the problem. Intriguingly, experimental imaging of the nodal structure of hydrogen Stark states has recently revealed a significant finding [33]. Consequently, it is evident that the Stark effect and the hydrogen atom continue to play vital roles in modern physics. The perturbation theory is a fundamental and essential approximation method in quantum mechanics and is a typical pedagogical topic in standard quantum mechanics textbooks [34, 35, 36, 37, 38, 39, 40, 41]. Since the calculation is simple enough, the derivation of the Stark effect of the hydrogen atom with the second-order correction, also known as the quadratic Stark effect, would be an outstanding pedagogical illustration for perturbation theory at the undergraduate level. 
In order for the perturbation theory to be legitimate, the applied electric field potential must be sufficiently small, resulting in a weak electric-field strength. Nonetheless, we find that defining a weak electric field is not trivial. In early experiments [42, 43, 44], for instance, field strengths on the order of \(10^{-7}\)-\(10^{-6}\) were considered strong; however, in more recent experiments [45, 33, 46], a field strength of 0.1 a.u. is regarded as weak. To the best of our knowledge, the applicable interval for the approximation of the formula characterizing the Stark effect of hydrogen derived from perturbation theory has not yet been thoroughly investigated. Consequently, it is essential to revisit the Stark effect of atomic hydrogen from a pedagogical perspective and also consider the remaining issues discussed previously. Our primary objective is to present a mathematically rigorous and straightforward method for approximating the eigenvalues of a hydrogen atom induced by a uniform static electric field, as a pedagogical illustration of using advanced one-variable calculus to solve fundamental quantum mechanics problems for undergraduates. The quantum mechanics is represented by perturbation theory and the non-relativistic hydrogen atom in parabolic coordinates, while the advanced one-variable calculus is represented by the integral of associated Laguerre polynomials. In our method, the operators are expressed in parabolic coordinates, which are efficient and widely utilized for atomic ionization studies. Using the parabolic coordinates effectively reduces the number of analytical calculations, as the integrals of the associated Laguerre polynomials can be calculated using the general formula. Moreover, by performing numerical calculations based on the Siegert State (SS) method [15, 16, 17], we also address the issue of evaluating the exactness and the range of field strength over which the approximated formula continues to accurately characterize the dependence of the eigenvalues on the field strength. The comparisons are illustrated for the ground state and some highly-excited states with the principal quantum number up to \(n=10\). The article's structure is as follows. In Section 2, the general formula for the integral of associated Laguerre polynomials is derived in detail. In Section 3, we present the derivation of the formula characterizing the quadratic Stark effect of hydrogen as a pedagogical application of using advanced one-variable calculus to solve quantum mechanical problems, as well as comparisons of the approximate formula and numerical calculations. The conclusion is presented in Section 4. This article employs the atomic units in which \(\hbar=m_{e}=e=1\) for the sake of simplification. \(X^{(n)}\) denotes the \(n\)th-order perturbative correction to the physical quantity \(X\).

## 2 Derivation of the general formula for the integral of associated Laguerre polynomials

Following the procedure proposed in Ref. [47] to deduce the general formula for integrals of Hermite polynomials, this section derives the general formula for the integral of the associated Laguerre polynomials, which is given by \[Z(\alpha,k,k^{\prime})=\int\limits_{0}^{\infty}x^{\alpha}u_{k,m}\left(x\right)u_{k^{\prime},m}\left(x\right)dx, \tag{1}\] where \[u_{k,m}\left(x\right)=x^{m/2}\exp(-x/2)L_{k}^{m}(x), \tag{2}\] with \(L_{k}^{m}(x)\) being the associated Laguerre polynomial and \(\alpha\) a non-negative integer. The two associated Laguerre polynomials in Eq.
(1) are then expanded by the generating function [48] as shown: \[U(x,t) =\sum\limits_{k=0}^{\infty}L_{k}^{m}(x)t^{k}=\frac{1}{(1-t)^{m+1} }\exp\left(-\frac{xt}{1-t}\right), \tag{3}\] \[V(x,y) =\sum\limits_{k^{\prime}=0}^{\infty}L_{k^{\prime}}^{m}(x)y^{k^{ \prime}}=\frac{1}{(1-y)^{m+1}}\exp\left(-\frac{xy}{1-y}\right), \tag{4}\] where \(|t|<1\), and \(|y|<1\). Multiplying Eqs. (3) and (4) side by side and by \(x^{m+\alpha}\exp(-x)\), then taking the integral from \(0\) to \(\infty\), we get \[\sum\limits_{k=0}^{\infty}\sum\limits_{k^{\prime}=0}^{\infty} \left[\int\limits_{0}^{\infty}x^{m+\alpha}\exp(-x)L_{k}^{m}(x)L_{k^{\prime}}^ {m}(x)dx\right]t^{k}y^{k^{\prime}}=\\ \frac{1}{(1-t)^{m+1}(1-y)^{m+1}}\int\limits_{0}^{\infty}x^{m+ \alpha}\exp\left[-x\left(1+\frac{t}{1-t}+\frac{y}{1-y}\right)\right]dx. \tag{5}\] The result of the integral on the right hand side (RHS) of Eq. (5) can be straightforwardly obtained, \[I=\int\limits_{0}^{\infty}x^{m+\alpha}\exp\left[-x\left(1+\frac{t}{1-t}+\frac{y}{ 1-y}\right)\right]dx=\frac{(m+\alpha)!(1-t)^{m+\alpha+1}(1-y)^{m+\alpha+1}}{(1 -ty)^{m+\alpha+1}}, \tag{6}\] by applying the integral \(\int\limits_{0}^{\infty}x^{n}\exp(-px)dx=\frac{n!}{p^{n+1}}.\) Substituting Eq. (6) back into the RHS of Eq. (5), we obtain \[RHS=\frac{(m+\alpha)!(1-t)^{\alpha}(1-y)^{\alpha}}{(1-ty)^{m+\alpha+1}}. \tag{7}\] The binomials in Eq. (7) are then expanded as \[(1-ty)^{-(m+\alpha+1)} =\sum\limits_{s=0}^{\infty}\binom{m+s+\alpha}{s}(ty)^{s}, \tag{8}\] \[(1-t)^{\alpha}(1-y)^{\alpha} =\sum\limits_{i=0}^{\alpha}\sum\limits_{j=0}^{\alpha}(-1)^{i+j} \binom{\alpha}{i}\binom{\alpha}{j}t^{i}y^{j}. \tag{9}\] Hence, the RHS can be rewritten as \[RHS=\sum\limits_{s=0}^{\infty}\sum\limits_{i=0}^{\alpha}\sum\limits_{j=0}^{ \alpha}(-1)^{i+j}\binom{\alpha}{i}\binom{\alpha}{j}\binom{m+s+\alpha}{s}t^{s+ i}y^{s+j}. \tag{10}\] The value of \(Z(\alpha,k,k^{\prime})\) is obviously the coefficient of \(t^{s+i}y^{s+j}\) and can therefore be determined by unifying the coefficients of \(t^{s+i}y^{s+j}\) from both sides of Eq. (5). We acquire \[\begin{cases}s+i=k\\ s+j=k^{\prime}\end{cases}\Rightarrow k^{\prime}=k-i+j. \tag{11}\] Consequently, the desired formula for the general integral of the associated Laguerre polynomials is attained \[Z(\alpha,k,k^{\prime})=\sum\limits_{i=0}^{\alpha}\sum\limits_{j=0}^{\alpha}(- 1)^{i+j}\binom{\alpha}{i}\binom{\alpha}{j}\frac{(m+\alpha+k-i)!}{(k-i)!} \delta_{k^{\prime},k-i+j}. \tag{12}\] \(Z(\alpha,k,k^{\prime})\) is non-zero if, for a given set of \(\alpha\) and \(k\), only \(k^{\prime}\) satisfies the selection rule determined by the Kronecker delta, \(\delta_{k^{\prime},k-i+j}\). To illustrate the application of the general formula Eq. (12), we take the known integral [48] \[I_{1}=\int\limits_{0}^{\infty}u_{k,m}(x)u_{k^{\prime},m}(x)dx=\int\limits_{0} ^{\infty}x^{m}\exp(-x)L_{k}^{m}(x)L_{k^{\prime}}^{m}(x)dx=\frac{(k+m)!}{k!} \delta_{k^{\prime},k}, \tag{13}\] as an illustration. One can readily observe that the integral \(I_{1}\) is equivalent to the case where \(\alpha=0\) and that the indices \(i=j=0\). Substituting \(\alpha=i=j=0\) back into Eq. (12) yields \[Z(0,k,k^{\prime})=(-1)^{0+0}\binom{0}{0}\binom{0}{0}\frac{(m+0+k-0)!}{(k-0)!} \delta_{k^{\prime},k-0+0}=\frac{(k+m)!}{k!}\delta_{k^{\prime},k}. 
\tag{14}\] The well-known integral [48] \[I_{2}=\int\limits_{0}^{\infty}x^{m+1}\exp(-x)L_{k}^{m}(x)L_{k^{\prime}}^{m}(x)dx=\frac{(k+m)!}{k!}(2k+m+1)\delta_{k^{\prime},k}, \tag{15}\] will be left as an exercise, and we encourage readers to derive the result in Eq. (15) on their own.

## 3 Results and discussion

In the following, the eigenvalue problem of the Hamiltonian describing non-relativistic hydrogen induced by a uniform static electric field directed along the \(z\)-axis, \[\widehat{H}=-\frac{1}{2}\Delta-\frac{1}{r}+Fz, \tag{16}\] is discussed. Here, \[\widehat{H}_{0}=-\frac{1}{2}\Delta-\frac{1}{r}, \tag{17}\] is the Hamiltonian characterizing the non-relativistic hydrogen atom in the Coulomb field, and \(Fz\) is the potential resulting from the uniform static electric field with strength \(F\geq 0\). Here, \(r\) and \(z\) represent the distance and \(z\)-axis displacement of the electron from the proton, which is assumed to be at the origin of the coordinate system, respectively. In the parabolic coordinates, the three-dimensional Laplacian operator, \(\Delta\), is given by \[\Delta=\frac{4}{\xi+\eta}\frac{\partial}{\partial\xi}\left(\xi\frac{\partial}{\partial\xi}\right)+\frac{4}{\xi+\eta}\frac{\partial}{\partial\eta}\left(\eta\frac{\partial}{\partial\eta}\right)+\frac{1}{\xi\eta}\frac{\partial^{2}}{\partial\varphi^{2}}, \tag{18}\] where \((\xi,\eta,\varphi)\) denote the \(\xi\) coordinate, the \(\eta\) coordinate, and the azimuthal angle, respectively. The parabolic coordinates are connected to the spherical coordinates via the transformation \[\xi=r+z\left(0\leq\xi<+\infty\right),\quad\eta=r-z\left(0\leq\eta<+\infty\right),\quad\varphi\left(0\leq\varphi\leq 2\pi\right), \tag{19}\] and the Jacobian of the transformation of coordinates is given by \[|J|=\frac{\xi+\eta}{4}. \tag{20}\] In the absence of an electric field, the general Hamiltonian Eq. (16) reduces to \(\widehat{H}_{0}\), whose bound-state eigenvalues and eigenfunctions are already known. Since the analytical derivation of the eigenvalues and eigenfunctions of the hydrogen atom in parabolic coordinates has been rigorously presented in Refs. [34, 35, 49], we briefly present the necessary results for the derivation of the hydrogen quadratic Stark effect formula in the following. The bound-state eigenvalues are given as \[E^{(0)}=-\frac{1}{2n^{2}}, \tag{21}\] where \(n\) is the principal quantum number. The parabolic coordinates enable us to express the bound-state eigenfunctions of \(\widehat{H}_{0}\) as \[\psi_{n_{1},n_{2},m}^{(0)}(\xi,\eta,\varphi)=Au_{n_{1},m}\left(\xi/n\right)u_{n_{2},m}\left(\eta/n\right)\exp(im\varphi), \tag{22}\] with \[A=\frac{1}{n^{2}}\sqrt{\frac{n_{1}!n_{2}!}{\pi(n_{1}+m)!(n_{2}+m)!}}, \tag{23}\] being the normalization factor. In parabolic coordinates, a bound state of the hydrogen atom is denoted by a set of three quantum numbers \((n_{1},n_{2},m)\), where \(n_{1}\) and \(n_{2}\) represent the parabolic quantum numbers describing the \(\xi\) and \(\eta\) variables, respectively, and \(m\) is the magnetic quantum number. Note that in the calculations that follow, the magnetic quantum number should be positive because the eigenfunctions \(\psi_{n_{1},n_{2},m}^{(0)}(\xi,\eta,\varphi)\) with \(\pm m\) are degenerate, as is evident from Eq. (22). The relationship between the three quantum numbers \((n_{1},n_{2},m)\) and the principal quantum number \(n\) is given as \[n=n_{1}+n_{2}+m+1. 
\tag{24}\] The functions \(u_{n_{1},m}(\xi/n)\) and \(u_{n_{2},m}(\eta/n)\), which independently describe the dependence of eigenfunctions along \(\xi\) and \(\eta\) coordinates, have the identical form given by \[u_{k,m}\left(x\right)=x^{m/2}\exp\left(-x/2\right)L_{k}^{m}\left(x\right), \tag{25}\] where \(L_{k}^{m}(x)\) is the associated Laguerre polynomials. Physically, the parabolic quantum number \(k=\{n_{1},n_{2}\}\) determines the number of \(u_{k,m}\left(x\right)\) nodes, whereas the magnetic quantum number \(m\) is associated with the boundary condition at \(x=0\). For \(m=0\), \(u_{k,0}\left(0\right)\) decays from \(1\), whereas \(u_{k,m\neq 0}\left(0\right)=0\) for \(m\neq 0\). This characteristic is evident in Fig. 1, which depicts the absolute squared of \(u_{k,m}\left(x\right)\) for various values \((k,m)\). In the presence of the electric field, the Hamiltonian (16) eigenvalues cannot be determined analytically. Consequently, the time-independent non-degenerate perturbation theory is utilized to approximate the required eigenvalues. In this method, it is assumed that the uniform static electric field is sufficiently weak, so that the potential caused by such a field can be regarded as a minor perturbation. In the present study, the eigenvalues of the Hamiltonian (16) are expanded up to the second order of the electric field strength \[E=E^{(0)}+E^{(1)}F+E^{(2)}F^{2}+\mathcal{O}(F^{3}). \tag{26}\] Here \(E^{(0)}\), \(E^{(1)}\) and \(E^{(2)}\) are the zero-, first- and second-order corrections to the eigenvalues, respectively. \(E^{(0)}\) coincides to the eigenvalues of hydrogen atom in the Coulomb field given by Eq. (21). \(E^{(1)}\) and \(E^{(2)}\) are then determined by the time-independent non-degenerate perturbation theory. \(E^{(1)}\) is obtained by evaluating \[E^{(1)} = \langle\psi_{n_{1},n_{2},m}^{(0)}|\frac{\xi-\eta}{2}|\psi_{n_{1},n_{2},m}^{(0)}\rangle\] \[= 2\pi A^{2}\int\limits_{0}^{\infty}\int\limits_{0}^{\infty}u_{n_{ 1},m}^{\ast}\left(\xi/n\right)u_{n_{2},m}^{\ast}\left(\eta/n\right)\frac{\xi- \eta}{2}u_{n_{1},m}\left(\xi/n\right)u_{n_{2},m}\left(\eta/n\right)|J|d\xi d\eta,\] where \(u_{k,m}^{*}\left(x\right)=u_{k,m}\left(x\right)\) denotes the complex conjugate of \(u_{k,m}\left(x\right)\). Note that in the following, the results of the integrations over \(\xi\) and \(\eta\) will be automatically multiplied by the factor \(2\pi\) because the integration over the azimuthal variable, \(\varphi\), always yields the factor \(2\pi\). Substituting the Jacobian given by Eq. (20) into Eq. (27) yields \[E^{(1)}=E_{1}^{(1)}-E_{2}^{(1)} \tag{28}\] with \[E_{1}^{(1)} = \frac{\pi n^{2}A^{2}}{4}\int\limits_{0}^{\infty}\left(\xi/n \right)^{2}u_{n_{1},m}^{2}\left(\xi/n\right)d\left(\xi/n\right)\int\limits_{0 }^{\infty}u_{n_{2},m}^{2}\left(\eta/n\right)d\left(\eta/n\right), \tag{29}\] \[E_{2}^{(1)} = \frac{\pi n^{2}A^{2}}{4}\int\limits_{0}^{\infty}\left(\eta/n \right)^{2}u_{n_{2},m}^{2}\left(\eta/n\right)d\left(\eta/n\right)\int\limits_{0 }^{\infty}u_{n_{1},m}^{2}\left(\xi/n\right)d\left(\xi/n\right). \tag{30}\] Obviously, the integrals \(E_{1}^{(1)}\), and \(E_{2}^{(1)}\) can be rewritten in an unified form as \[I_{k,\ell}=\frac{\pi n^{2}A^{2}}{4}\int\limits_{0}^{\infty}x^{2}u_{k,m}^{2}(x )dx\int\limits_{0}^{\infty}u_{\ell,m}^{2}(y)dy. \tag{31}\] The integral \(I_{k,\ell}\) can be evaluated by using the Eq. 
(12), hence we get \[E_{1}^{(1)} = I_{n_{1},n_{2}}=\frac{1}{4}\left(6n_{1}^{2}+6n_{1}m+6n_{1}+m^{2}+3 m+2\right), \tag{32}\] \[E_{2}^{(1)} = I_{n_{2},n_{1}}=\frac{1}{4}\left(6n_{2}^{2}+6n_{2}m+6n_{2}+m^{2}+3 m+2\right). \tag{33}\] Plugging Eqs. (32) and (33) to Eq. (28), we obtain \[E^{(1)}=\frac{3}{2}n(n_{1}-n_{2}). \tag{34}\] With this first-order correction, the formula describing the eigenvalues' dependence on the electric field strength is given as \[E=-\frac{1}{2n^{2}}+\frac{3}{2}n(n_{1}-n_{2})F+\mathcal{O}(F^{2}). \tag{35}\] The Stark effect described by Eq. (35) is called the linear Stark effect since the splitting of the energy levels is \[\Delta E^{(1)}=\frac{3}{2}n(n_{1}-n_{2})F\propto F. \tag{36}\] Moreover, substates with identical parabolic quantum numbers, \(n_{1}=n_{2}\), do not perceive the applied electric field with the first-order field-induced correction because their distribution of the electrical charge between the \(\xi\) and \(\eta\) coordinates is symmetric, whereas the linear Stark effect derives from the asymmetric distribution of the electrical charge [34, 35]. The second-order correction to the eigenvalues can be computed as follows, \[E^{(2)}=\langle\psi_{n_{1},n_{2},m}^{(0)}|\frac{\xi-\eta}{2}|\psi_{n_{1},n_{2},m}^{(1)}\rangle \tag{37}\] where \(|\psi_{n_{1},n_{2},m}^{(1)}\rangle\) is the first-order correction to the eigenfunctions which could be found in Ref [50] and reads as \[\psi_{n_{1},n_{2},m}^{(1)}(\xi,\eta,\varphi)=\frac{n^{3}A}{4}\left[\Phi_{n_{1 },m}\left(\xi/n\right)u_{n_{2},m}\left(\eta/n\right)-u_{n_{1},m}\left(\xi/n \right)\Phi_{n_{2},m}\left(\eta/n\right)\right]\exp(im\varphi), \tag{38}\] in which \[\Phi_{k,m}(x)=\frac{(k+m)(k+m-1)}{2}u_{k-2,m}(x)-(3n-m-3-2n_{1})(k +m)u_{k-1,m}(x)+\] \[+3ku_{k,m}(x)-(3n-m+1-2n_{1})(k+1)u_{k+1,m}(x)+\frac{(k+1)(k+2)}{ 2}u_{k+2,m}(x). \tag{39}\] Combining Eqs. (22), (39), and (38) again yields \[E^{(2)}=E_{1}^{(2)}+E_{2}^{(2)}, \tag{40}\] where the two integrals \(E_{1}^{(2)}\) and \(E_{2}^{(2)}\) have an unified form as \[K_{i,j}=\frac{\pi n^{7}A^{2}}{16}\left\{\int\limits_{0}^{\infty}x^{ 2}\Phi_{i,m}\left(x\right)u_{i,m}\left(x\right)dx\int\limits_{0}^{\infty}u_{j,m }\left(y\right)u_{j,m}\left(y\right)dy\right.+\\ -\int\limits_{0}^{\infty}x^{2}u_{i,m}\left(x\right)u_{i,m}\left(x \right)dx\int\limits_{0}^{\infty}\Phi_{j,m}\left(y\right)u_{j,m}\left(y\right) dy\right\}. \tag{41}\] Again, the integral \(K_{i,j}\) can be evaluated and derived with the help of Eq. (12) \[K_{i,j}= \frac{n^{3}}{16}\left[4i(i^{2}-1)+6i(i+im+m^{2}+m-1)\right.+\] \[\left.+(m^{2}+3m+2)(3i-3j+2m-3)+18(i^{2}+im+i)(i-j-2n)\right]. \tag{42}\] Substituting \(E_{1}^{(2)}=K_{n_{1},n_{2}}\) and \(E_{2}^{(2)}=K_{n_{2},n_{1}}\) into Eq. (40) yields the second-order correction to the eigenvalues as \[E^{(2)}=-\frac{n^{4}}{16}\left[17n^{2}-3(n_{1}-n_{2})^{2}-9m^{2}+19\right]. \tag{43}\] Consequently, the formula that describes the so-called quadratic Stark effect is obtained \[E=-\frac{1}{2n^{2}}+\frac{3n(n_{1}-n_{2})}{2}F-\frac{n^{4}}{16}\left[17n^{2}-3 (n_{1}-n_{2})^{2}-9m^{2}+19\right]F^{2}+\mathcal{O}(F^{3}). \tag{44}\] In contrast to the linear Stark effect, the quadratic Stark effect involves the splitting of energy levels \[\Delta E^{(2)}=\frac{3n(n_{1}-n_{2})}{2}F-\frac{n^{4}}{16}\left[17n^{2}-3(n_{1 }-n_{2})^{2}-9m^{2}+19\right]F^{2}, \tag{45}\] exists even when the electrical charge distribution is symmetric, \(n_{1}=n_{2}\). 
This indicates that the substates are always able to detect the presence of the applied electric field, despite its weakness. When the hydrogen atom is exposed to a uniform static electric field, the degeneracy of substates with the same principal quantum number \(n\) is completely eliminated [34, 35]. Using the Siegert State (SS) method, we explicitly verify the exactness and applicability of the approximated formula given by the equation (44) using numerical calculations. Thus, we compute the relative deviation specified by \[\sigma=\left|\frac{E_{ana}-E_{num}}{E_{num}}\right|\times 100\%, \tag{46}\] where \(E_{num}\) and \(E_{ana}\) represent the results of numerical and analytical methods, respectively. Figure 2 illustrates explicitly the evaluation for the ground state and the first excited state with three substates of hydrogen induced by a uniform static electric field. For the ground state, it can be observed that in the regime \(0\leq F\leq 0.5\) a.u., the results obtained by the approximated formula and numerical calculation are well-matched, and the relative error is close to \(0\). The discrepancy between two approaches increases as the electric-field strength increases; however, it is still less than 1% at \(F=0.1\) a.u. In Figs.2(c-d), the three substates of the first excited state exhibit the same characteristic. When \(F=0.01\) a.u., the \((0,1,0)\) substate has the maximum relative deviation, below 2%, while the \((1,0,0)\) substate has the smallest relative error, approximately 0.5%. Figure 3 displays the comparison for the excited states with the principle quantum number \(n=\{4,6,8,10\}\). In these instances, relative deviations are less than 3.5%, which is still an acceptable value. ## 4 Conclusion In conclusion, we have revisited the quadratic Stark effect of hydrogen for educational purposes. Our method relies on the time-independent non-degenerate perturbation theory and the general integral formula for associated Laguerre polynomials. We have also investigated the applicability interval of the approximated formula describing the quadratic Stark effect of hydrogen using a numerical technique based on the Siegert State method. The comparison demonstrates that when the applied electric field is suitably weak, the approximated formula agrees well with numerical calculations not only for the ground state, but also for highly excited states as well. The approximated formula can therefore be used to predict the eigenvalues of highly excited states in the presence of an exceedingly weak electric field for future high-precision numerical calculations. In addition, we wish to emphasize that, in quantum mechanics, Figure 2: The first column illustrates the dependence of the eigenvalue, \(E\), on the external field strength, \(F\) for the ground state and the first excited state of the hydrogen atom induced by a static electric field. In the second column, the relative errors between the numerical result obtained by the Siegert State method and the approximated formula are displayed. Note that the solid line represents the numerical results, whereas the squares represent the approximations. perturbation theory is uncomplicated but never trivial. To address the problem of a stronger electric field, it should be noted that higher-order corrections or improved approaches should be considered; however, these are not our pedagogical objectives. 
In conclusion, we anticipate that the results of this study will serve as a valuable resource for undergraduates studying quantum mechanics and be readily accessible to them. This work is supported by the Ministry of Education and Training of Vietnam under grant number B2021-SPS-02-VL. N.D. Vy is thankful to the Van Lang University.
2303.02572
Semantics of multimodal adjoint type theory
We show that contrary to appearances, Multimodal Type Theory (MTT) over a 2-category M can be interpreted in any M-shaped diagram of categories having, and functors preserving, M-sized limits, without the need for extra left adjoints. This is achieved by a construction called "co-dextrification" that co-freely adds left adjoints to any such diagram, which can then be used to interpret the "context lock" functors of MTT. Furthermore, if any of the functors in the diagram have right adjoints, these can also be internalized in type theory as negative modalities in the style of FitchTT. We introduce the name Multimodal Adjoint Type Theory (MATT) for the resulting combined general modal type theory. In particular, we can interpret MATT in any finite diagram of toposes and geometric morphisms, with positive modalities for inverse image functors and negative modalities for direct image functors.
Michael Shulman
2023-03-05T04:01:40Z
http://arxiv.org/abs/2303.02572v5
# Semantics of multimodal adjoint type theory ###### Abstract We show that contrary to appearances, Multimodal Type Theory (MTT) over a 2-category \(\mathcal{M}\) can be interpreted in any \(\mathcal{M}\)-shaped diagram of categories having, and functors preserving, \(\mathcal{M}\)-sized limits, without the need for extra left adjoints. This is achieved by a construction called "co-dextrification" that co-freely adds left adjoints to any such diagram, which can then be used to interpret the "context lock" functors of MTT. Furthermore, if any of the functors in the diagram have right adjoints, these can also be internalized in type theory as negative modalities in the style of FitchTT. We introduce the name Multimodal Adjoint Type Theory (MATT) for the resulting combined general modal type theory. In particular, we can interpret MATT in any finite diagram of toposes and geometric morphisms, with positive modalities for inverse image functors and negative modalities for direct image functors. d + Footnote †: footnote]Email: [email protected] d + Footnote †: footnote]This material is based upon work supported by the Air Force Office of Scientific Research under award number FA9550-21-1-0009. ## 1 Introduction _Modal type theories_ involve type-forming operations, such as the classical \(\Box\) (necessity) and \(\Diamond\) (possibility), whose introduction and elimination rules modify the accessibility of previous hypotheses. The increasing number of modal type theories has led to a need for general frameworks that can be instantiated to any new example, to avoid having to develop the metatheory of each new modal type theory from scratch. After [26, 27], each instantiation of a general modal type theory is determined by a 2-category \(\mathcal{M}\), the "mode theory". Its objects denote "modes", its morphisms generate modal operators relating types at different modes, and its 2-cells govern their interaction. However, the "LSR" theory of [26, 27] is only simply typed, its definitional equality is ill-behaved, and it uses awkward global context operations. The more recent frameworks MTT [12] and FitchTT [11] resolve these problems: they are dependently typed, with a well-behaved definitional equality, and only ever extend the context; all indications suggest their implementability [10, 41]. However, their naive semantics requires the functors interpreting the modal operators to have additional left adjoints ("context locks"), which are not visible internally in the syntax. We will show that this defect is, for the most part, only apparent. Namely, from any suitable \(\mathcal{M}\)-shaped diagram of categories, we construct a new diagram whose functors all _do_ have left adjoints, enabling an interpretation of MTT and FitchTT. Moreover, the _types_ in the new diagram are induced from the original ones, so this interpretation directly yields information about the original diagram. We call this the _co-dextrification_, since it makes existing functors into _right_ adjoints and has a "cofree" universal property. To explain the co-dextrification, consider first _split-context_ modal type theories, e.g. as in [34, 38, 45]. As an example, let \(\mathcal{M}\) have two objects and one morphism \(\mu:p\to q\); we want to interpret modal type theory in a diagram of two categories and a functor \(\mathscr{C}_{\mu}:\mathscr{C}_{p}\to\mathscr{C}_{q}\). 
The split-context theory has ordinary \(p\)-judgments \(\Gamma\models_{p}\mathcal{J}\), but \(q\)-judgments \(\Gamma\mid\Delta\models_{q}\mathcal{J}\) with the context split into a \(p\)-part \(\Gamma\) and a \(q\)-part \(\Delta\), where \(\Delta\) can depend on \(\Gamma\). We consider the types in \(\Gamma\) to implicitly have \(\mathscr{C}_{\mu}\) applied. The modal rules then rearrange these context parts: Figure 0(a) shows the split-context introduction rule for the \(\mu\)-modality. Following [45], the \(q\)-contexts (\(\Gamma\mid\Delta\)) suggest semantics in the comma category \(\widehat{\mathscr{C}}_{q}=(\mathscr{C}_{q}\downarrow\mathscr{C}_{\mu})\), whose objects are triples \((\Gamma,\Gamma\triangleright\Delta,\mathsf{p}_{\Delta})\) where \(\Gamma\in\mathscr{C}_{p}\), \(\Gamma\triangleright\Delta\in\mathscr{C}_{q}\), and \(\mathsf{p}_{\Delta}:\Gamma\triangleright\Delta\to\mathscr{C}_{\mu}(\Gamma)\). This matches the split-context syntax, but can also be thought of as introducing a context lock functor in a "universal" way: the functor \(\widehat{\mathscr{C}}_{\mu}:\mathscr{C}_{p}\to\widehat{\mathscr{C}}_{q}\) sending \(\Gamma\) to \((\Gamma,\mathscr{C}_{\mu}(\Gamma),1_{\mathscr{C}_{\mu}(\Gamma)})\) has a left adjoint \((-)/\mu\), defined by \((\Gamma,\Gamma\triangleright\Delta,\mathsf{p}_{\Delta})/\mu=\Gamma\). Thus, the rule in Figure 0(a) can also be written as in Figure 0(b). For general \(\mathcal{M}\), there is no obvious way to split the context by restricting dependency. Instead, the spiritual generalizations of split-context theories, sometimes called _left-division_ theories (e.g. [32, 31]), annotate each context variable with a morphism of \(\mathcal{M}\) that is implicitly applied to it, and the modal rules modify these annotations. In our simple example, each variable in a \(q\)-context is annotated with \(\mu\) or \(1_{q}\), and the operation \(\Gamma/\mu\) deletes the \(1_{q}\)-annotated variables and uses the others to form a \(p\)-context. For more general \(\mathcal{M}\), when defining \(\Gamma/\nu\), each annotated variable \(x:^{\mu}A\) in \(\Gamma\) is replaced by zero or more variables annotated by a family of morphisms \(\varrho_{i}\) equipped with 2-cells \(\alpha_{i}:\mu\Rightarrow\nu\circ\varrho_{i}\) forming a _left multi-lifting_, i.e. such that for any \(\beta:\mu\Rightarrow\nu\circ\sigma\) there is a unique \(i\) and factorization of \(\beta\) through \(\alpha_{i}\) by some \(\varrho_{i}\Rightarrow\sigma\). Unfortunately, a fully general \(\mathcal{M}\) may not have all left multi-liftings. In LSR, each rule application is instead allowed to choose _any_ morphism \(\varrho\) with \(\alpha:\mu\Rightarrow\nu\circ\varrho\); the problems of LSR stem from the non-uniqueness of such a choice. MTT and FitchTT solve this by delaying the choice of 2-cells, treating \((-)/\mu\) as a _constructor_ of contexts rather than an operation on them that computes. (It is then sometimes written as \(\Gamma.\clubsuit_{\mu}\) or \(\Gamma.\{\mu\}\), but I see no reason not to stick with \(\Gamma/\mu\).) Figure 0(b) shows \(\mu\Theta(-)\) is "right adjoint" to \((-)/\mu\), so the semantics of these theories appears to require the modality functors to have left adjoints. This contrasts with how we interpreted the split-context theory in a comma category, creating a _new_ left adjoint. Some work [39, 24] tried to generalize this by mimicking annotated contexts in semantics, but this was complicated and difficult. 
Instead, we change perspective: rather than regarding an object of \((\mathscr{C}_{q}\downarrow\mathscr{C}_{\mu})\) as an object \(\Gamma\in\mathscr{C}_{p}\) together with an object \(\Gamma\triangleright\Delta\in\mathscr{C}_{q}\) that _depends on \(\mathscr{C}_{\mu}(\Gamma)\)_, we regard it as an object \(\Delta\in\mathscr{C}_{q}\) together with a "specified value of \(\Delta/\mu\)" in \(\mathscr{C}_{p}\), and a _weakening_ substitution from \(\Delta\) to \(\mathscr{C}_{\mu}(\Delta/\mu)\). How a context is built from annotated types -- like the fact that it is built from types at all -- is a property of syntax that doesn't need to be reflected in semantics. We can now generalize to any \(\mathcal{M}\): each \(\widehat{\mathscr{C}}_{q}\) is an _oplax limit_ of the \(\mathscr{C}_{p}\) over a slice 2-category. It remains to specify how to _extend_ such a context by a type, i.e. how do we compute \((\Gamma,x:^{\mu}A)/\nu\) in terms of \(\Gamma/\nu\) and \(A\)? Instead of choosing _one_ pair \((\varrho,\alpha)\) as in LSR, or a universal family of them as in a multi-lifting theory, we use _all_ of them. More precisely, we define \((\Gamma,x:^{\mu}A)/\nu\) to be the extension of \(\Gamma/\nu\) by the _limit_ of \(x:^{\varrho}A\) over all such \((\varrho,\alpha)\). It is unclear whether this can be done syntactically, but semantically it is unproblematic. When a left multi-lifting exists, this limit reduces to the _product_ of \(x:^{\varrho}A\) over the elements of the multi-lifting. And if there are _no_ such \(\varrho\), the limit is a terminal object and \(A\) is simply deleted, as happens to \(\Delta\) in Figure 0(a). This is the essential idea of co-dextrification. It is formally similar to Hofmann's "right adjoint splitting" [15] for strict pullbacks, suggesting it can similarly be regarded as a sort of _coherence theorem_. The co-dextrification does require each \(\mathscr{C}_{p}\) to have, and each \(\mathscr{C}_{\mu}\) to preserve, limits of the size of \(\mathcal{M}\). This is unproblematic if \(\mathcal{M}\) is finite, but modal operators often come in adjoint pairs (e.g. as geometric morphisms of topoi), and as soon as \(\mathcal{M}\) contains a generic adjunction it is infinite. Fortunately, if some Figure 1. Comparison of modal introduction rules \(\mathscr{C}_{\mu}\) has a right adjoint, that adjoint automatically lifts to a _dependent right adjoint_ of \(\widehat{\mathscr{C}}_{\mu}\). Thus, it suffices to apply co-dextrification over a smaller 2-category \(\mathcal{L}\) that generates \(\mathcal{M}\) by adding some right adjoints. The resulting type theory represents the morphisms in \(\mathcal{L}\) by positive modalities as in MTT, but their right adjoints by negative modalities as in FitchTT. (For a particular \(\mathcal{L}\), such a combination appeared in [5].) The positive elimination rules also restrict which morphisms of \(\mathcal{M}\) can appear as "framings": this would be problematic for internalizing functoriality, except for the stronger elimination rule of the negative modalities. We call this theory **Multimodal Adjoint Type Theory (MATT)**. If we regard \(\mathcal{L}\), rather than \(\mathcal{M}\), as the fundamental parameter of MTT, then it restores the symmetry of [26, 27] in which each morphism (of \(\mathcal{L}\)) generates a positive/negative pair of modalities that are automatically adjoint. 
## Acknowledgement I am extremely grateful to Daniel Gratzer, for many long and illuminating conversations about modal type theories, for many concrete suggestions about MATT (including the name), and for careful reading and bugfixes. Dan Licata also contributed useful ideas to some of these conversations. ## 2 Multimodal Adjoint Type Theory For a 2-category \(\mathcal{M}\) we write its objects as \(p,q,r,s,\dots\), its morphisms as \(\mu,\nu,\varrho,\sigma,\dots\), and its 2-cells as \(\alpha,\beta,\dots\). We use \(\circ\) for both composition of morphisms and vertical composition of 2-cells, and write \(\mu\triangleleft\beta\) and \(\alpha\triangleright\nu\) for whiskering. We will not use horizontal composition of 2-cells. Although our semantics will have a mode theory with right adjoints added freely, it is simpler to formulate syntax using an arbitrary 2-category \(\mathcal{M}\) equipped with placeholders for the necessary restrictions. **Definition 2.1**: An **adjoint mode theory** is a 2-category \(\mathcal{M}\) equipped with four classes of morphisms in \(\mathcal{M}\) called **tangible**, **sharp**, **transparent**, and **sinister**, such that * Every identity morphism is transparent and sharp. * If \(\mu:p\to q\) is sharp and \(\nu:q\to r\) is transparent, then \(\nu\circ\mu:p\to r\) is tangible. (Thus, every transparent or sharp morphism, and in particular every identity morphism, is tangible.) * Every sinister morphism \(\mu:p\to q\) has a right adjoint \(\mu^{\dagger}:q\to p\) in \(\mathcal{M}\), with unit \(\eta_{\mu}:1\Rightarrow\mu^{\dagger}\circ\mu\) and counit \(\epsilon_{\mu}:\mu\circ\mu^{\dagger}\Rightarrow 1\). MATT over an adjoint mode theory \(\mathcal{M}\) is MTT [12] over \(\mathcal{M}\) with a few modifications. We write \(x:^{\mu}A\) in place of \(x:(\mu\mid A)\), and \(\mu\box A\) in place of \((\mu\mid A)\). We will show the most important MTT rules, but we omit technical details of substitutions. We now list the substantive modifications. 1. The modalities annotating variables in contexts must be tangible. Tangibility of identities yields ordinary type theories at each mode. The context rules are shown in Figure 2, along with a substitution rule that combines functoriality and naturality (the other substitution rules are more ordinary), and the variable-use rule in Figure 3 along with the rule for substituting keys into variables. 2 Footnote 2: The latter is not fully precise, e.g. we have not defined the “weakening” substitution \(\uparrow^{\alpha}\). In the formal presentation of [12] there is only a zero-variable, to which can be applied substitutions involving 2-cell keys and weakening. 2. The modalities \(\mu\) that annotate domains of function-types \((x:^{\mu}A)\to B\) must be sharp. Sharpness of identities yields ordinary function-types, and tangibility of sharp morphisms is required for the formation and introduction rules. All the rules are shown in Figure 4. Figure 2: Contexts and substitutions in MATT \[\begin{array}{ll}\mathsf{locks}(\diamond_{p})=1_{p}&\mathsf{locks}(\Gamma,x:^{ \mu}A)=\mathsf{locks}(\Gamma)&\mathsf{locks}(\Gamma/\mu)=\mathsf{locks}(\Gamma) \circ\mu\\ \frac{\alpha:\mu\Rightarrow\mathsf{locks}(\Delta)}{\Gamma,x:^{\mu}A,\Delta \vdash x^{\alpha}:A[\uparrow^{\alpha}]}&\frac{\alpha:\mu\Rightarrow\mathsf{ locks}(\Delta)\circ\nu\quad\quad\beta:\nu\Rightarrow\varrho}{(\Gamma,x:^{\mu}A, \Delta)/\varrho\vdash x^{\alpha}[1_{(\Gamma,x:^{\mu}A,\Delta)}/\beta]=x^{( \mathsf{locks}(\Delta)\triangleright\beta)\circ\alpha}}\end{array}\] 3. 
The modalities \(\mu\) that generate positive modal operators \(\mu\boxplus A\) must be sharp, and the "framing" modality in its elimination rule must be transparent. The rules for positive modal operators are shown in Figure 5. The elimination rule requires both transparent morphisms, and composites of transparent and sharp morphisms, to be tangible. 4. Every sinister morphism generates a _negative_ modal operator. These are not in MTT. Their rules are shown in Figure 6; they simplify those of [11] by using right adjoints instead of parametric ones. **Remark 2.A**: Nothing in syntax mandates that the same class of mode morphisms be allowed to annotate domains of function-types and to form positive modalities (the sharp maps), but these classes coincide in all the models I know of. In co-dextrification models, the sharp maps also coincide with the tangible ones. **Remark 2.2**: If \(\mu\) is both sharp and sinister, the formation and introduction rules of \(\mu\diamond\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! \[\begin{array}{ccc}\underline{\mu:p\to q\text{~{}sinister}}&\Gamma\underline{\mu ^{\shortparallel}\vdash A\text{~{}type}_{q}}&\underline{\mu:p\to q\text{~{}sinister }}&\Gamma\underline{\mu^{\shortparallel}\vdash M:A}\\ \omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit \span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span \omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span \omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit \span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span \omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit \span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span \omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit \span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span \omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit \span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span \omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit \span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span \omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit \span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit \span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit \span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit \span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit \span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit \span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit \span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit \span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit 
\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit \span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit \span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit \span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit \span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit \span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit \span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit \span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span \omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit \span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span \omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit \span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span \omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit \span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span \omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit \span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit \span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit \span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit \span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\omit \span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\omit \span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span \omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span \omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit \span\omit\span\omit\omit\span\omit\span\omit\span\omit\span\omit\span\omit \span\omit\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit \span\omit\omit\span\omit\span\omit\span\omit\omit\span\omit\span\omit\omit \span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\omit \span\omit\span\omit\span\omit\omit\span\omit\span\omit\span\omit\span\omit \span\omit\span\omit\omit\span\omit\span\omit\span\omit\omit\span\omit\span \omit\span\omit\omit\span\omit\omit\span\omit\span\omit\span\omit\omit\span \omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit \span\omit\omit\span\omit\span\omit\omit\span\omit\span\omit\span\omit\span \omit\omit\span\omit\span\omit\omit\span\omit\span\omit\span\omit\span\omit \omit\span\omit\omit\span\omit\span\omit\span\omit\span\omit\omit\span\omit \span\omit\span\omit\span\omit\span\omit\omit\span\omit\span\omit\span \omit\omit\span\omit\span\omit\omit\span\omit\span\omit\span\omit\span \omit\span\omit\span\omit\span\omit\omit\span\omit\span\omit\span\omit\omit \span\omit\span\omit\span\omit\span\omit\span\omit\omit\span\omit\span\omit \span\omit\span\omit\span\omit\omit\span\omit\span\omit\omit\span\omit\span \omit\span\omit\span\omit\span\omit\omit\span\omit\omit\span\omit\omit\span \omit\span\omit\span\omit\span\omit\omit\span\omit\omit\span\omit\span \omit\omit\span\omit\omit\span\omit\span\omit\omit\span\omit\omit\span \omit\omit\span\omit\omit\span\omit\omit\span\omit\omit\span\omit\span \omit\omit\span\omit\omit\span\omit\span\omit\omit\span\omit\omit\span\omit \span\omit\omit\span\omit\omit\span\omit\span\omit\omit\span\omit\omit\span \omit\omit\span\omit\omit\span\omit\omit\span\omit\span\omit\omit\span\omit 
\omit\span\omit\omit\span\omit\span\omit\span\omit\span\omit\span\omit\omit \span\omit\span\omit\omit\span\omit\omit\span\omit\span\omit\omit\span \omit\omit\span\omit\omit\span\omit\span\omit\omit\span\omit\omit\span \omit\omit\span\omit\span\omit\omit\span\omit\omit\span\omit\omit\span \omit\span\omit\omit\span\omit\omit\span\omit\span\omit\omit\span\omit \span\omit\span\omit\span\omit\omit\span\omit\span\omit\span\omit\omit \span\omit\span\omit\omit\span\omit\span\omit\omit\span\omit\span\omit \span\omit\omit\span\omit\omit\span\omit\omit\span\omit\span\omit\omit \span\omit\omit\span\omit\span\omit\omit\span\omit\omit\span\omit\omit \span\omit\span\omit\omit\span\omit\span\omit\span\omit\omit\span \omit\omit\span\omit\omit\span\omit\omit\span\omit\span\omit\span\omit \omit\span\omit\span\omit\span\omit\omit\span\omit\omit\span\omit\span \omit\omit\span\omit\span\omit\omit\span\omit\span\omit\omit\span \omit\omit\span\omit\omit\span\omit\span\omit\omit\span\omit\span \omit\omit\span\omit\omit\span\omit\span\omit\span\omit\omit\span\omit \omit\span\omit\span\omit\span\omit\omit\span\omit\span\omit\omit\span \omit\span\omit\omit\span\omit\omit\span\omit\omit\span\omit\omit\span \omit\omit\span\omit\omit\span\omit\omit\span\omit\span\omit\omit\span \omit\omit\span\omit\omit\span\omit\span\omit\omit\span\omit\omit\span \omit\omit\span\omit\span\omit\omit\span\omit\omit\span\omit\span \omit\span\omit\span\omit\span\omit\omit\span\omit\omit\span\omit \omit\span\omit\span\omit\omit\span\omit\omit\span\omit\omit\span\omit \span\omit\omit\span\omit\omit\span\omit\span\omit\omit\span\omit \omit\span\omit\omit\span\omit\omit\span\omit\omit\span\omit\omit \span\omit\omit\span\omit\omit\span\omit\omit\span\omit\omit\span \omit\omit\span\omit\omit\omit\span\omit\omit\span\omit\omit\span\omit \omit\span\omit\omit\omit\span\omit\omit\span\omit\omit\span\omit\omit \span\omit\omit\span\omit\omit\omit\span\omit\omit\span\omit\omit\span \omit\omit\span\omit\omit\span\omit\omit\span\omit\omit\span\omit \omit\span\omit\omit\span\omit\omit\span\omit\omit\span\omit\omit \span\omit\omit\span\omit\omit\span\omit\omit\span\omit\omit\span \omit\omit\span\omit\omit\omit\span\omit\omit\span\omit\omit\span \omit\omit\span\omit\omit\span\omit\omit\span\omit\omit\span\omit \omit\span\omit\span\omit\omit\span\omit\omit\span\omit\omit\span \omit\omit\span\omit\omit\span\omit\omit\span\omit\omit\omit\span\omit \span\omit\omit\span\omit\omit\span\omit\omit\span\omit\omit\span \omit\omit\span\omit\omit\span\omit\omit\span\omit\span\omit\omit \span\omit\span\omit\omit\span\omit\omit\span\omit\omit\span\omit\omit \span\omit\span\omit\omit\span\omit\omit\span\omit\span\omit\omit\span \omit\omit\span\omit\omit\span\omit\omit\span\omit\omit\span\omit \span\omit\omit\span\omit\omit\omit\span\omit\span\omit\omit\span \omit\omit\span\omit\omit\span\omit\omit\span\omit\omit\span\omit \span\omit\omit\span\omit\omit\span\omit\omit\span\omit\omit\span \omit\omit\span\omit\omit\span\omit\omit\span\omit\omit\span\omit \span\omit\omit\span\omit\span\omit\omit\span\omit\omit\span\omit \omit\span\omit\omit\span\omit\span\omit\span\omit\omit\span\omit \omit\span\omit\span\omit\omit\span\omit\span\omit\span\omit\omit \span\omit\span\omit\omit\span\omit\span\omit\omit\span\omit\span\omit \omit\span\omit\omit\span\ We take this image \(\mathcal{L}\) to be the transparent morphisms, \(\mathcal{S}\) to be the sinister morphisms, and the tangible and sharp morphisms to be those that are isomorphic to one of the form \(\mu\circ\nu^{\dagger}\) where 
\(\mu\in\mathcal{L}\) and \(\nu\in\mathcal{S}\). This choice of tangible and sharp morphisms appears necessitated by our semantics (see Lemma 5.5), and \(\mathcal{L}\) is then the largest class of transparent morphisms satisfying the composition axiom. **Assumption 2.4**: _We always consider \(\mathcal{L}[\mathcal{S}^{\dagger}]\) to be an adjoint mode theory as in Example 2.3._ **Remark 2.5**: Of course, a model of MATT over some \(\mathcal{M}\) remains a model if we shrink any of the three classes of morphisms. In particular, the co-dextrification also models MATT instantiated as in (iii) above, and this is the most common use case. We treat the more general class of tangible morphisms \(\mu\circ\nu^{\dagger}\), which is not much more difficult semantically, out of a desire for the greatest reasonable generality. I only know of one application of this generality. Specifically, in MTT the positive modalities and their introduction and elimination rules can be given internal modal function-types involving universes: \[\mu\boxed{\mathcal{L}}\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\ * A **modal natural model**[12, Definition 5.4] is a modal context structure \(\mathscr{D}\) with a morphism \(\tau_{p}\boldsymbol{:}\operatorname{Tm}_{p}\to\operatorname{Ty}_{p}\) in each presheaf category \(\mathcal{P}\mathscr{D}_{p}\), such that for any \(\mu\boldsymbol{:}\,p\to q\) in \(\mathcal{M}\), the transformation \((\mathscr{D}^{\mu})^{*}\tau_{p}\) is representable in \(\mathcal{P}\mathscr{D}_{q}\). (Taking \(\mu=1_{p}\), this implies that each \(\mathscr{D}_{p}\) is a natural model.) We write the comprehension of \(A\in\tau_{p}(\mathscr{D}^{\mu}(\Gamma))\) as \(\mathsf{p}_{A}^{\mu}:\Gamma\triangleright^{\mu}A\to\Gamma\), and write \(\Gamma\triangleright^{1}A\) as \(\Gamma\triangleright A\). A modal natural model provides the structure of contexts, substitutions, and variables in MTT. For MATT, we simply impose the tangibility restriction. **Definition 3.2**: Let \(\mathcal{M}\) be an adjoint mode theory. A modal context structure \(\mathscr{D}:\mathcal{M}^{\mathsf{coop}}\to\mathcal{C}\mathit{at}\) is an **adjoint modal natural model** if we have a morphism \(\tau_{p}\boldsymbol{:}\operatorname{Tm}_{p}\to\operatorname{Ty}_{p}\) in each \(\mathcal{P}\mathscr{D}_{p}\) such that \((\mathscr{D}^{\mu})^{*}\tau_{p}\) is representable for all _tangible_\(\mu\). (Since identities are tangible, each \(\mathscr{D}_{p}\) is still a natural model.) We now similarly modify and unpack the modal type-formers of MTT. For later use, we also unpack the definitions more explicitly from the natural-model style given in [12]. 
**Definition 3.3**: (See [12, SS5.2.1]) A \(\Pi\)**-structure** on an adjoint modal natural model \(\mathscr{D}\) consists of, for any \(\mu\boldsymbol{:}\,p\to q\), and any \(\Gamma\in\mathscr{D}_{q}\) and \(A\in\operatorname{Ty}_{p}(\mathscr{D}^{\mu}(\Gamma))\) with \(B\in\operatorname{Ty}_{q}(\Gamma\triangleright^{\mu}A)\), a type \(\Pi(A,B)\in\operatorname{Ty}_{q}(\Gamma)\) such that \(\Gamma\triangleright\Pi(A,B)\) is a pushforward of \(\Gamma\triangleright^{\mu}A\triangleright B\) along \(\mathsf{p}_{A}\boldsymbol{:}\,\Gamma\triangleright^{\mu}A\to A\), all natural in \(\Gamma\). The notion of "equipped with the structure of a pushforward" is cleanly expressed in the language of [43] by saying we have a _distributivity pullback_ This means that this pullback square is terminal among such pullback squares with \(\mathsf{p}_{A}\) and \(\mathsf{p}_{B}\) fixed (known as "pullbacks around \((\mathsf{p}_{A},\mathsf{p}_{B})\)"). **Definition 3.4**: (See [12, SS5.2.2]) An adjoint modal natural model \(\mathscr{D}\) has **positive modalities** if for any \(\mu\boldsymbol{:}\,p\to q\) we have: 1. For any \(\Gamma\in\mathscr{D}_{q}\) and \(A\in\operatorname{Ty}_{p}(\mathscr{D}^{\mu}(\Gamma))\), we have a type \(\mu\boxplus A\in\operatorname{Ty}_{q}(\Gamma)\) and a map \(j_{\Gamma,A}^{\mu}\boldsymbol{:}\,\Gamma\triangleright^{\mu}A\to\Gamma \triangleright\,(\mu\boxplus A)\) over \(\Gamma\), all varying naturally in \(\Gamma\). 2. For any _transparent_\(\varrho\boldsymbol{:}\,q\to r\) and \(\Gamma\in\mathscr{D}_{r}\) with \(A\in\operatorname{Ty}_{p}(\mathscr{D}^{\varrho\omega}(\Gamma))\), define the dashed map \(\ell\) below by the universal property of pullbacks and full-faithfulness of \(\mathord{\mathord{\mathord{\mathord{\mathord{\mathord{\mathord{\mathord{ \mathord{\mathord{\mathord{\mathord{\ **Example 3.6**: Let \(\mathcal{M}\) be the adjoint mode theory for Two-Level Type Theory from Example 2.5, and let \(\mathscr{C}\) be a _two-level model_ as in [1, Definition 2.8]. If we ignore universes, this means it has two natural models \(\tau^{\mathrm{f}}\boldsymbol{:}\operatorname{Tm}^{\mathrm{f}}\to\operatorname{ Ty}^{\mathrm{f}}\) and \(\tau^{\mathrm{e}}\boldsymbol{:}\operatorname{Tm}^{\mathrm{e}}\to\operatorname{ Ty}^{\mathrm{e}}\), and that \(\tau^{\mathrm{f}}\) is a pullback of \(\tau^{\mathrm{e}}\). Let \(\mathscr{D}:\mathcal{M}^{\mathsf{coop}}\to\mathcal{C}at\) be constant at \(\mathscr{C}\), but where \(\mathscr{D}_{\mathrm{f}}=\mathscr{C}\) is equipped with \(\tau^{\mathrm{f}}\) while \(\mathscr{D}_{\mathrm{e}}=\mathscr{C}\) is equipped with \(\tau^{\mathrm{e}}\). This is an adjoint modal natural model with negative modalities, since the assumption that \(\tau^{\mathrm{f}}\) is a pullback of \(\tau^{\mathrm{e}}\) says exactly that the identity functor \((\mathscr{C},\tau^{\mathrm{e}})\to(\mathscr{C},\tau^{\mathrm{f}})\) has a dependent right adjoint. ## 4 Co-dextrification **Assumption 4.1**: _For all of this section, let \(\mathcal{L}\) be an arbitrary 2-category, let \(\mathscr{C}:\mathcal{L}\to\mathcal{C}at\) be a pseudo-functor, and let \(\kappa\) be an infinite regular cardinal such that \(\mathcal{L}\) is \(\kappa\)-small, each category \(\mathscr{C}_{p}\) has \(\kappa\)-small limits, and each functor \(\mathscr{C}_{\mu}:\mathscr{C}_{p}\to\mathscr{C}_{q}\) preserves \(\kappa\)-small limits. 
Often, \(\kappa\) will be \(\omega\)._ **Remark 4.A**: In fact, it suffices if all the comma categories over which we take limits below, which are constructed from the hom-categories of \(\mathcal{L}\), admit some initial functor from a \(\kappa\)-small category (when \(\kappa=\omega\) this is called being L-finite [33]). For instance, if all the relevant left liftings exist, then these categories have initial objects, hence satisfy this condition no matter how large the hom-categories of \(\mathcal{L}\) are. **Definition 4.2**: For \(r\in\mathcal{L}\), let \(\mathcal{L}/\!\!/r\) denote the **lax slice 2-category**: * Its objects are morphisms \(\mu:p\to r\) in \(\mathcal{L}\). * Its morphisms from \(\mu:p\to r\) to \(\nu:q\to r\) are pairs \((\varrho:p\to q,\;\alpha:\mu\Rightarrow\nu\circ\varrho)\). * Its 2-cells from \((\varrho,\alpha)\) to \((\sigma,\beta)\) are 2-cells \(\gamma:\varrho\Rightarrow\sigma\) such that \((\nu\triangleleft\gamma)\circ\alpha=\beta\). By postcomposition, we have a 2-functor \(\mathcal{L}/\!\!/-:\mathcal{L}\to 2\)-\(\mathcal{C}at\), with projection functors \(\pi_{r}:\mathcal{L}/\!\!/r\to\mathcal{L}\). **Definition 4.3**: For \(r\in\mathcal{L}\), let \(\widehat{\mathscr{C}}_{r}\) denote the **oplax limit** of the \((\mathcal{L}/\!\!/r)\)-shaped diagram \(\mathscr{C}\circ\pi_{r}:\mathcal{L}/\!\!/r\to\mathcal{C}at\) in \(\mathcal{C}at\). Thus, an object \(\boldsymbol{\Gamma}\in\widehat{\mathscr{C}}_{r}\) consists of: 1. For each \(\mu:p\to r\) in \(\mathcal{L}\), an object \(\boldsymbol{\Gamma}^{\mu}\in\mathscr{C}_{p}\). 2. For each \(\varrho:p\to q\) and \(\alpha:\mu\Rightarrow\nu\circ\varrho\), a morphism \(\boldsymbol{\Gamma}^{\alpha}:\boldsymbol{\Gamma}^{\nu}\longrightarrow\mathscr{ C}_{\varrho}(\boldsymbol{\Gamma}^{\mu})\) in \(\mathscr{C}_{q}\). (The notation is abusive, as \(\boldsymbol{\Gamma}^{\alpha}\) depends not just on \(\alpha\) but on the decomposition of its codomain as a composite.) 3. For \(\alpha=1_{\mu}:\mu\Rightarrow\mu\circ 1_{p}\), we have \(\boldsymbol{\Gamma}^{1_{\mu}}=1_{\boldsymbol{\Gamma}^{\mu}}\). 4. For \(\alpha:\mu\Rightarrow\nu\circ\varrho\) and \(\beta:\nu\Rightarrow\varpi\circ\sigma\), we have \(\mathscr{C}_{\sigma}(\boldsymbol{\Gamma}^{\alpha})\circ\boldsymbol{\Gamma}^{ \beta}=\boldsymbol{\Gamma}^{(\beta\triangleright\varrho)\circ\alpha}\), modulo pseudofunctoriality. In particular, taking \(\varrho\) and \(\sigma\) to be identities, for each \(p\in\mathcal{L}\) we have a functor \(\boldsymbol{\Gamma}^{p}\boldsymbol{:}\mathcal{L}(p,r)^{\mathsf{op}}\to\mathscr{ C}_{p}\). 5. For \(\alpha:\mu\Rightarrow\nu\circ\varrho\) and \(\beta:\varrho\Rightarrow\sigma\), we have \(\mathscr{C}_{\beta}(\boldsymbol{\Gamma}^{\mu})\circ\boldsymbol{\Gamma}^{ \alpha}=\boldsymbol{\Gamma}^{(\nu\triangleleft\beta)\circ\alpha}\). Similarly, a morphism \(\boldsymbol{\theta}:\boldsymbol{\Gamma}\to\boldsymbol{\Gamma}\) in \(\widehat{\mathscr{C}}_{r}\) consists of: 1. For each \(\mu:p\to r\), a morphism \(\boldsymbol{\theta}^{\mu}:\boldsymbol{\Gamma}^{\mu}\to\boldsymbol{\Delta}^{\mu}\). * For \(\alpha:\mu\Rightarrow\nu\circ\varrho\), we have \(\mathscr{C}_{\varrho}(\boldsymbol{\theta}^{\mu})\circ\boldsymbol{\Gamma}^{ \alpha}=\boldsymbol{\Delta}^{\alpha}\circ\boldsymbol{\theta}^{\nu}\). 
**Lemma 4.4**: _The categories \(\widehat{\mathscr{C}}_{p}\) are the action on objects of a modal context structure \(\widehat{\mathscr{C}}:\mathcal{L}^{\mathsf{coop}}\to\mathcal{C}at\)._ **Proof.** The functorial action is by composition: \((\widehat{\mathscr{C}}^{\mu}(\boldsymbol{\Gamma}))^{\nu}=\boldsymbol{\Gamma}^ {\mu\circ\nu}\) and \((\widehat{\mathscr{C}}^{\beta}(\boldsymbol{\Gamma}))^{\varrho}=\boldsymbol{ \Gamma}^{\beta\triangleright\varrho}\). \(\Box\) For \(\mu:p\to q\), write \(\mathsf{L}^{\mu}:\widehat{\mathscr{C}}_{q}\to\mathscr{C}_{p}\) for the functor defined by \(\mathsf{L}^{\mu}(\boldsymbol{\Gamma})=\boldsymbol{\Gamma}^{\mu}\). **Lemma 4.5**: _Each \(\widehat{\mathscr{C}}_{p}\) has \(\kappa\)-small limits, and each functor \(\mathsf{L}^{\mu}\) and \(\widehat{\mathscr{C}}^{\mu}\) preserves them. Furthermore:_ * _If each_ \(\mathscr{C}_{p}\) _has some shape of colimits, then so does each_ \(\widehat{\mathscr{C}}_{p}\)_, and each_ \(\mathsf{L}^{\mu}\) _and_ \(\widehat{\mathscr{C}}^{\mu}\) _preserves them._ * _If each_ \(\mathscr{C}_{p}\) _is locally cartesian closed or an elementary topos, so is each_ \(\widehat{\mathscr{C}}_{p}\)_._ * _If each_ \(\mathscr{C}_{p}\) _is locally presentable, and each_ \(\mathscr{C}_{\mu}\) _is accessible, then each_ \(\widehat{\mathscr{C}}_{p}\) _is also locally presentable._ * _If each_ \(\mathscr{C}_{p}\) _is a Grothendieck topos, and each_ \(\mathscr{C}_{\mu}\) _is an inverse or direct image, then so is each_ \(\widehat{\mathscr{C}}_{p}\)_._ **Proof.** The limits, and colimits in (i), are defined pointwise. For (ii), an oplax limit is the category of coalgebras for a finitely continuous comonad on a product category (see [44] or [19, B3.4.6]), and the stated properties are closed under products and such coalgebras (e.g. [19, A4.2.1]). For (iii), by [30, Theorem 5.1.6] accessible categories and functors are closed under limits, and an accessible category is locally presentable if and only if it is cocomplete. For (iv), we use (ii) and (iii), since Grothendieck topoi are the locally presentable elementary topoi [19, C2.2.8], and left and right adjoints are accessible. \(\Box\) **Lemma 4.6**: _For \(\varpi:r\to s\), the functor \(\mathsf{L}^{\varpi}:\widehat{\mathscr{C}}_{s}\to\mathscr{C}_{r}\) has a right adjoint, which we write \(\mathbf{R}_{\varpi}\)._ **Proof.** Given \(\Gamma\in\mathscr{C}_{r}\), we must first define \((\mathbf{R}_{\varpi}\Gamma)^{\nu}\in\mathscr{C}_{p}\) for any \(\nu:p\to s\). Let \((\varpi\downarrow(\nu\circ-))\) be the category of pairs \((\sigma:r\to p,\,\beta:\varpi\Rightarrow\nu\circ\sigma)\). Any such \((\sigma,\beta)\) induces an object \(\mathscr{C}_{\sigma}(\Gamma)\in\mathscr{C}_{p}\); we define \[(\mathbf{R}_{\varpi}\Gamma)^{\nu}=\lim_{(\sigma,\beta)\in(\varpi\downarrow( \nu\circ-))}\mathscr{C}_{\sigma}(\Gamma).\] Now suppose given also \(\varrho:p\to q\) and \(\alpha:\mu\Rightarrow\nu\circ\varrho\). 
Then \((\mathbf{R}_{\varpi}\Gamma)^{\alpha}\) should be a morphism \[(\mathbf{R}_{\varpi}\Gamma)^{\nu}=\lim_{(\sigma,\beta)\in(\varpi\downarrow( \nu\circ-))}\mathscr{C}_{\sigma}(\Gamma)\longrightarrow\lim_{(\sigma,\beta) \in(\varpi\downarrow(\mu\circ-))}\mathscr{C}_{\varrho}\mathscr{C}_{\sigma}( \Gamma)\stackrel{{\cong}}{{\rightarrow}}\mathscr{C}_{\varrho}(( \mathbf{R}_{\varpi}\Gamma)^{\mu}).\] If \((\sigma,\beta)\in(\varpi\downarrow(\mu\circ-))\) indexes a factor \(\mathscr{C}_{\varrho}\mathscr{C}_{\sigma}(\Gamma)\) of this codomain, then \((\varrho\circ\sigma,(\alpha\triangleright\sigma)\circ\beta)\in(\varpi\downarrow( \nu\circ-))\), and the factor \(\mathscr{C}_{\varrho\circ\sigma}(\Gamma)\) of the domain is isomorphic to \(\mathscr{C}_{\varrho}\mathscr{C}_{\sigma}(\Gamma)\). Thus, this determines a map \((\mathbf{R}_{\varpi}\Gamma)^{\alpha}\) between the limits. This defines \(\mathbf{R}_{\varpi}\Gamma\in\widehat{\mathscr{C}}_{s}\). Now we observe that \[(\mathbf{R}_{\varpi}\Gamma)^{\varpi}=\lim_{(\sigma,\beta)\in(\varpi\downarrow( \varpi\circ-))}\mathscr{C}_{\sigma}(\Gamma).\] Since \((1_{r},1_{\varpi})\in(\varpi\downarrow(\varpi\circ-))\), with \(\mathscr{C}_{1_{r}}(\Gamma)\cong\Gamma\), there is a projection \(\epsilon_{\Gamma}:(\mathbf{R}_{\varpi}\Gamma)^{\varpi}\to\Gamma\). We claim this is a universal arrow from \(\mathsf{L}^{\varpi}\). For \(\boldsymbol{\Delta}\in\widehat{\mathscr{C}}_{s}\), a map \(\boldsymbol{\theta:\boldsymbol{\Delta}\to\mathbf{R}_{\varpi}\Gamma}\) consists of, for any \(\nu:p\to r\) and any \((\sigma,\beta)\in(\varpi\downarrow(\nu\circ-))\), a morphism \(\boldsymbol{\theta}^{\nu,(\sigma,\beta)}:\boldsymbol{\Delta}^{\nu}\to\mathscr{C}_{ \sigma}\Gamma\), such that for any \(\alpha:\mu\Rightarrow\nu\circ\varrho\) and \(\beta:\varpi\Rightarrow\mu\circ\sigma\): Taking \(\nu=\varpi\) and \(\sigma=1_{r}\) with \(\beta=1_{\varpi}\) yields the composite \(\boldsymbol{\Delta}^{\varpi}\xrightarrow{\boldsymbol{\theta}^{\varpi}}( \mathbf{R}_{\varpi}\Gamma)^{\varpi}\xrightarrow{\mathrm{cf}}\Gamma\). Moreover, if in the above condition we take \(\mu=\varpi\) with \((\sigma,\beta)=(1_{r},1_{\varpi})\), then the left-hand vertical composite becomes \(\boldsymbol{\theta}^{\nu,(\varrho,\alpha)}\), which is fully general; thus all the components of \(\theta\) are determined by \(\boldsymbol{\theta}^{\varpi,(1,r,1_{\varpi})}\). Now, given \(\vartheta:\boldsymbol{\Delta}^{\varpi}\to\Gamma\), for any \(\nu\) and \((\sigma,\beta)\) we have a composite \(\boldsymbol{\Delta}^{\nu}\xrightarrow{\boldsymbol{\Delta}^{\beta}}\mathscr{C}_ {\sigma}(\boldsymbol{\Delta}^{\varpi})\xrightarrow{\mathscr{C}_{\sigma}( \vartheta)}\mathscr{C}_{\varrho}\Gamma\). The above compatibility condition follows from the axioms of Definition 4.3, so we have a map \(\boldsymbol{\Delta}\to\mathbf{R}_{\varpi}\Gamma\). Its underlying map \(\boldsymbol{\Delta}^{\varpi}\to\Gamma\) is \(\boldsymbol{\Delta}^{\varpi}\xrightarrow{\boldsymbol{\Delta}^{\mathrm{1_{ \varpi}}}}\mathscr{C}_{1_{r}}(\boldsymbol{\Delta}^{\varpi})\xrightarrow{ \mathscr{C}_{1_{\varpi}}(\vartheta)}\mathscr{C}_{1_{\varpi}}\Gamma\cong\Gamma\), which is equal to \(\vartheta\). \(\Box\) When \(\varpi=1_{r}\), we write \(\mathsf{L}^{r}=\mathsf{L}^{1_{r}}\) and \(\mathbf{R}_{r}=\mathbf{R}_{1_{r}}\). **Lemma 4.7**: _The functor \(\mathbf{R}_{r}:\mathscr{C}_{r}\to\widehat{\mathscr{C}}_{r}\) is fully faithful._ **Proof.** When \(\varpi=1_{r}\), the element \((1_{r},1_{1_{r}})\) of \((1_{r}\downarrow(1_{r}\circ-))\) is initial. 
Thus, the domain of \(\epsilon_{\Gamma}\) is evaluation at that object, which is \(\mathscr{C}_{1_{r}}(\Gamma)\cong\Gamma\). So \(\epsilon\) is an isomorphism, hence \(\mathbf{R}_{r}\) is fully faithful. \(\Box\) **Lemma 4.8**: _Let \(\mu:p\to r\), \(\nu:q\to r\), \(\varrho:p\to q\), and \(\alpha:\mu\Rightarrow\nu\circ\varrho\). Then for any \(\Gamma\in\mathscr{C}_{p}\) there is a map \(\mathbf{R}_{\alpha}(\Gamma):\mathbf{R}_{\mu}(\Gamma)\to\mathbf{R}_{\nu}( \mathscr{C}_{\varrho}\Gamma)\), which varies naturally in \(\Gamma\); it is the mate of \(\boldsymbol{\Delta}^{\alpha}:\boldsymbol{\Delta}^{\nu}\to\mathscr{C}_{\varrho }(\boldsymbol{\Delta}^{\mu})\). \(\Box\)_ **Lemma 4.9**: _For any \(\varpi:r\to s\), the functor \(\widehat{\mathscr{C}}^{\varpi}:\widehat{\mathscr{C}}_{s}\to\widehat{\mathscr{ C}}_{r}\) has a right adjoint \(\widehat{\mathscr{C}}_{\varpi}:\widehat{\mathscr{C}}_{r}\to\widehat{\mathscr{C}}_ {s}\)._ **Proof.** Let \(\boldsymbol{\Gamma}\in\widehat{\mathscr{C}}_{s}\) and \(\boldsymbol{\Delta}\in\widehat{\mathscr{C}}_{r}\). By definition, a morphism \(\boldsymbol{\theta}:\widehat{\mathscr{C}}^{\varpi}(\boldsymbol{\Gamma})\to \boldsymbol{\Delta}\) consists of components \(\boldsymbol{\theta}^{\mu}:\boldsymbol{\Gamma}^{\varpi\circ\mu}\to\boldsymbol {\Delta}^{\mu}\) for all \(\mu:p\to r\) such that for any \(\alpha:\mu\Rightarrow\nu\circ\varrho\) the following diagram commutes: (4.10) To give \(\boldsymbol{\theta}^{\mu}\) is equivalent to give \(\overline{\boldsymbol{\theta}^{\mu}}:\boldsymbol{\Gamma}\to\mathbf{R}_{\varpi \circ\mu}(\boldsymbol{\Delta}^{\mu})\). We will define \(\widehat{\mathscr{C}}_{\varpi}(\boldsymbol{\Delta})\in\widehat{\mathscr{C}}_{s}\) as the limit of a diagram of objects \(\mathbf{R}_{\varpi\circ\mu}(\boldsymbol{\Delta}^{\mu})\), so that a map \(\boldsymbol{\Gamma}\to\widehat{\mathscr{C}}_{\varpi}(\boldsymbol{\Delta})\) is determined by maps \(\overline{\boldsymbol{\theta}^{\mu}}\) satisfying a cone condition that is equivalent to (4.10). We start by writing down the naturality square for the transformation \(\mathbf{R}_{\varpi\circ\alpha}\) of Lemma 4.8 at \(\boldsymbol{\theta}^{\mu}\), and composing it with the adjunction unit \(\boldsymbol{\Gamma}\to\mathbf{R}_{\varpi\circ\mu}(\boldsymbol{\Gamma}^{\varpi \circ\mu})\): (4.11) We also transpose (4.10) across \({\sf L}^{\varpi\circ\nu}\dashv{\bf R}_{\varpi\circ\nu}\) to obtain an equivalent condition as at left below: (4.12) The left-bottom composite in (4.11) is equal to the top-right composite at left in (4.12). Thus, we can replace this part of the square at left in (4.12) by the top-right composite in (4.11) to obtain the equivalent condition at right in (4.12). Now we define \(\widehat{\mathscr{C}}_{\varpi}(\boldsymbol{\Delta})\) to be the limit in \(\widehat{\mathscr{C}}_{s}\) of the diagram consisting of the objects \({\bf R}_{\varpi\circ\mu}(\boldsymbol{\Delta}^{\mu})\), for all \(\mu:p\to r\), and the cospans \({\bf R}_{\varpi\circ\nu}(\boldsymbol{\Delta}^{\nu})\to{\bf R}_{\varpi\circ\nu} (\mathscr{C}_{\varrho}(\boldsymbol{\Delta}^{\mu}))\leftarrow{\bf R}_{\varpi \circ\mu}(\boldsymbol{\Delta}^{\mu})\) for all \(\alpha:\mu\Rightarrow\nu\circ\varrho\). Then \(\boldsymbol{\Gamma}\to\widehat{\mathscr{C}}_{\varpi}(\boldsymbol{\Delta})\) consists of \(\overline{\boldsymbol{\theta}^{\mu}}\) satisfying (4.12), hence maps \(\boldsymbol{\theta}^{\mu}\) satisfying (4.10). 
\(\Box\) **Corollary 4.13**: _We have a 2-functor \(\widehat{\mathscr{C}}:{\mathcal{L}}[{\mathcal{L}}^{\dagger}]^{\mathsf{coop}} \to{\mathcal{C}}\mathit{at}\), with \(\widehat{\mathscr{C}}^{\mu^{\dagger}}=\widehat{\mathscr{C}}_{\mu}\). In particular, considering only the right adjoints, we have a pseudofunctor \(\widehat{\mathscr{C}}:{\mathcal{L}}\to{\mathcal{C}}\mathit{at}\). \(\Box\)_ **Lemma 4.14**: _The functors \({\sf L}^{r}:\widehat{\mathscr{C}}_{r}\to\mathscr{C}_{r}\) are a pseudonatural transformation of pseudofunctors \({\mathcal{L}}\to{\mathcal{C}}\mathit{at}\)._ **Proof.** Let \(\varpi:r\to s\) and \(\boldsymbol{\Gamma}\in\widehat{\mathscr{C}}_{r}\); we must show that \(\widehat{\mathscr{C}}_{\varpi}(\boldsymbol{\Gamma})^{1_{s}}\cong\mathscr{C}_{ \varpi}(\boldsymbol{\Gamma}^{1_{s}})\). Since \({\sf L}^{s}={\sf L}^{1_{s}}\) preserves \(\kappa\)-small limits, \(\widehat{\mathscr{C}}_{\varpi}(\boldsymbol{\Gamma})^{1_{s}}\) is the limit of the diagram consisting of the objects \(({\bf R}_{\varpi\circ\mu}(\boldsymbol{\Gamma}^{\mu}))^{1_{s}}\), for all \(\mu:p\to r\), and the analogous cospans. And by definition of \({\bf R}_{\varpi\circ\mu}\), each of these objects is the limit \[({\bf R}_{\varpi\circ\mu}(\boldsymbol{\Gamma}^{\mu}))^{1_{s}}=\lim_{(\sigma, \beta)\in((\varpi\circ\mu)\downarrow(1_{s}\circ-))}\mathscr{C}_{\sigma}( \boldsymbol{\Gamma}^{\mu}).\] But \(((\varpi\circ\mu)\downarrow(1_{s}\circ-))\) has an initial object \((\varpi\circ\mu,1_{\varpi\circ\mu})\), so this limit is isomorphic to \(\mathscr{C}_{\varpi\circ\mu}(\boldsymbol{\Gamma}^{\mu})\). A similar argument applies to the apices of the cospans, so \(\widehat{\mathscr{C}}_{\varpi}(\boldsymbol{\Gamma})^{1_{s}}\) is the limit of the diagram consisting of the objects \(\mathscr{C}_{\varpi\circ\mu}(\boldsymbol{\Gamma}^{\mu})\), for all \(\mu:p\to r\), and the cospans \(\mathscr{C}_{\varpi\circ\nu}(\boldsymbol{\Gamma}^{\nu})\to\mathscr{C}_{\varpi \circ\nu\varrho}(\boldsymbol{\Gamma}^{\mu})\leftarrow\mathscr{C}_{\varpi \circ\mu}(\boldsymbol{\Gamma}^{\mu})\) for all \(\alpha:\mu\Rightarrow\nu\circ\varrho\). However, there is a canonical such object where \(\mu=1_{r}\), and for any other \(\mu\) the 2-cell \(1_{\mu}:\mu\Rightarrow 1_{r}\circ\mu\) determines a canonical cospan \(\mathscr{C}_{\varpi}(\boldsymbol{\Gamma}^{1_{s}})\to\mathscr{C}_{\varpi \circ 1_{r}\circ\mu}(\boldsymbol{\Gamma}^{\mu})\stackrel{{ \leftarrow}}{{\leftarrow}}\mathscr{C}_{\varpi\circ\mu}(\boldsymbol{\Gamma}^{ \mu})\) in which the right-hand leg is an identity. Thus, the limit of this diagram is isomorphic to \(\mathscr{C}_{\varpi}(\boldsymbol{\Gamma}^{1_{s}})\). \(\Box\) **Lemma 4.15**: _The functors \({\bf R}_{\tau}:\mathscr{C}_{r}\to\widehat{\mathscr{C}}_{r}\) are lax natural, by doctrinal adjunction [20]. \(\Box\)_ ## 5 MATT in the co-dextrification We now show that for suitable \(\mathscr{C}\), the co-dextrification \(\widehat{\mathscr{C}}\) models MATT over \({\mathcal{L}}[{\mathcal{S}}^{\dagger}]\) (recall Assumption 2.4). In fact, we use only its abstract properties; this makes our arguments cleaner and more general. ### Adjoint modal pre-models Recall that a **natural pseudo-model**[40, Appendix A] is a strict natural transformation \(\tau:{\rm Tm}\to{\rm Ty}\) between groupoid-valued pseudofunctors \({\rm Tm},{\rm Ty}:\mathscr{D}^{\mathsf{op}}\to\mathcal{G}pd\) that has discrete fibers and is representable. **Definition 5.1**: Let \({\mathcal{L}}\) be a 2-category with a class \({\mathcal{S}}\) of morphisms. 
An **adjoint modal pre-model** is: 1. A modal context structure \(\widehat{\mathscr{C}}:\mathcal{L}[\mathcal{L}^{\dagger}]^{\mathsf{coop}}\to\mathcal{C}\mathit{at}\), such that each \(\widehat{\mathscr{C}}_{p}\) is locally cartesian closed. As before, we write its action on morphisms as \(\widehat{\mathscr{C}}^{\mu}\), and we write \(\widehat{\mathscr{C}}_{\mu}=\widehat{\mathscr{C}}^{\mu^{\dagger}}\). 2. A pseudofunctor \(\mathscr{C}:\mathcal{L}[\mathcal{S}^{\dagger}]\to\mathcal{C}\mathit{at}\), with action on morphisms \(\mathscr{C}_{\mu}\). 3. A pseudonatural transformation \(\mathsf{L}:\widehat{\mathscr{C}}\to\mathscr{C}\) between pseudofunctors \(\mathcal{L}\to\mathcal{C}\mathit{at}\). To be covariant on \(\mathcal{L}\), we take the right adjoints in \(\widehat{\mathscr{C}}\) but the left adjoints in \(\mathscr{C}\); thus \(\mathscr{C}_{\mu}\circ\mathsf{L}^{p}\cong\mathsf{L}^{q}\circ\widehat{\mathscr{C}}_{\mu}\) for \(\mu:p\to q\). 4. Each functor \(\mathsf{L}^{p}:\widehat{\mathscr{C}}_{p}\to\mathscr{C}_{p}\) preserves finite limits and has a fully faithful right adjoint \(\mathsf{R}_{p}\). 5. Each category \(\mathscr{C}_{p}\) is a natural pseudo-model \((\mathscr{C}_{p},\tau_{p})\). **Example 5.2**: If \(\mathscr{C}:\mathcal{L}\to\mathcal{C}at\) is a pseudofunctor such that each \(\mathscr{C}_{p}\) is locally cartesian closed with \(\kappa\)-small limits, each functor \(\mathscr{C}_{\mu}\) preserves \(\kappa\)-small limits, and \(\mathscr{C}_{\mu}\) has a right adjoint if \(\mu\in\mathcal{S}\), then the co-dextrification \(\widehat{\mathscr{C}}\) extends it to an adjoint modal pre-model. **Remark 5.3**: If the functors \(\mathsf{L}^{p}\) are identities, then Definition 5.1 is just an adjoint modal context structure \(\widehat{\mathscr{C}}:\mathcal{L}[\mathcal{L}^{\dagger}]^{\mathsf{coop}}\to\mathcal{C}at\) of locally cartesian closed natural pseudo-models such that \(\widehat{\mathscr{C}}_{\mu}\) has a right adjoint when \(\mu\in\mathcal{S}\). In this case, the results we will prove in this section specialize to a straightforward modal version of [29], for the case when the lock functors already exist but the type formers still need to be strictified. Local cartesian closure will be used for \(\Pi\)-types, of course, but is also used in the method of [29] to manipulate local universes. To that end, we observe that \(\mathscr{C}\) is closed under the pushforwards in \(\widehat{\mathscr{C}}\). **Lemma 5.4**: _In an adjoint modal pre-model, if \(A\xrightarrow{f}B\xrightarrow{g}C\) are morphisms such that \(f\) is a pullback of a map in the image of \(\mathsf{R}_{p}\), then the pushforward \(g_{*}(f)\) is also a pullback of a map in the image of \(\mathsf{R}_{p}\). That is, \(\mathsf{R}_{p}:\mathscr{C}_{p}\to\widehat{\mathscr{C}}_{p}\) identifies \(\mathscr{C}_{p}\) with a **local exponential ideal** in \(\widehat{\mathscr{C}}_{p}\)._ **Proof.** The pullbacks of maps in the image of \(\mathsf{R}_{p}\) are a left-exact-reflective subcategory of \(\widehat{\mathscr{C}}_{p}/C\); the reflection \(\mathsf{L}^{/C}\) applies \(\mathsf{L}^{p}\), re-embeds with \(\mathsf{R}_{p}\), and pulls back along the unit \(C\to\mathsf{R}_{p}\mathsf{L}^{p}C\). For any \(h:D\to C\), morphisms \(h\to g_{*}(f)\) in \(\widehat{\mathscr{C}}_{p}/C\) are equivalent to morphisms \(g^{*}(h)\to f\) in \(\widehat{\mathscr{C}}_{p}/B\). By assumption on \(f\), any such morphism factors uniquely through \(\mathsf{L}^{/B}(g^{*}(h))\), which is \(g^{*}(\mathsf{L}^{/C}(h))\) by left-exactness of \(\mathsf{L}^{p}\). Thus, it also corresponds to a map \(\mathsf{L}^{/C}(h)\to g_{*}(f)\).
Taking \(h=g_{*}(f)\) we conclude that \(g_{*}(f)\cong\mathsf{L}^{/C}(g_{*}(f))\) and hence lies in the subcategory. \(\Box\) ### The left adjoint splitting The **left adjoint splitting** [29] of a natural pseudo-model \((\mathscr{D},\tau)\) is \(\tau^{!}:\mathrm{Tm}^{!}\to\mathrm{Ty}^{!}\) where: * An element \(A\in\mathrm{Ty}^{!}(\Gamma)\) consists of an object \(\mathsf{V}_{A}\in\mathscr{D}\), a type \(\mathsf{E}_{A}\in\mathrm{Ty}(\mathsf{V}_{A})\), and a morphism \(\ulcorner A\urcorner:\Gamma\to\mathsf{V}_{A}\). We call \(\mathsf{V}_{A}\) the _local universe_. * An element \((A,a)\in\mathrm{Tm}^{!}(\Gamma)\) consists of \(\mathsf{V}_{A}\in\mathscr{D}\), a type \(\mathsf{E}_{A}\in\mathrm{Ty}(\mathsf{V}_{A})\), and \(a:\Gamma\to\mathsf{V}_{A}\triangleright\mathsf{E}_{A}\). * The map \(\tau^{!}\) sends \((A,a)\) to the element of \(\mathrm{Ty}^{!}(\Gamma)\) with the same \(\mathsf{V}_{A}\) and \(\mathsf{E}_{A}\), and with \(\ulcorner A\urcorner=\mathsf{p}_{\mathsf{E}_{A}}\circ a\). Since \(\tau^{!}\) is the pullback of \(\tau\) along the map \(\mathrm{Ty}^{!}\to\mathrm{Ty}\) sending \(A\) to \(\mathsf{E}_{A}[\ulcorner A\urcorner]\), it is a natural model. Given an adjoint modal pre-model, we define \(\widehat{\tau}^{!}_{p}=(\mathsf{L}^{p})^{*}\tau^{!}_{p}\). Thus, an element \(A\in\widehat{\mathrm{Ty}}^{!}_{p}(\Gamma)\) consists of an object \(\mathsf{V}_{A}\in\mathscr{C}_{p}\), a type \(\mathsf{E}_{A}\in\mathrm{Ty}_{p}(\mathsf{V}_{A})\), and a morphism \(\ulcorner A\urcorner:\mathsf{L}^{p}\Gamma\to\mathsf{V}_{A}\), or equivalently \(\ulcorner A\urcorner:\Gamma\to\mathsf{R}_{p}\mathsf{V}_{A}\). **Lemma 5.5**: _If \((\widehat{\mathscr{C}},\mathscr{C})\) is an adjoint modal pre-model over \((\mathcal{L},\mathcal{S})\), then \((\widehat{\mathscr{C}},\widehat{\tau}^{!})\) is an adjoint modal natural model over \(\mathcal{L}[\mathcal{S}^{\dagger}]\)._ **Proof.** The tangible morphisms in \(\mathcal{L}[\mathcal{S}^{\dagger}]\) are \(\mu\circ\nu^{\dagger}\), for \(\mu:q\to r\) in \(\mathcal{L}\) and \(\nu:q\to p\) in \(\mathcal{S}\). Thus, we must show that in this case \((\widehat{\mathscr{C}}_{\nu}\circ\widehat{\mathscr{C}}^{\mu})^{*}\widehat{\tau}^{!}_{p}=(\mathsf{L}^{p}\circ\widehat{\mathscr{C}}_{\nu}\circ\widehat{\mathscr{C}}^{\mu})^{*}\tau^{!}_{p}\) is representable. But by pseudonaturality of \(\mathsf{L}\), we have \(\mathsf{L}^{p}\circ\widehat{\mathscr{C}}_{\nu}\circ\widehat{\mathscr{C}}^{\mu}\cong\mathscr{C}_{\nu}\circ\mathsf{L}^{q}\circ\widehat{\mathscr{C}}^{\mu}\), and this has a right adjoint \(\widehat{\mathscr{C}}_{\mu}\circ\mathsf{R}_{q}\circ\mathscr{C}_{\nu^{\dagger}}\). Finally, restriction along any functor with a right adjoint preserves representability. \(\Box\) Explicitly, the comprehension \(\Gamma\triangleright^{\mu\circ\nu^{\dagger}}A\) is the pullback (5.6) Here the bottom row consists of three adjunction units followed by \(\ulcorner A\urcorner\). It is shown by example in [29] that "weakly stable" categorical structure on a natural pseudo-model can usually be enhanced to strictly stable structure on its left adjoint splitting. We observe that this remains true for the mode-local type theories in an adjoint modal pre-model. **Theorem-Schema 5.7**: _If \((\widehat{\mathscr{C}},\mathscr{C})\) is an adjoint modal pre-model, then for any of the type constructors considered in [29], if \((\mathscr{C},\tau)\) has weakly stable structure, then \((\widehat{\mathscr{C}},\widehat{\tau}^{!})\) has strictly stable structure._ **Proof.** Since \(\mathsf{L}^{p}\) preserves finite limits, any weakly stable or pseudo-stable structure on \(\tau_{p}\) lifts to \((\mathsf{L}^{p})^{*}\tau_{p}\).
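Recall the mechanism that makes the splitting strict (a sketch in the notation above, for a substitution \(\sigma:\Delta\to\Gamma\)): substitution in \(\mathrm{Ty}^{!}\) acts on local-universe data by precomposition alone,
\[A[\sigma]\;=\;\big(\mathsf{V}_{A},\ \mathsf{E}_{A},\ \ulcorner A\urcorner\circ\sigma\big),\qquad A[\sigma][\delta]\;=\;A[\sigma\circ\delta],\qquad A[1_{\Gamma}]\;=\;A,\]
so \(\mathrm{Ty}^{!}\) is strictly functorial even when comprehension in \(\mathscr{D}\) is only pseudo-functorial.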
Therefore, by [29], \(((\mathsf{L}^{p})^{*}\tau_{p})^{!}\) has strictly stable structure. If we identify \(\mathscr{C}_{p}\) with the image of \(\mathsf{R}_{p}\), then \(\widehat{\mathrm{Ty}}^{!}\subseteq((\mathsf{L}^{p})^{*}\mathrm{Ty}_{p})^{!}\) consists of the types whose local universes lie in \(\mathscr{C}_{p}\). By Lemma 5.4, \(\mathscr{C}_{p}\) is closed under all the local universe manipulations of [29]; hence \(\widehat{\tau}^{!}\) is closed under the strictly stable structure. \(\Box\) For the modal type formers, the "weakly stable" structure exists on \(\mathscr{C}\) alone; thus we give its structure a name. **Definition 5.8**: A **modal pre-model** over an adjoint mode theory \(\mathcal{M}\) is a pseudofunctor \(\mathscr{C}:\mathcal{M}\to\mathcal{C}\mathit{at}\) such that each \(\mathscr{C}_{p}\) is a natural pseudo-model. ### \(\Pi\)-structure To define the input for the coherence theorem for \(\Pi\)-structure, it is convenient to use the following notion. **Definition 5.9**: A morphism \(f:\Gamma\to\Delta\) in a natural pseudo-model is **type-exponentiable** if for any \(B\in\mathrm{Ty}(\Gamma)\), the pushforward of \(\Gamma\triangleright B\) along \(f\) is isomorphic to a type projection \(\Delta\triangleright\Pi(f,B)\to\Delta\). In other words, there is some \(\Pi(f,B)\in\mathrm{Ty}(\Delta)\) and a distributivity pullback around \((f,\mathsf{p}_{B})\) with vertex \(\Delta\triangleright\Pi(f,B)\). (diagram omitted) **Definition 5.10**: A modal pre-model \(\mathscr{C}\) has **pre-\(\Pi\)-structure** over \(\mathcal{M}\) if for every tangible \(\omega:p\to r\) in \(\mathcal{M}\) and every \(A\in\mathrm{Ty}_{p}(\mathsf{V})\) with \(\mathsf{V}\in\mathscr{C}_{p}\), every pullback of \(\mathscr{C}_{\omega}(\mathsf{p}_{A})\) in \(\mathscr{C}_{r}\) is type-exponentiable. **Lemma 5.11**: _Let \(\mathsf{L}:\mathscr{A}\to\mathscr{B}\) preserve pullbacks and have a right adjoint \(\mathsf{R}\) with unit \(\eta\), let \(f\) be a morphism of \(\mathscr{A}\) and \(g\) a morphism of \(\mathscr{B}\), and suppose given a distributivity pullback around \((\mathsf{L}f,g)\) in \(\mathscr{B}\). Then applying \(\mathsf{R}\) and pulling back along the adjunction units yields a distributivity pullback around \((f,\eta^{*}(\mathsf{R}g))\) in \(\mathscr{A}\)._
**Proof.** Assembling these data into a cube (where every face except the left- and right-hand ones is a pullback), the claim is that the back face is a distributivity pullback in \(\mathscr{A}\). To show this, suppose given a pullback around \((f,\eta^{*}(\mathsf{R}g))\): Since the top and bottom faces of the above cube are pullbacks, as is the right-hand half of the top face, it will suffice to show that there are unique dashed arrows completing the following cube: Transposing across the adjunction \(\mathsf{L}\dashv\mathsf{R}\), this becomes But since \(\mathsf{L}\) preserves pullbacks, the back face of this is a pullback around \((\mathsf{L}f,g)\), so by assumption the unique dashed arrows exist. \(\Box\) **Theorem 5.12**: _If \((\widehat{\mathscr{C}},\mathscr{C})\) is an adjoint modal pre-model over \((\mathcal{L},\mathcal{S})\) such that \(\mathscr{C}\) has pre-\(\Pi\)-structure over \(\mathcal{L}[\mathcal{S}^{\dagger}]\), then \((\widehat{\mathscr{C}},\widehat{\tau}^{!})\) has \(\Pi\)-structure over \(\mathcal{L}[\mathcal{S}^{\dagger}]\)._ **Proof.** Suppose we have \(\mu:q\to r\) in \(\mathcal{L}\) and \(\nu:q\to p\) in \(\mathcal{S}\), and also \(\Gamma\in\widehat{\mathscr{C}}_{r}\) and \(A\in\widehat{\mathrm{Ty}}^{!}_{p}(\widehat{\mathscr{C}}_{\nu}\widehat{\mathscr{C}}^{\mu}\Gamma)=\mathrm{Ty}^{!}_{p}(\mathsf{L}^{p}\widehat{\mathscr{C}}_{\nu}\widehat{\mathscr{C}}^{\mu}\Gamma)\) with \(B\in\widehat{\mathrm{Ty}}^{!}_{r}(\Gamma\triangleright^{\mu\circ\nu^{\dagger}}A)=\mathrm{Ty}^{!}_{r}(\mathsf{L}^{r}(\Gamma\triangleright^{\mu\circ\nu^{\dagger}}A))\). Applying \(\mathsf{L}^{r}\) to the defining pullback (5.6) of \(\Gamma\triangleright^{\mu\circ\nu^{\dagger}}A\), and using pseudonaturality and the fact that \(\mathsf{L}^{q}\mathsf{R}_{q}\cong 1\), we have a pullback (5.13) Thus, Definition 5.10 says \(\mathsf{L}^{r}(\widehat{\mathsf{p}}_{A})\) is type-exponentiable, hence the pushforward of \(B\) along it is a type projection; it remains to construct a local universe making it strictly stable.
Write \(\omega=\mu\circ\nu^{\dagger}\). Let \(\mathsf{V}_{\Pi(A,B)}\) be the universal object with maps \(\pi_{A}:\mathsf{V}_{\Pi(A,B)}\to\mathscr{C}_{\omega}(\mathsf{V}_{A})\) and \(\pi_{B}:\pi_{A}^{*}(\mathscr{C}_{\omega}(\mathsf{V}_{A}\triangleright\mathsf{E}_{A}))\to\mathsf{V}_{B}\). (As in [29], this can be constructed using the locally cartesian closed structure of \(\mathscr{C}_{r}\).) By Definition 5.10, \(\pi_{A}^{*}(\mathscr{C}_{\omega}(\mathsf{p}_{\mathsf{E}_{A}}))\) is type-exponentiable, so the pushforward of \(\mathsf{E}_{B}[\pi_{B}]\in\mathrm{Ty}_{r}(\pi_{A}^{*}(\mathscr{C}_{\omega}(\mathsf{V}_{A}\triangleright\mathsf{E}_{A})))\) along it is represented by a type \(\mathsf{E}_{\Pi(A,B)}\in\mathrm{Ty}_{r}(\mathsf{V}_{\Pi(A,B)})\); that is, we have a distributivity pullback. Now the bottom map in (5.13) and \(\ulcorner B\urcorner:\mathsf{L}^{r}(\Gamma\triangleright^{\omega}A)\to\mathsf{V}_{B}\) induce a map \(\ulcorner\Pi(A,B)\urcorner:\mathsf{L}^{r}\Gamma\to\mathsf{V}_{\Pi(A,B)}\). Together, these data define \(\Pi(A,B)\in\widehat{\mathrm{Ty}}^{!}_{r}(\Gamma)\), such that \(\mathsf{L}^{r}\Gamma\triangleright\mathsf{E}_{\Pi(A,B)}[\ulcorner\Pi(A,B)\urcorner]\) is a pushforward of \(\mathsf{L}^{r}(\Gamma\triangleright^{\omega}A)\triangleright\mathsf{E}_{B}[\ulcorner B\urcorner]\) along \(\mathsf{L}^{r}(\Gamma\triangleright^{\omega}A)\to\mathsf{L}^{r}\Gamma\). Moreover, the pullback-stability of pushforwards (i.e. the Beck-Chevalley condition for dependent products) yields a distributivity pullback The comprehension \(\Gamma\triangleright\Pi(A,B)\) in \(\widehat{\mathscr{C}}_{r}\) is defined by applying \(\mathsf{R}_{r}\) to this and pulling back along the unit \(\Gamma\to\mathsf{R}_{r}\mathsf{L}^{r}\Gamma\). Thus, Lemma 5.11 implies we have a distributivity pullback giving the desired universal property of \(\Pi(A,B)\). \(\Box\) ### Positive modalities **Definition 5.14**: In a natural pseudo-model, a map \(f:\Gamma\to\Delta\) is **anodyne** if for any \(B\in\mathrm{Ty}(\Delta)\) and any \(g:\Gamma\to\Delta\triangleright B\) lifting \(f\), there exists a diagonal filler: A map is **stably anodyne** if any pullback of it is anodyne. In homotopical terms, the anodyne maps are those with the left lifting property against all type projections. **Definition 5.15**: A modal pre-model \(\mathscr{C}\) has **positive pre-modalities** if for any _sharp_ \(\mu:p\to q\) and \(\Gamma\in\mathscr{C}_{p}\) with \(A\in\mathrm{Ty}_{p}(\Gamma)\), there exists \(\mu\boxplus A\in\mathrm{Ty}_{q}(\mathscr{C}_{\mu}\Gamma)\) and a map \(i^{\mu}_{\Gamma,A}:\mathscr{C}_{\mu}(\Gamma\triangleright A)\to\mathscr{C}_{\mu}\Gamma\triangleright(\mu\boxplus A)\) over \(\mathscr{C}_{\mu}\Gamma\), such that for any _transparent_ \(\varrho:q\to r\), the map \(\mathscr{C}_{\varrho}(i^{\mu}_{\Gamma,A})\) is stably anodyne. **Lemma 5.16**: _In an adjoint modal pre-model, let \(\theta:\Gamma\to\Delta\) be a map in \(\widehat{\mathscr{C}}_{p}\). If \(\mathsf{L}^{p}\theta\) is anodyne in \(\mathscr{C}_{p}\), then \(\theta\) is anodyne in \(\widehat{\mathscr{C}}_{p}\)._ **Proof.** Suppose given \(B\in\widehat{\mathrm{Ty}}^{!}_{p}(\Delta)=\mathrm{Ty}^{!}_{p}(\mathsf{L}^{p}\Delta)\), and a commutative square as at left below. It suffices to find a filler for the outer rectangle at left; and by adjunction, this is equivalent to finding a filler in the square at right. But such a filler exists precisely because \(\mathsf{L}^{p}\theta\) is anodyne.
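Concretely, the adjunction step here is the slice-level transposition (a sketch, computing \(\Delta\triangleright B\) from \(\mathsf{R}_{p}\) as in (5.6)):
\[\widehat{\mathscr{C}}_{p}/\Delta\big(\Gamma,\ \Delta\triangleright B\big)\;\cong\;\mathscr{C}_{p}/\mathsf{L}^{p}\Delta\big(\mathsf{L}^{p}\Gamma,\ \mathsf{L}^{p}\Delta\triangleright B\big),\]
so a filler on the right, which exists since \(\mathsf{L}^{p}\theta\) is anodyne, transposes back to the required filler on the left.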
\(\Box\) **Theorem 5.17**: _If \((\widehat{\mathscr{C}},\mathscr{C})\) is an adjoint modal pre-model over \((\mathcal{L},\mathcal{S})\) such that \(\mathscr{C}\) has positive pre-modalities over \(\mathcal{L}[\mathcal{S}^{\dagger}]\), then \((\widehat{\mathscr{C}},\widehat{\tau}^{!})\) has positive modalities over \(\mathcal{L}[\mathcal{S}^{\dagger}]\)._ **Proof.** The sharp morphisms in \(\mathcal{L}[\mathcal{S}^{\dagger}]\) are \(\mu\circ\nu^{\dagger}\), where \(\mu:q\to r\) is in \(\mathcal{L}\) and \(\nu:q\to p\) is in \(\mathcal{S}\). Suppose given these and also \(\Gamma\in\widehat{\mathscr{C}}_{r}\) and \(A\in\widehat{\mathrm{Ty}}^{!}_{p}(\widehat{\mathscr{C}}^{\mu\circ\nu^{\dagger}}\Gamma)=\mathrm{Ty}^{!}_{p}(\mathsf{L}^{p}(\widehat{\mathscr{C}}^{\mu\circ\nu^{\dagger}}\Gamma))\), hence \(\mathsf{E}_{A}\in\mathrm{Ty}_{p}(\mathsf{V}_{A})\) with \(\ulcorner A\urcorner:\mathsf{L}^{p}\widehat{\mathscr{C}}^{\mu\circ\nu^{\dagger}}(\Gamma)\to\mathsf{V}_{A}\). By Definition 5.15, we have \((\mu\circ\nu^{\dagger})\boxplus\mathsf{E}_{A}\in\mathrm{Ty}_{r}(\mathscr{C}_{\mu}\mathscr{C}_{\nu^{\dagger}}\mathsf{V}_{A})\) and \(i^{\mu\circ\nu^{\dagger}}_{\mathsf{V}_{A},\mathsf{E}_{A}}:\mathscr{C}_{\mu}\mathscr{C}_{\nu^{\dagger}}(\mathsf{V}_{A}\triangleright\mathsf{E}_{A})\to\mathscr{C}_{\mu}\mathscr{C}_{\nu^{\dagger}}\mathsf{V}_{A}\triangleright((\mu\circ\nu^{\dagger})\boxplus\mathsf{E}_{A})\) over \(\mathscr{C}_{\mu}\mathscr{C}_{\nu^{\dagger}}\mathsf{V}_{A}\), such that \(\mathscr{C}_{\varrho}(i^{\mu\circ\nu^{\dagger}}_{\mathsf{V}_{A},\mathsf{E}_{A}})\) is stably anodyne for any transparent \(\varrho\). We define \(\mathsf{V}_{(\mu\circ\nu^{\dagger})\boxplus A}=\mathscr{C}_{\mu}\mathscr{C}_{\nu^{\dagger}}\mathsf{V}_{A}\) and \(\mathsf{E}_{(\mu\circ\nu^{\dagger})\boxplus A}=(\mu\circ\nu^{\dagger})\boxplus\mathsf{E}_{A}\). Now \(\ulcorner A\urcorner:\mathsf{L}^{p}\widehat{\mathscr{C}}^{\mu\circ\nu^{\dagger}}(\Gamma)\cong\mathsf{L}^{p}\widehat{\mathscr{C}}_{\nu}\widehat{\mathscr{C}}^{\mu}(\Gamma)\cong\mathscr{C}_{\nu}\mathsf{L}^{q}\widehat{\mathscr{C}}^{\mu}\Gamma\to\mathsf{V}_{A}\) has an adjunct \(\Gamma\to\widehat{\mathscr{C}}_{\mu}\mathsf{R}_{q}\mathscr{C}_{\nu^{\dagger}}\mathsf{V}_{A}\) (which we will sometimes denote also by \(\ulcorner A\urcorner\)). Composing this with the lax naturality constraint of \(\mathsf{R}\), we get \[\Gamma\xrightarrow{\ \ulcorner A\urcorner\ }\widehat{\mathscr{C}}_{\mu}\mathsf{R}_{q}\mathscr{C}_{\nu^{\dagger}}\mathsf{V}_{A}\xrightarrow{\ \mathsf{R}_{\mu}\ }\mathsf{R}_{r}\mathscr{C}_{\mu}\mathscr{C}_{\nu^{\dagger}}\mathsf{V}_{A}=\mathsf{R}_{r}\mathsf{V}_{(\mu\circ\nu^{\dagger})\boxplus A},\] whose adjunct \(\mathsf{L}^{r}\Gamma\to\mathsf{V}_{(\mu\circ\nu^{\dagger})\boxplus A}\) we take as \(\ulcorner(\mu\circ\nu^{\dagger})\boxplus A\urcorner\). Finally, we define \(j^{\mu\circ\nu^{\dagger}}_{\Gamma,A}\) to make the diagram in Figure 6(a) commute. This uses the universal property of the front rectangle as a pullback. Since \(i^{\mu\circ\nu^{\dagger}}_{\mathsf{V}_{A},\mathsf{E}_{A}}\) is fixed along with the local universe, this definition of \(j\) is strictly stable. This completes Definition 3.4(i). Note that the left-hand square in back above is also a pullback (defining \(\Gamma\triangleright^{\mu\circ\nu^{\dagger}}A\)), but the right-hand one is not: it is naturality of the lax constraint for \(\mathsf{R}\). However, since \(\mathsf{L}^{r}\) inverts this constraint, that square also becomes a pullback upon application of \(\mathsf{L}^{r}\). Thus, \(\mathsf{L}^{r}(j^{\mu\circ\nu^{\dagger}}_{\Gamma,A})\) is a pullback of \(i^{\mu\circ\nu^{\dagger}}_{\mathsf{V}_{A},\mathsf{E}_{A}}\).
For (ii) of Definition 3.4, let \(\varrho:r\to s\) be in \(\mathcal{L}\) (hence transparent in \(\mathcal{L}[\mathcal{S}^{\dagger}]\)), and suppose we have \(A\in\widehat{\mathrm{Ty}}^{!}_{p}(\widehat{\mathscr{C}}^{\mu\circ\nu^{\dagger}}(\widehat{\mathscr{C}}^{\varrho}\Gamma))\). (Thus, the \(\Gamma\) in the preceding proof of part (i) is now \(\widehat{\mathscr{C}}^{\varrho}(\Gamma)\).) We first observe that in an adjoint modal pre-model, the construction of \(\ell\) in Definition 3.4 is equivalent to the diagram in Figure 6(b), in which the map \(k\) and the square \((*)\) are defined by the diagram in Figure 6(c). Then \((*)\) is a pullback, so \(\ell\) is a pullback of \(\widehat{\mathscr{C}}_{\varrho}(j^{\mu\circ\nu^{\dagger}}_{\widehat{\mathscr{C}}^{\varrho}(\Gamma),A})\). Hence \(\mathsf{L}^{s}(\ell)\) is a pullback of \(\mathsf{L}^{s}\widehat{\mathscr{C}}_{\varrho}(j^{\mu\circ\nu^{\dagger}}_{\widehat{\mathscr{C}}^{\varrho}(\Gamma),A})\), which is isomorphic to \(\mathscr{C}_{\varrho}\mathsf{L}^{r}(j^{\mu\circ\nu^{\dagger}}_{\widehat{\mathscr{C}}^{\varrho}(\Gamma),A})\). But we observed that \(\mathsf{L}^{r}(j^{\mu\circ\nu^{\dagger}}_{\widehat{\mathscr{C}}^{\varrho}(\Gamma),A})\) is a pullback of \(i^{\mu\circ\nu^{\dagger}}_{\mathsf{V}_{A},\mathsf{E}_{A}}\); thus \(\mathsf{L}^{s}(\ell)\) is also a pullback of \(\mathscr{C}_{\varrho}(i^{\mu\circ\nu^{\dagger}}_{\mathsf{V}_{A},\mathsf{E}_{A}})\). By Definition 5.15, \(\mathscr{C}_{\varrho}(i^{\mu\circ\nu^{\dagger}}_{\mathsf{V}_{A},\mathsf{E}_{A}})\) is stably anodyne; hence \(\mathsf{L}^{s}(\ell)\) is anodyne, and so \(\ell\) is anodyne by Lemma 5.16. Thus the fillers required by (ii) exist; we make them strictly stable as in [29, Lemmas 3.4.1.4 and 3.4.3.2]. \(\Box\) ### Negative modalities **Definition 5.18**: A modal pre-model \(\mathscr{C}\) has **negative pre-modalities** if for any _sinister_ \(\mu:p\to q\), and \(\Gamma\in\mathscr{C}_{q}\) with \(A\in\mathrm{Ty}_{q}(\Gamma)\), we have \(\mu\Diamond A\in\mathrm{Ty}_{p}(\mathscr{C}_{\mu^{\dagger}}\Gamma)\) such that \(\mathscr{C}_{\mu^{\dagger}}\Gamma\triangleright(\mu\Diamond A)\cong\mathscr{C}_{\mu^{\dagger}}(\Gamma\triangleright A)\) over \(\mathscr{C}_{\mu^{\dagger}}\Gamma\). **Theorem 5.19**: _If \((\widehat{\mathscr{C}},\mathscr{C})\) is an adjoint modal pre-model over \((\mathcal{L},\mathcal{S})\) such that \(\mathscr{C}\) has negative pre-modalities over \(\mathcal{L}[\mathcal{S}^{\dagger}]\), then \((\widehat{\mathscr{C}},\widehat{\tau}^{!})\) has negative modalities over \(\mathcal{L}[\mathcal{S}^{\dagger}]\)._ **Proof.** Let \(\mu:p\to q\) be in \(\mathcal{S}\), and \(\Gamma\in\widehat{\mathscr{C}}_{p}\) with \(A\in\widehat{\mathrm{Ty}}^{!}_{q}(\widehat{\mathscr{C}}_{\mu}\Gamma)=\mathrm{Ty}^{!}_{q}(\mathsf{L}^{q}\widehat{\mathscr{C}}_{\mu}\Gamma)\). Thus, we have \(\mathsf{E}_{A}\in\mathrm{Ty}_{q}(\mathsf{V}_{A})\) and \(\ulcorner A\urcorner:\mathscr{C}_{\mu}\mathsf{L}^{p}\Gamma\cong\mathsf{L}^{q}\widehat{\mathscr{C}}_{\mu}\Gamma\to\mathsf{V}_{A}\). By assumption, we have \(\mu\Diamond\mathsf{E}_{A}\in\mathrm{Ty}_{p}(\mathscr{C}_{\mu^{\dagger}}\mathsf{V}_{A})\) such that \(\mathscr{C}_{\mu^{\dagger}}\mathsf{V}_{A}\triangleright(\mu\Diamond\mathsf{E}_{A})\cong\mathscr{C}_{\mu^{\dagger}}(\mathsf{V}_{A}\triangleright\mathsf{E}_{A})\) over \(\mathscr{C}_{\mu^{\dagger}}\mathsf{V}_{A}\). We define \(\mathsf{V}_{\mu\Diamond A}=\mathscr{C}_{\mu^{\dagger}}\mathsf{V}_{A}\) and \(\mathsf{E}_{\mu\Diamond A}=\mu\Diamond\mathsf{E}_{A}\), and take \(\ulcorner\mu\Diamond A\urcorner:\mathsf{L}^{p}\Gamma\to\mathscr{C}_{\mu^{\dagger}}\mathsf{V}_{A}\) to be the adjunct of \(\ulcorner A\urcorner\) across \(\mathscr{C}_{\mu}\dashv\mathscr{C}_{\mu^{\dagger}}\). These data define \(\mu\Diamond A\in\widehat{\mathrm{Ty}}^{!}_{p}(\Gamma)\), and the displayed isomorphism induces the comprehension isomorphism required of a negative modality; since it is fixed along with the local universe, it is strictly stable. \(\Box\)
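Unwinding Definition 5.18 in the canonical pre-models of Lemma 6.2 below, where every map is a type projection, the negative pre-modality is simply the lock functor applied to the projection itself:
\[\mathsf{p}_{\mu\Diamond A}\;\cong\;\mathscr{C}_{\mu^{\dagger}}(\mathsf{p}_{A})\qquad\text{over }\mathscr{C}_{\mu^{\dagger}}\Gamma,\]
which matches the intended semantics of \(\mu\Diamond\) as a direct-image (right adjoint) modality.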
## 6 Diagrams of 1-topoi Combining Theorems 5.12, 5.17 and 5.19, we have the following. (Recall Assumption 2.4.) **Theorem 6.1**: _Let \(\mathcal{L}\) be a 2-category with a class of morphisms \(\mathcal{S}\). If an adjoint modal pre-model \((\widehat{\mathscr{C}},\mathscr{C})\) over \((\mathcal{L},\mathcal{S})\) is such that \(\mathscr{C}\) has pre-\(\Pi\)-structure, positive pre-modalities, and negative pre-modalities over \(\mathcal{L}[\mathcal{S}^{\dagger}]\), then \((\widehat{\mathscr{C}},\widehat{\tau}^{!})\) models MATT over \(\mathcal{L}[\mathcal{S}^{\dagger}]\). \(\Box\)_ Any category with pullbacks has a canonical natural pseudo-model where all maps are type projections. **Lemma 6.2**: _Let \(\mathcal{M}\) be an adjoint mode theory, and \(\mathscr{C}:\mathcal{M}\to\mathcal{C}\mathit{at}\) be a pseudofunctor such that each \(\mathscr{C}_{p}\) is locally cartesian closed. If we make \(\mathscr{C}\) a modal pre-model in the canonical way, as above, then it has pre-\(\Pi\)-structure, positive pre-modalities, and negative pre-modalities._ **Proof.** Since \(\mathscr{C}_{p}\) is locally cartesian closed and everything is a type projection, we have pre-\(\Pi\)-structure. For positive pre-modalities we take \(i^{\mu}_{\Gamma,A}\) to be an identity, and similarly for negative pre-modalities. \(\Box\) **Theorem 6.3**: _Let \(\kappa\) be an infinite regular cardinal, \(\mathcal{L}\) a \(\kappa\)-small 2-category with a class of morphisms \(\mathcal{S}\), and \(\mathscr{C}:\mathcal{L}\to\mathcal{C}\mathit{at}\) a pseudofunctor such that each \(\mathscr{C}_{p}\) is locally cartesian closed with \(\kappa\)-small limits, each \(\mathscr{C}_{\mu}\) preserves \(\kappa\)-small limits, and \(\mathscr{C}_{\mu}\) has a right adjoint if \(\mu\in\mathcal{S}\). Then \(\widehat{\mathscr{C}}\) models extensional MATT over \(\mathcal{L}[\mathcal{S}^{\dagger}]\)._ **Proof.** By Lemma 4.5, local cartesian closure lifts from \(\mathscr{C}\) to \(\widehat{\mathscr{C}}\). Thus, \((\widehat{\mathscr{C}},\mathscr{C})\) is an adjoint modal pre-model, so Theorem 6.1 and Lemma 6.2 yield a model of MATT. Composition and diagonals yield weakly stable \(\Sigma\)-types and extensional identity types in each \(\mathscr{C}_{p}\), hence strictly stable ones mode-locally by Theorem-Schema 5.7. \(\Box\) **Remark 6.4**: In addition, the following should follow from Lemma 4.5 and Theorem-Schema 5.7. * If each \(\mathscr{C}_{p}\) has finite coproducts, then \(\widehat{\mathscr{C}}\) models sum types at each mode. * If each \(\mathscr{C}_{p}\) is locally presentable and each \(\mathscr{C}_{\mu}\) is accessible, then each \(\widehat{\mathscr{C}}_{p}\) is again locally presentable. Thus, by the methods of [28], \(\widehat{\mathscr{C}}\) models inductive types and quotient-inductive types at each mode. * If \(\mathscr{C}\) is a diagram of Grothendieck topoi and geometric morphisms, then each \(\widehat{\mathscr{C}}_{p}\) is also a topos. Thus, if there are enough inaccessible cardinals, \(\widehat{\mathscr{C}}\) models universes at each mode (see [16, 42, 13, 40]). Let \(\mathcal{T}\mathit{opos}\) denote the 2-category of Grothendieck topoi, geometric morphisms, and transformations.
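For instance, in the canonical pseudo-model of Lemma 6.2, the weakly stable structure invoked in the proof of Theorem 6.3 can be sketched as follows (our presentation):
\[\mathsf{p}_{\Sigma(A,B)}\;=\;\mathsf{p}_{A}\circ\mathsf{p}_{B}:\Gamma\triangleright A\triangleright B\to\Gamma,\qquad\Gamma\triangleright A\triangleright A[\mathsf{p}_{A}]\triangleright\mathrm{Id}_{A}\;\cong\;\Gamma\triangleright A,\]
with the second isomorphism induced by the diagonal of \(\mathsf{p}_{A}\), giving extensional identity types.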
**Theorem 6.5**: _Let \(\mathcal{L}\) be a finite 2-category and \(\mathscr{E}:\mathcal{L}^{\mathsf{coop}}\to\mathcal{T}\mathit{opos}\) a pseudofunctor. Then the co-dextrification \(\widehat{\mathscr{E}}\) models extensional MATT over \(\mathcal{L}[\mathcal{L}^{\dagger}]\), with positive and negative modalities representing inverse image and direct image functors respectively, and extensional MLTT at each mode. \(\Box\)_ **Remark 6.6**: Theorem 6.5 does not state explicitly how to extract conclusions about \(\mathscr{E}\) from the interpretation of MATT in \(\widehat{\mathscr{E}}\). We will not try to make this precise here, but the idea is that \(\widehat{\mathscr{E}}_{p}\) can be viewed as a "presentation" of \(\mathscr{E}_{p}\) via the reflector \(\mathsf{L}^{p}:\widehat{\mathscr{E}}_{p}\to\mathscr{E}_{p}\), and that the interpretation of MATT respects this "quotient". For instance, the anodyne context morphisms (Definition 5.14) in \(\widehat{\mathscr{E}}_{p}\) are precisely those that are inverted by \(\mathsf{L}^{p}\); thus MATT is "unable to distinguish" contexts that present the same object of \(\mathscr{E}_{p}\). One way to make this more precise would be to use Quillen model categories. We end by discussing some examples of simple classes of diagrams in \(\mathcal{T}\mathit{opos}\), to explore the flexibility and the limits of Theorems 6.3 and 6.5. As we will see, in some cases extra left adjoints already exist, so that co-dextrification is not necessary; but even in this case, some coherence results like those of section 5 are often still needed (see Remark 5.3). Table 1 summarizes some of the following examples, along with whether left adjoints already exist, and pointers to related theories in the literature. **Example 6.7**: If \(\mathcal{L}\) consists of two objects \(p,q\) and one nonidentity morphism \(\mu:p\to q\), then a functor \(\mathcal{L}^{\mathsf{coop}}\to\mathcal{T}\mathit{opos}\) is a single geometric morphism. The resulting instance of MATT has two modes related by an adjoint pair of modalities \(\mu\boxplus\) and \(\mu\Diamond\). It is related to the split-context theory AdjTT of [45], and can be interpreted in any geometric morphism. In particular, there is a unique geometric morphism from any topos \(\mathscr{E}\) to \(\mathbf{Set}\).
The resulting instance of MATT combines the usual internal language of \(\mathscr{E}\) at one mode with the classical world of \(\mathbf{Set}\) at another mode, with a "discrete objects" modality \(\mu\boxplus\) taking any set to an object of \(\mathscr{E}\), having a right adjoint "global sections" modality \(\mu\Diamond\) sending an object of \(\mathscr{E}\) to its set of global points. **Example 6.9**: Next, let \(\mathcal{L}\) be the walking reflective adjunction: two modes related by an adjunction \(\mu\dashv\nu\) whose counit is an isomorphism. Thus we can interpret MATT over this \(\mathcal{L}\) in an arbitrary reflective adjunction between toposes, giving a modal type theory for a topos equipped with a subtopos. If the inclusion functor has a further right adjoint, we have a coreflective adjunction in \(\mathcal{T}\mathit{opos}\), and we can interpret MATT over \(\mathcal{L}[\mathcal{L}^{\dagger}]\) with \(\nu\) sinister. The induced geometric morphism from the larger topos to the smaller one is then called _totally connected_. For instance, the "topos of trees" used in guarded recursion theory (presheaves on \((\mathbb{N},\leq)\)) is totally connected over \(\mathbf{Set}\); the modal type theories that it models are discussed in [12, §9] and [11, §VI]. (In this topos, the left adjoint happens to already have a further left adjoint, so co-dextrification is not required to interpret modal type theory.)
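Concretely, one presentation of this mode theory (our labeling of the generators; the orientation is a choice) is:
\[\mathcal{L}\;=\;\big\langle\;\mu:p\to q,\ \ \nu:q\to p,\ \ \eta:1_{p}\Rightarrow\nu\circ\mu,\ \ \varepsilon:\mu\circ\nu\Rightarrow 1_{q}\ \text{invertible}\;\big\rangle,\]
subject to the triangle identities.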
**Example 6.10**: Taking \(\mathcal{L}\) to be the opposite of the one from Example 6.9, we can interpret MATT in an arbitrary _coreflective_ adjunction between toposes. This is the same as a _connected_ geometric morphism, such as that from sheaves on some connected space to \(\mathbf{Set}\). If the right adjoint has a further right adjoint, so that we can interpret MATT over \(\mathcal{L}[\mathcal{L}^{\dagger}]\), the geometric morphism is called _local_. This property rarely holds for the topos of sheaves on a space (a "little topos"), but it often does for toposes whose _objects_ can be interpreted as some kind of space ("big toposes"). Big toposes are a natural home for _synthetic topology_. One is Johnstone's topological topos [18], whose objects are a sort of sequential convergence space; the internal language of this topos is used for instance in [8]. A related topos is \(\kappa\)-condensed sets [36], which has been advocated for the study of algebraic objects equipped with topology. (When \(\kappa\) is inaccessible, these are also called pyknotic sets [3]. The category of all "condensed sets" is locally cartesian closed but not a topos.) Many local toposes are also _cohesive_, meaning that the leftmost adjoint of their adjoint triple has a further left adjoint. This left adjoint is not usually finitely continuous, so it cannot be represented internally as a judgmental modality with co-dextrification, although it can be introduced axiomatically as in [38, 31]. When it exists, co-dextrification is not needed to interpret the other modalities. But Johnstone's topological topos and \(\kappa\)-condensed sets are not cohesive, so co-dextrification is necessary in those cases. Non-cohesive local toposes turn out to have many advantages. One clear advantage is that they can include non-locally-connected spaces, which arise naturally in many parts of mathematics. In addition, they often do a better job of faithfully encoding topological notions such as unions of closed sets, constructions of cell complexes (including geometric realization of simplicial sets), and cohomology. In [9] such a topos was used to represent the Kleene-Kreisel functionals and model principles of intuitionism. **Example 6.11**: We can simplify the mode theory of Example 6.10 by removing the mode corresponding to the base topos. Then \(\mathcal{L}\) has one mode \(p\) and a single idempotent comonad \(\mu:p\to p\), and is again finite. Thus, MATT over this \(\mathcal{L}\) can be interpreted in any topos that is connected over some base, with the base topos visible as the modal types for the comonad \(\mu\boxplus\). For the modal type theory of [35], by contrast, we can represent its modality negatively, and use it as its own lock functor in semantics, thereby interpreting this instance of MATT in any topos equipped with such an endofunctor. Unfortunately, the intended model of [35] is an \((\infty,1)\)-topos without an evident 1-categorical analogue, so it is not covered by this paper. By [18, Theorem 8.1], there is a geometric morphism \(S:\mathscr{E}\to\mathbf{sSet}\) from Johnstone's topological topos \(\mathscr{E}\) to the topos \(\mathbf{sSet}\) of simplicial sets, whose direct image \(S_{*}\) is the total singular complex (suitably generalized) and whose inverse image \(S^{*}\) is geometric realization.
Since both \(\mathscr{E}\) and \(\mathbf{sSet}\) are local over \(\mathbf{Set}\), this allows us to reason formally about geometric realization using an instance of MATT with three modes -- say \(t\) for the topological topos, \(s\) for simplicial sets, and \(d\) for discrete sets -- with sinister coreflective adjunctions relating \(d\) to both \(t\) and \(s\), and a sinister morphism \(\sigma:s\to t\) for the geometric realization adjunction. As \(\mathscr{E}\) is not cohesive (though \(\mathbf{sSet}\) is), and geometric realization is not a right adjoint, this would be impossible without co-dextrification. Using [18, Theorem 8.2], we can do something similar for geometric realization of "simplicial spaces", i.e. simplicial objects of \(\mathscr{E}\). ## 7 Conclusion and future work We have shown that, contrary to appearances, general modal type theories formulated with "context locks" following [12, 11] can be interpreted in diagrams of categories without requiring additional left adjoints to interpret the locks. This significantly expands the potential semantics of such theories, strengthening the argument that they are a good general approach to modal dependent type theories. In addition, we have formulated MATT, a general context-lock modal type theory that unifies the positive modalities of [12] with the negative ones of [11], and shown that it is the natural type theory to interpret in our semantics. We have, however, left many open questions for future research, such as the following. 1. Can the assumption of \(\kappa\)-small limits be weakened, specifically when \(\kappa>\omega\)? 2. It is known [40] that intensional dependent type theory can be interpreted in any \((\infty,1)\)-topos. Can intensional MATT be interpreted in any \((\infty,1)\)-topos? 3. Is there a full "internal language correspondence" relating MATT to suitable diagrams of categories? E.g. do adjoint modal context structures have a homotopy theory that presents diagrams of categories? 4. Does MATT satisfy normalization, and which \((\mathcal{L},\mathcal{S})\) are decidable? (See Remark 2.6.) 5. Is there a general modal dependent type theory using left multi-liftings, and can it be interpreted in the co-dextrification? Can it be generalized to cases where left multi-liftings do not exist? 6. In [27], _simple_ modal type theories were unified with substructural ones. Is there a context-lock approach to substructurality? Can it be unified with modal dependent type theory? ## Appendix A The universal property of co-dextrification Here we sketch a proof that the co-dextrification is a right adjoint. We continue the notation of section 4. Let \(\mathcal{L}ex_{\kappa}\) denote the 2-category of categories with \(\kappa\)-small limits, functors that preserve such limits, and natural transformations. Let \(\mathcal{P}\!\mathcal{S}_{\mathit{colax}}(\mathcal{M},\mathcal{L}ex_{\kappa})\) denote the 2-category of pseudofunctors, colax natural transformations, and modifications from \(\mathcal{M}\) to \(\mathcal{L}ex_{\kappa}\). (A colax natural transformation \(F:\mathscr{C}\to\mathscr{D}\) consists of components \(F_{p}:\mathscr{C}_{p}\to\mathscr{D}_{p}\) and 2-cells \(F_{\mu}:F_{q}\circ\mathscr{C}_{\mu}\Rightarrow\mathscr{D}_{\mu}\circ F_{p}\) for \(\mu:p\to q\), satisfying functoriality and naturality.)
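Mates are used repeatedly in what follows, so we record the standard formula: given adjunctions \(\mathsf{L}\dashv\mathsf{R}\) and \(\mathsf{L}'\dashv\mathsf{R}'\), functors \(F,G\), and a 2-cell \(\alpha:\mathsf{L}'\circ F\Rightarrow G\circ\mathsf{L}\), its mate is
\[\bar{\alpha}\;=\;\Big(F\mathsf{R}\xrightarrow{\ \eta'F\mathsf{R}\ }\mathsf{R}'\mathsf{L}'F\mathsf{R}\xrightarrow{\ \mathsf{R}'\alpha\mathsf{R}\ }\mathsf{R}'G\mathsf{L}\mathsf{R}\xrightarrow{\ \mathsf{R}'G\varepsilon\ }\mathsf{R}'G\Big),\]
and this assignment is bijective and compatible with pasting, which is what "functoriality and naturality of mates" refers to below.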
Let \(\mathcal{P}\!\mathcal{S}^{\mathit{adj}}(\mathcal{M}^{\mathsf{coop}},\mathcal{L}ex_{\kappa})\) denote the 2-category of: * Pseudofunctors \(\mathscr{D}:\mathcal{M}^{\mathsf{coop}}\to\mathcal{L}ex_{\kappa}\), with action on morphisms and 2-cells written as \(\mathscr{D}^{\mu}\) and \(\mathscr{D}^{\alpha}\) respectively, and such that each \(\mathscr{D}^{\mu}\) has a right adjoint \(\mathscr{D}_{\mu}\) in \(\mathcal{L}ex_{\kappa}\). * Pseudonatural transformations between pseudofunctors \(\mathcal{M}^{\mathsf{coop}}\to\mathcal{L}ex_{\kappa}\). * All modifications between these. There is a 2-functor \(U:\mathcal{P}\!\mathcal{S}^{\mathit{adj}}(\mathcal{M}^{\mathsf{coop}},\mathcal{L}ex_{\kappa})\to\mathcal{P}\!\mathcal{S}_{\mathit{colax}}(\mathcal{M},\mathcal{L}ex_{\kappa})\). Since passage to adjoints is pseudofunctorial, for \(\mathscr{D}\in\mathcal{P}\!\mathcal{S}^{\mathit{adj}}(\mathcal{M}^{\mathsf{coop}},\mathcal{L}ex_{\kappa})\) the right adjoints \(\mathscr{D}_{\mu}\) form a pseudofunctor \(\mathcal{M}\to\mathcal{L}ex_{\kappa}\). If \(F:\mathscr{C}\to\mathscr{D}\) is a pseudonatural transformation between \(\mathscr{C},\mathscr{D}\in\mathcal{P}\!\mathcal{S}^{\mathit{adj}}(\mathcal{M}^{\mathsf{coop}},\mathcal{L}ex_{\kappa})\), then the pseudonaturality isomorphism \(\mathscr{D}^{\mu}\circ F_{q}\cong F_{p}\circ\mathscr{C}^{\mu}\) has a mate \(F_{q}\circ\mathscr{C}_{\mu}\Rightarrow\mathscr{D}_{\mu}\circ F_{p}\), providing the 2-cells to make \(F\) a colax natural transformation between pseudofunctors \(\mathcal{M}\to\mathcal{L}ex_{\kappa}\). Thus, section 4 shows that from \(\mathscr{C}\in\mathcal{P}\!\mathcal{S}_{\mathit{colax}}(\mathcal{M},\mathcal{L}ex_{\kappa})\) we have constructed \(\widehat{\mathscr{C}}\in\mathcal{P}\!\mathcal{S}^{\mathit{adj}}(\mathcal{M}^{\mathsf{coop}},\mathcal{L}ex_{\kappa})\) along with a map \(\mathsf{L}:U\widehat{\mathscr{C}}\to\mathscr{C}\) in \(\mathcal{P}\!\mathcal{S}_{\mathit{colax}}(\mathcal{M},\mathcal{L}ex_{\kappa})\). In fact, \(\widehat{\mathscr{C}}\) is even a strict 2-functor \(\mathcal{M}^{\mathsf{coop}}\to\mathcal{L}ex_{\kappa}\), and the map \(\mathsf{L}\) is even _pseudo_ natural. The former distinction is immaterial (any pseudofunctor into \(\mathcal{L}ex_{\kappa}\) is equivalent to a strict one); but regarding the latter, it appears to be important for the universal property that we use colax transformations in general even though \(\mathsf{L}\) is pseudo. **Theorem A.2**: _Let \(\mathscr{C}\in\mathcal{P}\!\mathcal{S}_{\mathit{colax}}(\mathcal{M},\mathcal{L}ex_{\kappa})\), with \(\widehat{\mathscr{C}}\) constructed as in section 4. Then for any \(\mathscr{D}\in\mathcal{P}\!\mathcal{S}^{\mathit{adj}}(\mathcal{M}^{\mathsf{coop}},\mathcal{L}ex_{\kappa})\), the functor_ \[\mathcal{P}\!\mathcal{S}^{\mathit{adj}}(\mathcal{M}^{\mathsf{coop}},\mathcal{L}ex_{\kappa})(\mathscr{D},\widehat{\mathscr{C}})\xrightarrow{(\mathsf{L}\circ U-)}\mathcal{P}\!\mathcal{S}_{\mathit{colax}}(\mathcal{M},\mathcal{L}ex_{\kappa})(U\mathscr{D},\mathscr{C})\] _is an equivalence of categories. Thus, \(\mathscr{C}\mapsto\widehat{\mathscr{C}}\) is bicategorically right adjoint to the forgetful functor \(U:\mathcal{P}\!\mathcal{S}^{\mathit{adj}}\to\mathcal{P}\!\mathcal{S}_{\mathit{colax}}\)._ **Sketch of proof.** We construct a pseudo-inverse.
Let \(G:U\mathscr{D}\to\mathscr{C}\) be colax natural; we define \(\widehat{G}:\mathscr{D}\to\widehat{\mathscr{C}}\) as follows. For \(\Gamma\in\mathscr{D}_{r}\) and \(\mu:p\to r\), we set \[(\widehat{G}_{r}\Gamma)^{\mu}=G_{p}(\mathscr{D}^{\mu}(\Gamma))\quad\in\mathscr{C}_{p}.\] And for \(\alpha:\mu\Rightarrow\nu\circ\varrho\), we let \((\widehat{G}_{r}\Gamma)^{\alpha}\) be the composite \[(\widehat{G}_{r}\Gamma)^{\nu}=G_{q}(\mathscr{D}^{\nu}(\Gamma))\xrightarrow{G_{q}(\overline{\mathscr{D}^{\alpha}(\Gamma)})}G_{q}(\mathscr{D}_{\varrho}\mathscr{D}^{\mu}(\Gamma))\xrightarrow{G_{\varrho}}\mathscr{C}_{\varrho}(G_{p}(\mathscr{D}^{\mu}(\Gamma)))=\mathscr{C}_{\varrho}((\widehat{G}_{r}\Gamma)^{\mu})\] where \(\overline{\mathscr{D}^{\alpha}(\Gamma)}:\mathscr{D}^{\nu}(\Gamma)\to\mathscr{D}_{\varrho}\mathscr{D}^{\mu}(\Gamma)\) is the mate of the map \[\mathscr{D}^{\alpha}(\Gamma):\mathscr{D}^{\varrho}\mathscr{D}^{\nu}(\Gamma)\to\mathscr{D}^{\mu}(\Gamma)\] arising from pseudofunctoriality of the left adjoints for \(\mathscr{D}\). All the axioms follow from the functoriality and naturality of mates, as does functoriality in \(\Gamma\); thus we have a functor \(\widehat{G}_{r}:\mathscr{D}_{r}\to\widehat{\mathscr{C}}_{r}\). Now let \(\omega:r\to s\); we show \(\widehat{G}\) commutes with \(\mathscr{D}^{\omega}\) and \(\widehat{\mathscr{C}}^{\omega}\), componentwise. We have \[(\widehat{G}_{r}(\mathscr{D}^{\omega}(\Gamma)))^{\mu}=G_{p}(\mathscr{D}^{\mu}\mathscr{D}^{\omega}(\Gamma))\cong G_{p}(\mathscr{D}^{\omega\circ\mu}(\Gamma))=(\widehat{G}_{s}\Gamma)^{\omega\circ\mu}=(\widehat{\mathscr{C}}^{\omega}(\widehat{G}_{s}\Gamma))^{\mu}.\] If \(\mathscr{D}\) is a strict 2-functor, this is a strict equality; otherwise, it is an isomorphism satisfying the pseudonaturality axioms. On one side, if we map \(\widehat{G}\) back down into \(\mathcal{P}\!\mathcal{S}_{\mathit{colax}}(\mathcal{M},\mathcal{L}ex_{\kappa})(U\mathscr{D},\mathscr{C})\), its components are \((\widehat{G}_{p}\Gamma)^{1_{p}}=G_{p}(\mathscr{D}^{1_{p}}(\Gamma))\cong G_{p}\Gamma\). This defines an isomorphism \(G\cong\mathsf{L}\circ U\widehat{G}\) in \(\mathcal{P}\!\mathcal{S}_{\mathit{colax}}(\mathcal{M},\mathcal{L}ex_{\kappa})(U\mathscr{D},\mathscr{C})\). On the other side, suppose \(F:\mathscr{D}\to\widehat{\mathscr{C}}\) is a morphism in \(\mathcal{P}\!\mathcal{S}^{\mathit{adj}}(\mathcal{M}^{\mathsf{coop}},\mathcal{L}ex_{\kappa})\). We want to compare it to \(\widehat{\mathsf{L}\circ UF}\), so we compute using the definition of the latter: \[(\widehat{(\mathsf{L}\circ UF)}_{r}(\Gamma))^{\mu}=\mathsf{L}^{p}F_{p}(\mathscr{D}^{\mu}(\Gamma))\cong\mathsf{L}^{p}(\widehat{\mathscr{C}}^{\mu}(F_{r}\Gamma))=(F_{r}\Gamma)^{\mu}.\] These define the components of an isomorphism \(\widehat{\mathsf{L}\circ UF}\cong F\) in \(\mathcal{P}\!\mathcal{S}^{\mathit{adj}}(\mathcal{M}^{\mathsf{coop}},\mathcal{L}ex_{\kappa})\). \(\Box\)
2302.12448
Subspace based Federated Unlearning
Federated learning (FL) enables multiple clients to train a machine learning model collaboratively without exchanging their local data. Federated unlearning is an inverse FL process that aims to remove a specified target client's contribution in FL to satisfy the user's right to be forgotten. Most existing federated unlearning algorithms require the server to store the history of the parameter updates, which is not applicable in scenarios where the server storage resource is constrained. In this paper, we propose a simple-yet-effective subspace based federated unlearning method, dubbed SFU, that lets the global model perform gradient ascent in the orthogonal space of input gradient spaces formed by other clients to eliminate the target client's contribution without requiring additional storage. Specifically, the server first collects the gradients generated from the target client after performing gradient ascent, and the input representation matrix is computed locally by the remaining clients. We also design a differential privacy method to protect the privacy of the representation matrix. Then the server merges those representation matrices to get the input gradient subspace and updates the global model in the orthogonal subspace of the input gradient subspace to complete the forgetting task with minimal model performance degradation. Experiments on MNIST, CIFAR10, and CIFAR100 show that SFU outperforms several state-of-the-art (SOTA) federated unlearning algorithms by a large margin in various settings.
Guanghao Li, Li Shen, Yan Sun, Yue Hu, Han Hu, Dacheng Tao
2023-02-24T04:29:44Z
http://arxiv.org/abs/2302.12448v1
# Subspace based Federated Unlearning ###### Abstract Federated learning (FL) enables multiple clients to train a machine learning model collaboratively without exchanging their local data. Federated unlearning is an inverse FL process that aims to remove a specified target client's contribution in FL to satisfy the user's right to be forgotten. Most existing federated unlearning algorithms require the server to store the history of the parameter updates, which is not applicable in scenarios where the server storage resource is constrained. In this paper, we propose a simple-yet-effective subspace based federated unlearning method, dubbed SFU, that lets the global model perform gradient ascent in the orthogonal space of input gradient spaces formed by other clients to eliminate the target client's contribution without requiring additional storage. Specifically, the server first collects the gradients generated from the target client after performing gradient ascent, and the input representation matrix is computed locally by the remaining clients. We also design a differential privacy method to protect the privacy of the representation matrix. Then the server merges those representation matrices to get the input gradient subspace and updates the global model in the orthogonal subspace of the input gradient subspace to complete the forgetting task with minimal model performance degradation. Experiments on MNIST, CIFAR10, and CIFAR100 show that SFU outperforms several state-of-the-art (SOTA) federated unlearning algorithms by a large margin in various settings. Machine Learning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated 
Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Unlearning, Federated Un, Federated Unlearning, Federated Unlearning, Unlearning, Federated Unlearning, Federated Unlearning, Federated Un, Unlearning, Federated Unlearning, Federated Un, Federated Unlearning, Unlearning, Federated Unlearning, Federated Un, Federated Unlearning, Unlearning, Federated Unlearning, Unlearning, Federated Un, Federated Unlearning, Unlearning, Federated Un, Federated Un, Unlearning, Federated Un, Unlearning, Federated Un, Unlearning, Federated Un, Federated Un, Federated Un, Unlearning, Federated Un, Unlearning, Federated Un, Unlearning, Federated Un, Federated Un, Unlearning, Federated Un, Unlearning, Federated Un, Unlearning, Federated Un, Unlearning, Federated Un, Federated Un, Unlearning, Federated Un, Federated Un, Federated Un, Unlearning, Federated Un, Federated Un, Unlearning, Federated Un, Unlearning, Federated Un, Unlearning, Federated Un, Federated Un, Unlearning, Federated Un, Un, Federated Un, Unlearning, Federated Un, Federated Un, Unlearning, Federated Un, Federated 
Un, Federated Un, Federated Un, Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Un, Federated Un, Federated Un, Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Un, Federated Un, Federated Un, Un, Federated Un, Un, Federated Un, Federated Un, Federated Un, Federated Un, Un, Federated Un, Federated Un, Federated Un, Federated Un, Un, Federated Un, Federated Un, Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Un, Federated Un, Federated Un, Un, Federated Un, Federated Un, Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Un Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un Un, Federated Un, Federated Un Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Un Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Un, Federated Un Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un, Federated Un Un, Federated Un Un, Federated Un, Federated Un Un, Federated Un, Federated Un, Federated Un, Federated Un Un, Federated Un, Federated Un, Federated Un, 
Another line of work stores historical updated gradient data in the server and uses it to roll back the trained global model (Wu et al., 2022). The above unlearning methods all require the client or server to retain some additional data or gradient information, which is not possible in FL scenarios with limited storage resources. In each FL training round, the local training of each client is a process that reduces the empirical loss (Rumelhart et al., 1986). We argue that unlearning can be formulated as the inverse process of learning, in the sense that gradient ascent on the target client can realize the forgetting of that client's data. However, the ascent loss is unbounded, and we need to limit the gradient during this process to ensure the quality of the model after unlearning (Chen and Wainwright, 2015). The whole process can be regarded as a constrained optimization problem: maximize the empirical loss of the target client subject to a constraint on the model performance. In this paper, we propose a Subspace-based Federated Unlearning method, dubbed SFU. SFU restricts the target client's gradient-ascent update to the orthogonal complement of the input space of the remaining clients in order to remove the target client's contribution from the final trained global model. In SFU, the server only needs the gradient information provided by the target client and the representation matrix information provided by the other clients, without directly accessing the original data of any client. Moreover, SFU can be applied to a model at any training stage without considering the specific details of model training and model aggregation. At the same time, SFU does not require the client or server to store additional historical gradient information or data. Specifically, SFU participants can be divided into three kinds of roles: the target client to be forgotten, the remaining clients, and the server.
In SFU, the target client performs gradient ascent locally based on the global model and sends the resulting gradient to the server; each remaining client selects a certain amount of local data to build a representation matrix and sends it to the server; the server merges the received representation matrices and obtains the input subspace with Singular Value Decomposition (SVD) (Hoecker and Kartvelishvili, 1996); finally, the server projects the gradient of the target client onto the orthogonal complement of the input space and updates the global model with it. In addition, we design a differential privacy method to protect the privacy of clients in the process of sending the representation matrices (Li et al., 2021; Truex et al., 2019). Each client adds random perturbation factors to each vector of its representation matrix to prevent possible privacy leaks; these perturbation factors have no effect on the input-space search or the final model. Empirical results show that SFU beats other SOTA baselines with a 1%-10% improvement on the test sets. In the end, we summarize the main contributions as follows:

* We introduce subspace learning to federated unlearning for the first time and propose a new federated unlearning algorithm called SFU, which updates the global model by gradient ascent in the subspace orthogonal to the input space of the remaining clients, achieving the forgetting goal without a large performance loss or additional storage costs.
* We design a differential privacy method to prevent the possible privacy leakage caused by the transmission of the client representation matrices during unlearning. This method adds random perturbation factors to each vector of the representation matrix but does not affect the unlearning process.
* We conduct extensive experiments to evaluate the effectiveness of SFU, which significantly outperforms several SOTA baselines on various datasets including MNIST, CIFAR10, and CIFAR100.

## 2 Related Work

**Machine unlearning.** The term "machine unlearning" refers to completely forgetting a piece of training data, which requires reverting the effects of that data on the extracted features and the model. Machine unlearning was first proposed by Cao and Yang (2015), who transform statistical query learning into a summation form and achieve unlearning by updating a small part of the summation. However, this algorithm only works for traditional machine learning methods that admit such a transformation; since then, machine unlearning for different ML models has been explored. Ginart et al. (2019) formulate the problem and the concept of effective data deletion in machine learning. They also propose two efficient deletion solutions for the K-means clustering algorithm. Izzo et al. (2021) focus on supervised linear regression and propose an approximate data deletion method called projective residual update (PRU) for linear and logistic models. The computational cost of PRU is linear in the feature dimension, but PRU is not applicable to more complex models such as neural networks. Bourtoule et al. (2021) introduce the more general framework SISA to reduce the computational overhead associated with forgetting. The main idea of SISA is to split the training data into several disjoint fragments, each of which trains a sub-model. To remove a specific sample, the algorithm simply retrains the sub-model that was trained on this sample.
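To make the sharding idea concrete, here is a hypothetical minimal sketch of SISA-style unlearning; the shard layout, the placeholder learner, and the majority-vote aggregation are our own illustrative choices, not the original implementation:

```python
def train(shard):
    # Placeholder learner: predicts the majority label seen in its shard.
    labels = [y for _, y in shard]
    majority = max(set(labels), key=labels.count)
    return lambda x: majority

class SISA:
    def __init__(self, data, num_shards):
        # Split the training data (a list of (x, y) pairs) into disjoint
        # fragments, each of which trains its own sub-model.
        self.shards = [data[i::num_shards] for i in range(num_shards)]
        self.models = [train(s) for s in self.shards]

    def unlearn(self, sample):
        # Only the sub-model whose shard contained the sample is retrained.
        for i, shard in enumerate(self.shards):
            if sample in shard:
                shard.remove(sample)
                self.models[i] = train(shard)
                break

    def predict(self, x):
        # Aggregate the sub-model predictions by majority vote.
        votes = [m(x) for m in self.models]
        return max(set(votes), key=votes.count)
```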
However, existing machine unlearning works focus on ML models in traditional centralized settings, where the training data is assumed to be globally accessible; they are not suitable for learning in FL settings.

**Federated unlearning.** Compared with centralized learning, there are few works on unlearning in FL. Liu et al. (2021) first introduce unlearning into the field of FL and propose FedEraser. The main idea is to adjust the historical parameter updates of federated clients through a retraining process in FL and reconstruct the unlearned model. However, this process requires additional communication between the clients and the server. Recently, Wu et al. (2022) develop a forgetting approach that removes the historical parameter updates of the target client and recovers model performance through a knowledge distillation process. Both of these steps require the server to keep a history of all client updates. In addition, the knowledge distillation approach requires the server to have some additional unlabeled data. In some application scenarios, these requirements cannot be taken for granted. In contrast, our method does not need the server to store historical updates or additional unlabeled data, and it has better privacy. Our method mainly addresses the case where a client (referred to as the target client) wants to remove its contribution from the global model. We let the global model perform gradient ascent in the orthogonal complement of the input space of the remaining clients to achieve federated unlearning. Different from these works (and the one we propose), Wang et al. (2022) propose a framework for forgetting a specific class or category in FL. Our approach is closely related to the federated unlearning approach recently proposed by Halimi et al. (2022). They formulate the unlearning problem as a constrained maximization problem by restricting the model to an \(\ell_{2}\)-norm ball around a suitably chosen reference model and allow the target client to perform the unlearning by using the Projected Gradient Descent (PGD) algorithm (Thudi et al., 2022). However, an \(\ell_{2}\)-norm ball cannot provide an effective guarantee for the performance of the model after unlearning. We instead add a constraint on the performance of the unlearned model. Our method restricts the gradient ascent to a subspace orthogonal to the input space of the other clients, and this constrained gradient update has minimal impact on the performance of the model on those clients. We will quantitatively show that our method has a performance guarantee.

## 3 Methodology

We propose a novel federated unlearning method, as shown in Algorithm 1, that can eliminate a client's contribution and vastly reduce the unlearning cost in the FL system. This method does not require the server to keep the history of parameter updates from each client or any additional training. The key idea is to use the restricted gradient information of the target client to modify the final trained model.

```
1: Input: The number of samples in each client \(p\), the global model \(w\), the local dataset \(\mathcal{D}^{i}\) of client \(i\), random factors \(\lambda_{i}^{l}\) for layer \(l\) in client \(i\).
2: Target client \(C_{I}\):
3: \(g_{C_{I}}\leftarrow-\eta\nabla\ell_{C_{I}}(w;(x,y))\);
4: Send \(g_{C_{I}}\) to the server;
5: Other clients:
6: Representation matrix for layer \(l\) in client \(i\): \(\mathbf{R}_{i}^{l}=[\lambda_{i}^{l}x_{1}^{l},\lambda_{i}^{l}x_{2}^{l},...,\lambda_{i}^{l}x_{p}^{l}]\);
7: Send \(\mathbf{R}_{i}^{l}\) to the server;
8: Server:
9: \(\mathbf{R}^{l}=[\mathbf{R}_{1}^{l},...,\mathbf{R}_{C_{I}-1}^{l},\mathbf{R}_{C_{I}+1}^{l},...,\mathbf{R}_{N}^{l}]\);
10: Perform SVD on \(\mathbf{R}^{l}=\mathbf{U}^{l}\mathbf{\Sigma}^{l}(\mathbf{V}^{l})^{T}\);
11: \(S^{l}=span\{\mathbf{u}_{1}^{l},\mathbf{u}_{2}^{l},...,\mathbf{u}_{k}^{l}\}\);
12: \(\tilde{g}_{C_{I}}=proj(g_{C_{I}},S)\);
13: \(w=w-(g_{C_{I}}-\tilde{g}_{C_{I}})\)
```
**Algorithm 1** Subspace-based Federated Unlearning (SFU)

### Problem Setup

Suppose that there are \(N\) clients, denoted as \(C_{1},...,C_{N}\), respectively. Client \(C_{i}\) has a local dataset \(\mathcal{D}^{i}\). The goal of traditional FL is to collaboratively learn a machine learning model \(w\) over the dataset \(\mathcal{D}\triangleq\bigcup_{i\in[N]}\mathcal{D}^{i}\):
\[\mathcal{L}(w)=\sum_{i=1}^{N}\frac{|\mathcal{D}^{i}|}{|\mathcal{D}|}L_{i}(w), \tag{1}\]
\[w^{*}=\operatorname*{arg\,min}_{w}\mathcal{L}(w), \tag{2}\]
where \(L_{i}(w)=\mathbb{E}_{(x,y)\sim\mathcal{D}^{i}}[\ell_{i}(w;(x,y))]\) is the empirical loss of \(C_{i}\); during federated training, each client minimizes its empirical risk \(L_{i}(w)\), and \(w^{*}\) is the final model trained by the FL process. Now we consider how to forget the contribution of the target client \(C_{I}\). A natural idea is to increase the empirical risk \(L_{C_{I}}(w)\) of the target client \(C_{I}\), which is equivalent to reversing the learning process. However, simply maximizing the loss can harm the performance of the model on the other clients. Federated unlearning needs to forget the contribution of the target client \(C_{I}\) while ensuring the overall model performance. Thus, the objective of federated unlearning is defined below:
\[\begin{split}&\operatorname*{arg\,max}_{w}L_{C_{I}}(w)=\mathbb{E}_{(x,y)\sim\mathcal{D}^{C_{I}}}[\ell_{C_{I}}(w;(x,y))]\\ & s.t.\qquad\mathcal{L}^{ul}(w)-\mathcal{L}^{ul}(w^{*})\leq\delta\end{split} \tag{3}\]
where \(\delta\) is a small allowed change in the empirical loss, and \(\mathcal{L}^{ul}(\cdot)\) is the empirical loss of the FL system after removing the target client:
\[\mathcal{L}^{ul}(w)=\sum_{i\in[N\setminus C_{I}]}\frac{|\mathcal{D}^{i}|}{|\mathcal{D}^{un}|}L_{i}(w), \tag{4}\]
where \(\mathcal{D}^{un}\triangleq\bigcup_{i\in[N\setminus C_{I}]}\mathcal{D}^{i}\) is the remaining dataset after removing the target client.

### Unlearning Metrics

Comparing the difference between the unlearned model and the retrained model is one criterion used to measure the effect of unlearning. Common dissimilarity metrics include the model test accuracy difference (Bourtoule et al., 2021), the \(\ell_{2}\)-distance (Wu et al., 2020), and the Kullback-Leibler (KL) divergence (Sekhari et al., 2021). However, in the FL scenario, such model-difference-based evaluation methods cannot intuitively indicate whether the contribution of a given client has been removed. Other metrics include privacy leakage in the differential privacy framework (Sekhari et al., 2021) and membership inference attacks (Graves et al., 2021; Baumhauer et al., 2022).
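Before turning to the evaluation protocol, the server-side computation in Algorithm 1 (lines 8-13) can be made concrete. The following is a minimal NumPy sketch of the per-layer subspace extraction and the projected update; the layer loop, the energy threshold `eps`, and all names are our own illustrative choices rather than the authors' released code:

```python
import numpy as np

def layer_subspace(R, eps=0.95):
    # R: (d_in, n_s) matrix whose columns are the (lambda-scaled)
    # sample representations collected from the remaining clients.
    U, sigma, _ = np.linalg.svd(R, full_matrices=False)
    energy = np.cumsum(sigma ** 2) / np.sum(sigma ** 2)
    k = int(np.searchsorted(energy, eps)) + 1  # smallest rank capturing eps of the energy
    return U[:, :k]                            # orthonormal basis of S^l

def sfu_update(w_layers, g_layers, R_layers, eps=0.95):
    # w_layers: per-layer weight matrices of shape (d_out, d_in);
    # g_layers: the target client's ascent updates of the same shapes,
    # e.g. g = -eta * grad(loss) as in Algorithm 1.
    new_w = []
    for w_l, g_l, R_l in zip(w_layers, g_layers, R_layers):
        U_k = layer_subspace(R_l, eps)
        g_in = g_l @ (U_k @ U_k.T)        # component of the gradient inside S^l
        new_w.append(w_l - (g_l - g_in))  # apply only the orthogonal part (line 13)
    return new_w
```

Note that rescaling the columns of \(\mathbf{R}^{l}\) by positive factors does not change their span, so the random factors \(\lambda_{i}^{l}\) in Algorithm 1 leave the computed basis, and hence the update, unchanged.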
In this paper, we use backdoor triggers (Gu et al., 2017) as an effective way to evaluate the performance of unlearning methods, similar to Wu et al. (2022). In particular, the target client uses a dataset in which a certain fraction of images have a backdoor trigger inserted. Thus, the global FL model becomes susceptible to the backdoor trigger. A successful unlearning process should then produce a model with low accuracy on the images carrying the backdoor trigger while maintaining good performance on regular (clean) images. Note that we use the backdoor triggers only as a way to evaluate the performance of unlearning methods; we do not consider any malicious clients (Xie et al., 2020; Bagdasaryan et al., 2020; Fung et al., 2020).

### Subspace-based Federated Unlearning (SFU)

We introduce a novel subspace-based federated unlearning framework, named SFU. The main insight of SFU is that we constrain the gradient generated by the target client's gradient ascent to the orthogonal complement of the input subspace of the other clients in order to remove the contribution of the target client from the global model. As shown in Fig. 1, SFU participants can be divided into three kinds of roles: the target client to be forgotten, the remaining clients, and the server. The target client performs gradient ascent and uploads the gradient to the server. The other clients compute their local representation matrices and upload them to the server. The server is responsible for computing the input spaces of the other clients and for the unlearning update of the global model. Next, we introduce the specific tasks of the three participants.

#### 3.3.1 Gradient ascent on the target client

To find a model with a large empirical loss on the target client \(C_{I}\), we can simply make several local passes of (mini-batch stochastic) gradient ascent on client \(C_{I}\) and add these gradient updates to the global model. In order to satisfy the constraint of Eq. (3), we need to consider a more careful way of adding the updated gradient of client \(C_{I}\) to the global model (Zhou et al., 2020; Qian et al., 2015). Given a neural network \(W\) and an input \(\mathbf{x}\), we obtain an output \(\mathbf{y}\):
\[W\mathbf{x}=\mathbf{y}_{1}. \tag{5}\]
When this model receives a gradient update \(\Delta w\), the output becomes:
\[(W+\Delta w)\mathbf{x}=\mathbf{y}_{2}. \tag{6}\]
The difference between the two outputs is:
\[\Delta\mathbf{y}=\mathbf{y}_{2}-\mathbf{y}_{1}=(W+\Delta w)\mathbf{x}-W \mathbf{x}=\Delta w\mathbf{x}. \tag{7}\]
When \(\Delta\mathbf{y}\) is \(\mathbf{0}\), the difference between the two outputs is minimized, which requires the gradient update \(\Delta w\) to be perpendicular to the subspace spanned by the original inputs \(\mathbf{x}\). Therefore, we can project the updated gradient of the target client \(C_{I}\) onto the orthogonal complement of the gradient subspace of \(\mathcal{D}^{un}\) to minimize the degradation of the global model performance (Farajtabar et al., 2020).

#### 3.3.2 Computation of representation matrix

We first need to consider how to represent the input space of \(\mathcal{D}^{un}\), the data of the other clients.
For an individual network, we construct the gradient subspace by the following two steps:
* For each layer \(l\) of the network, we first construct a representation matrix \(\mathbf{R}^{l}=[\mathbf{x}_{1}^{l},\mathbf{x}_{2}^{l},...,\mathbf{x}_{n_{s}}^{l}]\), concatenating along its columns the \(n_{s}\) representations obtained from forward passes of \(n_{s}\) random samples from the current training dataset through the network.
* Next, we perform SVD on \(\mathbf{R}^{l}=\mathbf{U}^{l}\mathbf{\Sigma}^{l}(\mathbf{V}^{l})^{T}\), followed by its \(k\)-rank approximation \((\mathbf{R}^{l})_{k}\) according to the following criterion for a given coefficient \(\epsilon^{l}\):
\[||(\mathbf{R}^{l})_{k}||_{F}^{2}\geq\epsilon^{l}||\mathbf{R}^{l}||_{F}^{2}. \tag{8}\]

\(S^{l}=span\{\mathbf{u}_{1}^{l},\mathbf{u}_{2}^{l},...,\mathbf{u}_{k}^{l}\}\), spanned by the first \(k\) column vectors of \(\mathbf{U}^{l}\), is the space of significant representations at layer \(l\), since it contains all the directions with the highest singular values in the representation (Saha et al., 2021). For FL scenarios, we need the data on each client to seek the gradient subspace of \(\mathcal{D}^{un}\). First, all clients except the target client \(C_{I}\) select the same number of local samples, compute the representation matrix \(\mathbf{R}_{i}^{l}\) for each layer \(l\), and send these matrices to the central server to construct the overall representation matrix. To protect the privacy of the representation matrices, we design a differential privacy algorithm. We add random factors \(\lambda_{i}^{l}\) to the representations of layer \(l\) from client \(i\) to avoid leaking information about the client data; this does not affect the search process of the subspace, because scaling preserves the spanned directions. In Eq. (7), if
\[\Delta w\mathbf{x}=0, \tag{9}\]
we have
\[\Delta w(\lambda\mathbf{x})=0. \tag{10}\]
The final set of representation matrices on the server is \(\mathbf{R}=\left\{\mathbf{R}^{1},\mathbf{R}^{2},...,\mathbf{R}^{L}\right\}\), with \(\mathbf{R}^{l}=[\lambda_{1}^{l}\mathbf{x}_{1}^{l},\lambda_{2}^{l}\mathbf{x}_{2}^{l},...,\lambda_{N}^{l}\mathbf{x}_{n_{s}}^{l}]\).

#### 3.3.3 Update of the global model on the server

After several local passes of (mini-batch stochastic) gradient ascent, client \(C_{I}\) sends the updated gradient \(g_{C_{I}}\) to the server. The server performs the update of the global model \(w\) after collecting the set of representation matrices \(\mathbf{R}\) and the gradient \(g_{C_{I}}\). The server first performs SVD (Hoecker and Kartvelishvili, 1996) on \(\mathbf{R}\) to get the set of input gradient subspaces \(S=\left\{S^{1},S^{2},...,S^{L}\right\}\). To achieve the goal of Eq. (3), we project \(g_{C_{I}}\) onto \(S\) and obtain the projection \(g_{C_{I}}^{c}\). Then \(g_{C_{I}}-g_{C_{I}}^{c}\) is orthogonal to \(\mathbf{R}\), and the server updates the global model \(w\) with \(g_{C_{I}}-g_{C_{I}}^{c}\):
\[w=w-(g_{C_{I}}-g_{C_{I}}^{c}). \tag{11}\]
The updated model \(w\) no longer contains the contribution of the target client \(C_{I}\) and maintains a performance similar to the original global model. After a global model is trained, a complete SFU unlearning process consists of three steps, as shown in Fig. 1:
* **Step 1.** Except for the target client \(C_{I}\), each client selects the same number of samples to calculate the representation matrix \(\mathbf{R}^{l}\) for each layer \(l\) of the network and sends it to the server after adding the random factors \(\lambda_{i}^{l}\).
* **Step 2.** The target client \(C_{I}\) performs several local passes of gradient ascent and sends the updated gradient to the server.
* **Step 3.** The server performs SVD on the set of representation matrices \(\mathbf{R}\) to get the set of input gradient subspaces \(S\), projects \(g_{C_{I}}\) onto \(S\), and removes the contribution of the target client \(C_{I}\) by updating the global model \(w\) as in Eq. (11).

In the end, we give several comments on the proposed SFU algorithm. Note that subspace learning has been commonly used in continual learning (Saha et al., 2021), meta-learning (Jiang et al., 2022), adversarial training (Li et al., 2022), and fast training of deep learning models (Li et al., 2022). However, SFU is the first work to use the orthogonal complement of the input gradient space for federated unlearning. In addition, SFU needs to seek the subspace from data stored in a dispersed manner and has to take the risk of privacy leakage into account.

## 4 Experiments

In this section, we empirically evaluate the effectiveness of SFU using different model architectures on three datasets. We divide the unlearning process into two parts: the removal of specific client contributions and the recovery of model performance. The experimental results show that our unlearning strategy can effectively remove the contribution of the target client from the global model with a low performance loss and can quickly recover the accuracy of the model in a few rounds of training. We first introduce the experimental setup and then present the evaluation results.

### Experimental Setup

**Datasets:** We evaluate SFU using three popular datasets: MNIST (Xiao et al., 2017), CIFAR10, and CIFAR100 (Krizhevsky et al., 2009), as described below:
* MNIST: It contains a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28 x 28 grayscale image associated with a label from 10 classes.
* CIFAR10: It consists of 60,000 32 x 32 color images in 10 classes, with 6,000 images per class. There are 50,000 training images and 10,000 test images.
* CIFAR100: It has 100 classes. Each class has 600 color images of size 32 x 32, of which 500 are used as a training set and 100 as a test set.

The training difficulty increases from MNIST over CIFAR10 to CIFAR100. We adopt two data distribution settings: IID (Independent-Identically-Distributed) and Non-IID (Non-Independent-Identically-Distributed). The Non-IID setting that we adopt is Dirichlet(\(\beta\)): the label distribution on each device follows the Dirichlet distribution, where \(\beta\) is a concentration parameter (\(\beta>0\)).

Figure 1: The pipeline of SFU. The whole process takes place after the FL model has been trained. The orange client represents the target client whose contribution is to be removed; the blue ones represent the other clients. The boxes on the right of the image represent global model updates that happen on the server.

**Baselines.** We choose three typical federated unlearning algorithms as our baselines: (1) Retraining: this method retrains the entire FL system without the target client to be forgotten, which is effective but computationally and communication-wise expensive; (2) "UL" (Wu et al., 2022): this method forgets the target client by subtracting the historical parameter updates of the target client from the global model.
Then, "UL" uses knowledge distillation to remedy the skew of the unlearned model caused by the subtraction, without using any data from the target client. (3) "GA": GA uses gradient ascent information on the target client to update the global model. Gradient ascent increases the loss of the model, which is the inverse process of learning and can achieve unlearning. However, the loss under gradient ascent is unbounded, so we set a gradient clipping norm when the global model is updated to reduce the probability of producing a random model.

**Models.** We employ three neural network architectures in our experiments.
* MLP: This is a fully-connected neural network architecture with 2 hidden layers. The numbers of neurons in the layers are 200 and 100, respectively.
* CNN: This network architecture consists of 2 convolutional layers with 64 5 \(\times\) 5 filters, followed by 2 fully connected layers with 800 and 500 neurons and a ReLU layer.
* ResNet: We use a smaller version of ResNet18 (He et al., 2016), with three times fewer feature maps across all layers.

We use the MLP and CNN networks for the MNIST dataset, and all three architectures for the CIFAR10 and CIFAR100 datasets. We use PyTorch (Paszke et al., 2019) to implement these models.

**Evaluation metric.** We insert backdoor attacks into the target client's updates to the global model, as described before, so that we can intuitively investigate the unlearning effect based on the attack success rate of the unlearned global model. In Tab. 1 and Tab. 2, we record the attack success rate as "atk acc". A lower "atk acc" represents a cleaner contribution removal from the target client. In our experiments, we implement the backdoor attack using a "pixel pattern" trigger of size 2 \(\times\) 2 and change the label to "9". Because of the prediction error of the model, we consider an error rate of less than 10% as successful forgetting of the target client's contribution. We also use the accuracy on the clean test data to measure the performance of the model after unlearning, denoted "test acc" in Tab. 1 and Tab. 2. A high accuracy indicates that unlearning has little impact on the performance of the model. Current unlearning methods usually adopt certain techniques to recover the model accuracy after unlearning, so we divide our evaluation of the unlearning approach into two respects: the removal of specific client contributions and the recovery of model performance.
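For illustration, both metrics can be computed as in the following minimal PyTorch sketch; the trigger location, its pixel value, and the model interface are our own assumptions rather than the paper's exact implementation:

```python
import torch

def add_trigger(images, target_label=9):
    # Stamp an assumed 2x2 "pixel pattern" trigger into a corner of each
    # (N, C, H, W) image batch and relabel every triggered image to class 9.
    triggered = images.clone()
    triggered[:, :, :2, :2] = 1.0
    labels = torch.full((len(images),), target_label, dtype=torch.long)
    return triggered, labels

@torch.no_grad()
def accuracy(model, loader, backdoor=False):
    # Returns "atk acc" when backdoor=True, otherwise "test acc".
    correct, total = 0, 0
    for images, labels in loader:
        if backdoor:
            images, labels = add_trigger(images)
        preds = model(images).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return 100.0 * correct / total
```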
\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline & & \multicolumn{2}{c|}{FedAvg} & \multicolumn{2}{c|}{UL} & \multicolumn{2}{c|}{GA} & \multicolumn{2}{c|}{SFU} \\ \hline Dataset & network & test acc & atk acc & test acc & atk acc & test acc & atk acc & test acc & atk acc \\ \hline \multirow{2}{*}{MNIST} & MLP & 97.73 & 93.26 & 77.19 & 0.0 & 96.80 & 0.0 & **97.66** & 0.15 \\ \cline{2-10} & CNN & 99.31 & 91.29 & 42.36 & 0.0 & 92.33 & 19.28 & **99.29** & 0.21 \\ \hline \multirow{3}{*}{CIFAR10} & MLP & 49.15 & 89.66 & 26.17 & 0.0 & 48.61 & 0.01 & **49.08** & 0.0 \\ \cline{2-10} & CNN & 72.83 & 99.36 & 18.75 & 0.0 & 57.48 & 0.37 & **57.75** & 0.0 \\ \cline{2-10} & ResNet & 76.12 & 98.27 & 43.75 & 0.0 & **73.58** & 65.98 & 44.60 & 0.0 \\ \hline \multirow{3}{*}{CIFAR100} & MLP & 18.86 & 9.93 & 2.24 & 0.0 & 13.42 & 0.00 & **18.70** & 0.0 \\ \cline{2-10} & CNN & 37.47 & 97.77 & 2.51 & 0.0 & 20.12 & 1.41 & **36.31** & 0.0 \\ \cline{2-10} & ResNet & 39.68 & 90.38 & 7.38 & 0.0 & **37.63** & 6.73 & 26.84 & 0.0 \\ \hline \end{tabular} \end{table} Table 1: Accuracy results after unlearning (IID).

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline & & \multicolumn{2}{c|}{Retraining} & \multicolumn{2}{c|}{UL-Distillation} & \multicolumn{2}{c|}{GA-retraining} & \multicolumn{2}{c|}{SFU-retraining} \\ \hline Dataset & network & test acc & atk acc & test acc & atk acc & test acc & atk acc & test acc & atk acc \\ \hline \multirow{2}{*}{MNIST} & MLP & 97.88 & 0.0 & 97.88 & 0.0 & 97.05 & 9.30 & **97.91** & 0.28 \\ \cline{2-10} & CNN & 99.48 & 11.55 & 99.20 & 0.31 & 99.34 & 37.13 & **99.37** & 12.63 \\ \hline \multirow{3}{*}{CIFAR10} & MLP & 48.77 & 0.00 & 47.83 & 0.0 & 48.09 & 20.36 & **48.81** & 0.10 \\ \cline{2-10} & CNN & 74.86 & 0.00 & 72.33 & 0.51 & 72.43 & 26.47 & **72.87** & 2.18 \\ \cline{2-10} & ResNet & 76.95 & 7.23 & **77.47** & 9.68 & 77.19 & 81.06 & 76.84 & 3.12 \\ \hline \multirow{3}{*}{CIFAR100} & MLP & 18.47 & 0.0 & 9.43 & 0.0 & 18.81 & 2.46 & **18.46** & 0.0 \\ \cline{2-10} & CNN & 38.25 & 0.0 & 27.05 & 0.0 & 37.08 & 51.36 & **37.81** & 14.39 \\ \cline{2-10} & ResNet & 40.15 & 0.81 & 38.43 & 0.79 & 38.11 & 61.91 & **40.93** & 0.92 \\ \hline \end{tabular} \end{table} Table 2: Accuracy results after retraining (IID).

**Implementation details.** We consider two scenarios: (i) we have ten clients with one target client, and all clients participate in each training round; (ii) we have 100 clients with one target client, and only 10% of the clients participate in each training round. In the experiments on the removal of specific client contributions, we apply unlearning to the FL model after 100 rounds of training. In the experiments on the recovery of model performance, we start FL training from a random model for full retraining; for SFU and GA, we continue FL training from the unlearned model without the involvement of the target client; for UL, we use knowledge distillation with public data on the server to recover the model accuracy. Tab. 1 and Tab. 2 only show the results of scenario (i). Results of scenario (ii) and further implementation details are given in Appendix A due to limited space.

### Results of Contribution Removal

We conducted extensive experiments to determine the advantage of SFU in removing the contribution of a specific client from the global model. In addition, we demonstrate the robustness and superiority of SFU under different training degrees of the model and different levels of data heterogeneity.
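The heterogeneity experiments below rely on the Dirichlet(\(\beta\)) partition described in the setup; a standard recipe for generating such a split is sketched here (our own illustrative code, with smaller \(\beta\) producing stronger heterogeneity; the exact sampling details in the paper may differ):

```python
import numpy as np

def dirichlet_partition(labels, num_clients, beta, seed=0):
    # Assign sample indices to clients with per-class proportions
    # drawn from Dirichlet(beta).
    rng = np.random.default_rng(seed)
    client_indices = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        proportions = rng.dirichlet(np.full(num_clients, beta))
        cuts = (np.cumsum(proportions)[:-1] * len(idx)).astype(int)
        for client, part in enumerate(np.split(idx, cuts)):
            client_indices[client].extend(part.tolist())
    return client_indices
```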
All results are reported based on the global model after performing unlearning. The results show that SFU can effectively remove the contribution of the target client while maintaining model accuracy, and that it is robust to data heterogeneity and to the training degree of the model.

**Efficient forgetting by SFU.** Tab. 1 reports the unlearning effects of SFU and the other baselines on the different IID datasets and model architectures. The results show that SFU can efficiently remove the contribution of the target client. We observe that SFU successfully eliminates the contribution of the target client and achieves low accuracy on the backdoor data. For example, SFU reduces the backdoor attack success rate to 0%, or close to it, on all datasets. The other baselines achieve similar results. This shows that current unlearning methods can effectively remove the contribution of the target client.

**High model accuracy for SFU.** Tab. 1 also reports the accuracy of the model on the clean test data after unlearning for each baseline. The results show that SFU achieves the best results on the various datasets and model architectures. In contrast, the UL method causes great damage to the performance of the model, and direct gradient ascent has a large probability of producing a low-accuracy model. For example, with the CNN architecture on the MNIST dataset, the accuracy of the UL model is only 42.76%, which is 56.55% lower than the 99.31% accuracy of the original FL model. The accuracy of the model generated by GA is 92.33%, which is about 7% lower than the 99.29% accuracy of the model generated by SFU; this indicates that restricting the gradient to the orthogonal complement of the input space reduces the loss of the original model performance.

Figure 2: Model accuracy after execution of SFU and other baselines at various stages of FL model training.

Figure 3: Model accuracy after execution of SFU and other baselines under different degrees of data heterogeneity. The original global model is obtained by training with 10 clients, including one target client.

**Robustness to the original model maturity.** Federated unlearning can occur at any stage of FL, so the model may not have converged when federated unlearning is performed. We define the maturity of the model in terms of the number of rounds of model training: the more rounds the model has been trained, the more mature it is. Fig. 2 reports the effect of each baseline when unlearning is performed at different rounds of FL model training. The results show that SFU preserves the performance of the model to the greatest extent at any round of model training, and the accuracy of SFU is similar to that of the FL-trained model. UL removes the contribution easily in the early stage of model training. However, as the model continues to train, the degree of integration of the client models increases, and simply subtracting the historical weights of the client easily causes a significant decline in the accuracy of the model. For example, the UL accuracy on the MNIST dataset dropped from 97% to 66% as the model trained. GA achieves results similar to SFU, but still has lower accuracy, which indicates that SFU is robust to the maturity of the original model.

**Robustness on heterogeneous data.** We test the different unlearning algorithms on MNIST with different degrees of heterogeneity, with ten clients forming the FL system.
Fig. 3 shows that increasing the heterogeneity of the data distribution reduces the effect of all unlearning algorithms. This is because the aggregation of the global model becomes more complex as the heterogeneity increases, which makes it more challenging to separate the contributions of specific clients while maintaining model accuracy. However, the accuracy of SFU is close to that of the original FL model and the retrained model in all settings, which indicates that SFU is robust to data heterogeneity.

### Results of Model Accuracy Recovery

Below, we report the accuracy of the final model and the recovery efficiency of the different methods. All results are reported based on the global model after unlearning.

**High final accuracy of SFU.** Tab. 2 reports the model performance of each algorithm after 10 rounds of accuracy-recovery training and after 100 rounds of full retraining. We find that almost all methods achieve a low success rate of backdoor attacks, except for GA. The reason GA has a high backdoor attack success rate is that the model after unlearning is still near the original model, and it easily converges back to the original model with high accuracy and a high backdoor attack success rate. The final accuracy of SFU is the highest among all methods, and sometimes it even exceeds the accuracy of retraining, which suggests that updating in the subspace produces a better-initialized model that can achieve higher accuracy.

**High-speed accuracy recovery for SFU.** To compare the model accuracy recovery speed of SFU with each baseline, we track the accuracy and backdoor attack success rate of the different methods over the number of FL training rounds after unlearning; the results are shown in Fig. 4. We observe that SFU achieves high accuracy after one round of training, while the other methods require 5 or even more rounds to do so. SFU is more efficient in terms of computation and communication cost on the retained clients than the retraining baseline, while achieving comparable backdoor accuracy.

## 5 Conclusion

In this paper, we propose a novel federated unlearning approach that can successfully eliminate the contribution of a specified client to the global model and that minimizes the loss of model accuracy by performing a gradient ascent process within a suitable subspace at any stage of model training. Our approach relies only on the target client to be forgotten, without the server or any other client keeping track of its history of parameter updates. Our method also provides a differential privacy mechanism to protect the representation matrix information during unlearning. We use a backdoor attack to effectively evaluate the performance of the proposed method. Our experimental results demonstrate the efficiency and effectiveness of SFU.

Figure 4: Convergence plots for SFU and other baselines in different datasets with CNN.
2304.08946
Uniquely hamiltonian graphs for many sets of degrees
We give constructive proofs for the existence of uniquely hamiltonian graphs for various sets of degrees. We give constructions for all sets with minimum 2 (a trivial case added for completeness), all sets with minimum 3 that contain an even number (for sets without an even number it is known that no uniquely hamiltonian graphs exist), and all sets with minimum 4, except {4}, {4,5}, and {4,6}. For minimum degree 3 and 4, the constructions also give 3-connected graphs. We also introduce the concept of seeds, which makes the above results possible and might be useful in the study of Sheehan's conjecture. Furthermore, we prove that 3-connected uniquely hamiltonian 4-regular graphs exist if and only if 2-connected uniquely hamiltonian 4-regular graphs exist.
Gunnar Brinkmann, Matthias De Pauw
2023-04-18T12:36:59Z
http://arxiv.org/abs/2304.08946v4
# Uniquely hamiltonian graphs for many sets of degrees

###### Abstract

We give constructive proofs for the existence of uniquely hamiltonian graphs for various sets of degrees. We give constructions for all sets with minimum 2 (a trivial case), all sets with minimum 3 that contain an even number (for sets without an even number it is known that no uniquely hamiltonian graphs exist), and all sets with at least two elements and minimum 4 where all other elements are at least 10. For minimum degree 3 and 4, the constructions also give 3-connected graphs.

## 1 Introduction

The most important problem for hamiltonian cycles is of course which properties guarantee the existence of a hamiltonian cycle, but as soon as the existence of a hamiltonian cycle is known, the question arises how many hamiltonian cycles exist. In [4], recent results and an overview of older results on graphs with few hamiltonian cycles are given. The extremal case is when a graph contains a single hamiltonian cycle, that is: it is _uniquely hamiltonian_. A crucial role for the existence of a uniquely hamiltonian graph is played by the combination of vertex degrees present in the graph. Already in 1946 Tutte reported a result by Smith that uniquely hamiltonian cubic graphs don't exist [8]. This was later improved by Thomason [7] showing that uniquely hamiltonian graphs where all vertices have odd degree don't exist. In [5] it is shown that no \(d\)-regular uniquely hamiltonian graphs exist if \(d\geq 23\). So while there are, for example, no uniquely hamiltonian graphs with all degrees 3 and none with all degrees 24, a special case of what we will prove is that uniquely hamiltonian graphs exist if both these vertex degrees are allowed. For even \(d\) with \(4\leq d\leq 22\) it is not known whether \(d\)-regular uniquely hamiltonian graphs exist. In [3] Fleischner shows that there are uniquely hamiltonian graphs with minimum degree 4. He constructs graphs with vertices of degree 4 and 14 and graphs where _the maximum degree can grow even larger_ - without specifying which degrees can occur. We will use an improved version of his method to prove that for all sets \(M\) with at least two elements, containing a 4 and otherwise only numbers \(d\geq 10\), uniquely hamiltonian graphs exist, so that the set of vertex degrees is exactly \(M\). Furthermore we completely characterize the sets of degrees with minimum 2 or 3 for which uniquely hamiltonian graphs exist. The term _graph_ always refers to a simple undirected graph, that is: without multiple edges and without loops. If multiple edges are allowed, we use the term _multigraph_. Loops are never allowed, as they are trivial in the context of uniquely hamiltonian graphs. We define the degree set \(M_{deg}(G)\) of a graph (or multigraph) \(G\) with vertex set \(V\) as \(M_{deg}(G)=\{\deg(v)\ |\ v\in V\}\). For a set \(M=\{d_{0},d_{1},d_{2},\ldots,d_{k}\}\) with \(d_{0}<d_{1}<\cdots<d_{k}\), we say that a 2-connected (if \(2\in M\)), resp. 3-connected (otherwise) uniquely hamiltonian graph \(G\) _realizes_ \(M\) if \(M_{deg}(G)=M\). If such a \(G\) exists, we define \(M\) to be _uhc-realizable_. Next to the question whether a set \(M\) is uhc-realizable, it is also interesting which role is played by the larger degrees.
Our emphasis is on the smallest degree \(d_{0}\) and we want to know whether the number of times that the degrees \(d_{1},\ldots,d_{k}\) occur can be bounded by a constant even for very large graphs, so that the average degree can be arbitrarily close to the smallest degree. On the other hand it might also be interesting to know whether the larger degrees can occur an unbounded number of times and maybe even occur for at least a fixed fraction of the vertices in arbitrarily large graphs. The average degree would in that case be bounded from below by the minimum degree times a constant factor \(c>1\). The strongest requirement is that both can occur, even in combination, depending on the \(d_{i}\). We formalize that by the following definition: For a set \(M=\{d_{0},d_{1},d_{2},\ldots,d_{k}\}\) with \(k>0\), \(d_{0}<d_{1}<\cdots<d_{k}\) we say that \(M\) is _strongly uhc-realizable_, if for each partition \(D_{1},D_{2}\) of \(\{d_{1},\ldots,d_{k}\}\) (with one of \(D_{1},D_{2}\) possibly empty) there are constants \(c_{1}\in\mathbb{N},c_{2}\in\mathbb{R}\), \(c_{2}>0\), and an infinite sequence of graphs \(G_{i}=(V_{i},E_{i})\) realizing \(M\), so that for all \(d\in D_{1}\) each \(G_{i}\) has at most \(c_{1}\) vertices of degree \(d\), and for each \(d^{\prime}\in D_{2}\) each \(G_{i}\) has at least \(c_{2}|V_{i}|\) vertices of degree \(d^{\prime}\).

## 2 Minimum degree 2 or 3

We will start with an easy remark that is mainly contained for consistency:

**Remark 2.1**: _Any finite set \(M=\{d_{0}=2,d_{1},d_{2},\ldots,d_{k}\}\subset\mathbb{N}\) with \(2<d_{1}<d_{2}<\cdots<d_{k}\) is uhc-realizable and if \(k>0\), it is also strongly uhc-realizable._

**Proof:** We will first prove that \(M\) is uhc-realizable. \(|M|=1\) is trivial. If \(|M|=2\), one can take \(K_{d_{1}+1}\) and subdivide the edges of a hamiltonian cycle. If \(|M|>2\) one can e.g. take complete graphs \(K_{d_{1}+1},\ldots,K_{d_{k}+1}\), remove an edge \(e_{d_{i}}\in K_{d_{i}+1}\) for \(1\leq i<k\), an edge \(e^{\prime}_{d_{i}}\in K_{d_{i}+1}\) for \(2\leq i\leq k\) with \(e_{d_{i}}\cap e^{\prime}_{d_{i}}=\emptyset\) for \(3\leq i<k\), and then connect the endpoints of \(e_{d_{i}}\) and \(e^{\prime}_{d_{i+1}}\) for \(2\leq i<k\). The result is obviously hamiltonian and 2-connected, and after subdividing the edges of a hamiltonian cycle one has a uniquely hamiltonian graph with exactly the vertex degrees in \(M\). To show that \(M\) is strongly uhc-realizable, assume a partition \(D_{1},D_{2}\) to be given. If \(D_{2}=\emptyset\) one can subdivide edges on the hamiltonian cycle arbitrarily often to obtain the sequence of graphs. If \(D_{2}\neq\emptyset\) one can use the above construction only for the degrees in \(D_{2}\) and connect arbitrarily many copies of these graphs to the initial one by opening edges on the hamiltonian cycle.

**Theorem 2.2**: _A finite set \(M=\{d_{0}=3,d_{1},d_{2},\ldots,d_{k}\}\) with \(3<d_{1}<d_{2}<\cdots<d_{k}\) of natural numbers is uhc-realizable if and only if \(M\) contains an even number. In that case it is also strongly uhc-realizable._

**Proof:** The fact that if \(M\) contains no even number, there is no uniquely hamiltonian graph \(G\) with \(M_{deg}(G)=M\) is a well known result of Thomason [7] - no matter what the condition on connectivity is. To show that \(M\) is uhc-realizable if \(M\) contains an even number, it remains to be shown that there is a \(3\)-connected uniquely hamiltonian graph \(G\) with \(M_{deg}(G)=M\).
The key to the construction is the Petersen graph with one vertex removed. We denote it by \(P_{v}\). The graph is depicted in Figure 1 with the neighbours of the removed vertex labeled \(x,y,z\). A well known property of this graph (one can in fact also check by hand) is that there is no hamiltonian path between any two of \(x,y,z\), but with \(z\) removed, there are exactly two hamiltonian paths from \(x\) to \(y\): \(x,1,2,3,4,5,6,y\) and \(x,4,5,6,1,2,3,y\). With this graph we will prove the following claim:

**Claim:** There is a \(3\)-connected graph \(G_{M}\) with \(M_{deg}(G_{M})=M\) and exactly two hamiltonian cycles that has at least two vertices of degree \(3\) that are passed by the two hamiltonian cycles in different ways.

**Proof of the claim:** For a graph \(G=(V,E)\), an edge \(e=\{x^{\prime},y^{\prime}\}\in E\), and a vertex \(z^{\prime}\in V\), \(z^{\prime}\not\in e\), we define the graph \(O_{P}(G,e,z^{\prime})\) as the graph obtained as follows. Let \(P^{\prime}\) be a copy of \(P_{v}\). We start with the disjoint union of \(G\) and \(P^{\prime}\). Then we remove the edge \(e\), connect \(x^{\prime}\) with the vertex \(x\) in \(P^{\prime}\), \(y^{\prime}\) with the vertex \(y\) in \(P^{\prime}\), and identify \(z^{\prime}\) with the vertex \(z\) in \(P^{\prime}\). If \(G\) is \(3\)-connected, then \(O_{P}(G,e,z^{\prime})\) is \(3\)-connected too. As \(x,y,z\) form a \(3\)-cut, the edges of \(P^{\prime}\) that belong to a possible hamiltonian cycle of \(O_{P}(G,e,z^{\prime})\) form one path. This path can only start in \(x\) and end in \(y\), as a path from \(z\) to \(x\) (or to \(y\)) would miss vertices of \(P^{\prime}\) that can not be adjacent to two edges from the hamiltonian cycle outside \(P^{\prime}\). So it would have to be a path from \(x\) to \(y\) missing only \(z\), which must be adjacent to two edges of the hamiltonian cycle from outside \(P^{\prime}\). If \(e\) lies on a hamiltonian cycle of \(G\), then \(O_{P}(G,e,z^{\prime})\) is also hamiltonian, or to be precise: If \(G=(V,E)\) is a graph, \(z^{\prime}\in V\), and \(e=\{x^{\prime},y^{\prime}\}\in E\) is an edge that lies on exactly one hamiltonian cycle of \(G\), then \(O_{P}(G,e,z^{\prime})\) is a hamiltonian graph with exactly two hamiltonian cycles \(H_{1},H_{2}\). \(O_{P}(G,e,z^{\prime})\) has \(4\) vertices of degree \(3\) passed by \(H_{1}\) and \(H_{2}\) in two different ways (the vertices \(1,3,4,6\) in the copy \(P^{\prime}\)) and \(6\) edges that lie on exactly one of \(H_{1},H_{2}\) (\(\{x,1\}\), \(\{x,4\}\), \(\{3,4\}\), \(\{1,6\}\), \(\{y,3\}\), and \(\{y,6\}\) - all in \(P^{\prime}\)).

Figure 1: The graph \(P_{v}\): the Petersen graph with one vertex removed. The graph has no hamiltonian path between any two of \(x,y,z\), but if \(z\) is removed, there are exactly two hamiltonian paths from \(x\) to \(y\): \(x,1,2,3,4,5,6,y\) and \(x,4,5,6,1,2,3,y\).

We now start the construction with the graph \(G_{s}\) given in Figure 2. It has two hamiltonian cycles and if one extended the definition of \(O_{P}()\) to multigraphs, it could also be obtained from
For each even degree \(d\) we choose a vertex of degree 2 and increase its degree until it is \(d\) and analogously for each odd degree by increasing the degree of a vertex of degree 3: If \(M\) contains \(h\) even degrees \(d^{\prime}_{1}<\cdots<d^{\prime}_{h}\), we subdivide the edge \(\{a,z\}\) in \(G_{s}\) by \(h-1\) vertices \(v_{1},\ldots,v_{h-1}\) of degree 2. For each \(1\leq i\leq h-1\) we apply \(O_{P}(.,e,v_{i})\) precisely \((d^{\prime}_{i}-2)/2\) times with \(e\) one of the edges lying on exactly one hamiltonian cycle after the last application of \(O_{P}(.,e,v_{i})\). For \(z\) we apply the operation \(O_{P}(.,e,z)\) precisely \((d^{\prime}_{h}-4)/2\) times. The result \(G_{e}\) of these operations is a graph with degree set \(\{3,d^{\prime}_{1},\ldots,d^{\prime}_{h}\}\), exactly two hamiltonian cycles, and 6 edges lying on exactly one hamiltonian cycle. If \(M\) contains \(j+1\) odd degrees \(3<d^{\prime\prime}_{1}<\cdots<d^{\prime\prime}_{j}\), for each \(1\leq i\leq j\) we choose a fixed vertex \(w_{i}\) of degree 3 in the graph obtained after increasing the degree of \(w_{i-1}\) to \(d^{\prime\prime}_{i-1}\) (or in \(G_{e}\) if \(i=1\)) and apply \(O_{P}(.,e,w_{i})\) precisely \((d^{\prime\prime}_{i}-3)/2\) times with \(e\) one of the edges lying on exactly one hamiltonian cycle after the last application of \(O_{P}(.,.,w_{i})\). The result \(G_{M}\) of these operations is a graph with the properties stated in the claim. To \(G_{M}\) we can now apply a well known technique (see [6]) to prove the theorem: take two copies of \(G_{M}\) and two vertices \(v\), \(v^{\prime}\) of degree 3 - one in each copy - that are passed by the hamiltonian cycles in the copies in two different ways. Assume that the neighbours are \(a,b,c\), resp. \(a^{\prime},b^{\prime},c^{\prime}\) and that the hamiltonian cycles pass \(v\) as \(a,v,b\) and \(a,v,c\) (and accordingly for \(v^{\prime}\)). Removing \(v\) and \(v^{\prime}\) and adding the edges \(\{a,c^{\prime}\},\{b,b^{\prime}\},\{c,a^{\prime}\}\), only one hamiltonian cycle remains - using the paths \(a\) to \(c\) in one copy and \(c^{\prime}\) to \(a^{\prime}\) in the other. The fact that the graph is 3-connected follows easily. To show that \(M\) is strongly uhc-realizable, assume a partition \(D_{1},D_{2}\) to be given. If we have a uniquely hamiltonian graph \(G\) with \(M_{deg}(G)=M\) and the part \(D_{2}\) of the partition is empty, we can replace vertices of degree 3 by triangles of cubic vertices and obtain an infinite sequence of graphs with the desired properties. If \(D_{2}\neq\emptyset\), we can construct a graph \(G^{\prime}\) with \(M_{deg}(G^{\prime})=D_{2}\cup\{3\}\). Then, starting with \(G=G_{0}\), we can remove a vertex \(v^{\prime}\) of degree 3 in a copy of \(G^{\prime}\) and a vertex \(v\) of degree 3 in \(G\) and connect neighbours of \(v\) and \(v^{\prime}\) in a way that the two vertices not neighbouring \(v\), resp. \(v^{\prime}\) on the unique hamiltonian cycle are connected. The result \(G_{1}\) is again a graph with a unique hamiltonian cycle and Figure 2: The graph \(G_{s}\) to start the construction. degree set \(M\), but all degrees in \(D_{2}\) occur in \(G\) and \(G^{\prime}\). This process can be recursively applied to \(G_{1}\) to form \(G_{2}\), etc. to obtain a sequence of uniquely hamiltonian graphs with the required properties. ## 3 Minimum degree 4 The following construction is (up to some modifications) due to H. Fleischner [3]. Let \(P=(V,E)\) be a graph and \(s,t,v\in V\) be vertices. 
If there is a unique hamiltonian path from \(s\) to \(t\) in the graph \(P_{-v}=P[V\setminus\{v\}]\) induced by \(V\setminus\{v\}\), we call \(\mathcal{P}=(P,s,t,v)\) a _weak H-plugin_ or just an H-plugin. If in addition there is no hamiltonian path from \(s\) to \(t\) in \(P\) (so also containing \(v\)), we call \(\mathcal{P}=(P,s,t,v)\) a _strong H-plugin_. For an H-plugin \((P,s,t,v)\) and a graph \(G\) with a vertex \(x\) of degree \(3\) with neighbours \(y,x_{1},x_{2}\), we define the _P-splice_ of \(G\) at \(\{x,y\}\), denoted as \(O(x,y,\mathcal{P})\), as the graph obtained by removing \(x\), connecting \(x_{1}\) with the vertex \(s\) in a copy \(P^{\prime}\) of \(P\), \(x_{2}\) with the vertex \(t\) in \(P^{\prime}\), and identifying the vertex \(v\) in \(P^{\prime}\) with \(y\). This operation is sketched in Figure 3. We will also refer to it shortly as _splicing the edge_ \(\{x,y\}\). The notation \(O(x,y,\mathcal{P})\) does not take into account which of the vertices is \(x_{1}\) and which is \(x_{2}\), so in general \(O(x,y,\mathcal{P})\) is one of the two possibilities. Elementary arguments show that if \(P\) as well as \(G\) are \(3\)-connected, then \(O(x,y,\mathcal{P})\) is \(3\)-connected. The following lemma and corollary are stronger versions of Lemmas 1, 2, and 3 in [3].

**Lemma 3.1**: _(in parts already in [3]) Let \(G=(V,E)\) be a graph with a unique hamiltonian cycle \(C_{H}\), \(x\in V\) of degree \(3\) with neighbour \(y\), so that the edge \(\{x,y\}\) is not on \(C_{H}\). Let \(\mathcal{P}=(P,s,t,v)\) be an H-plugin._

_If at least one of the following three conditions is fulfilled, then \(O(x,y,\mathcal{P})\) has a unique hamiltonian cycle \(C_{H,O}\). Except for the edges incident with \(x\), all edges of \(C_{H}\) are also contained in \(C_{H,O}\)._
**(i)**: \(G[V\setminus\{y\}]\) _is not hamiltonian._
**(ii)**: \(\{x,y\}\) _lies in a triangle._
**(iii)**: \(\mathcal{P}\) _is a strong H-plugin._

Condition (iii) also explains the name _strong_ H-plugin: while in general the splicing of edges that are not on the unique hamiltonian cycle only guarantees a unique hamiltonian cycle in the result if the edges satisfy some extra condition, this extra condition is not necessary if \(\mathcal{P}\) is strong.

Figure 3: The splicing operation.

**Proof:** As \(s\) and \(t\) have only one edge to the outside of (the copy of) \(P\) in \(O(x,y,{\cal P})\), neither of them can be incident only with edges of a hamiltonian cycle \(C_{H,O}\) of \(O(x,y,{\cal P})\) that lie outside \(P\). So there are in principle three ways how \(C_{H,O}\) could pass through \(P\):
**a.)**: by a hamiltonian path of \(P_{-v}\) from \(s\) to \(t\) while the vertex \(v=y\) is incident to two edges of \(C_{H,O}\) not in \(P\),
**b.)**: by a hamiltonian path of \(P\) from \(v\) to \(s\) or to \(t\),
**c.)**: by a hamiltonian path of \(P\) from \(s\) to \(t\).

In all three cases (i), (ii), and (iii) of the lemma, we can get a hamiltonian cycle of \(O(x,y,{\cal P})\) passing \(P\) as described in a.) if we replace the part \(x_{1},x,x_{2}\) in \(C_{H}\) by \(x_{1},s,\ldots,t,x_{2}\) with the middle part the unique hamiltonian path from \(s\) to \(t\) in \(P_{-v}\). So there is always a hamiltonian cycle for case a.), and that cycle is unique due to the two paths in \(P_{-v}\) and outside \(P_{-v}\) being unique. Assume now that \(O(x,y,{\cal P})\) has a hamiltonian cycle passing \(P\) as in case b.) and assume w.l.o.g. that the endpoint is \(s\).
Replacing the part \(y=v,\ldots,s,x_{1}\) by \(y,x,x_{1}\), we get a hamiltonian cycle of \(G\) containing \(e\), which does not exist, as \(C_{H}\) is unique. So a hamiltonian cycle falling into case b.) does not exist. It remains to be shown that case c.) also cannot occur under the additional prerequisites.
**(i)** Assume that \(O(x,y,{\cal P})\) has a hamiltonian cycle passing \(P\) as in case c.). Replacing the part \(x_{1},s,\ldots,t,x_{2}\) (now also containing \(v=y\)) by \(x_{1},x,x_{2}\), we get a cycle in \(G\) missing only \(y\) - that is: a hamiltonian cycle of \(G[V\setminus\{y\}]\), which by assumption does not exist.
**(ii)**: This is a special case of (i). Assume that \(G[V\setminus\{y\}]\) contains a hamiltonian cycle \(C^{\prime}_{H}\). Then \(C^{\prime}_{H}\) passes \(x\) by \(x_{1},x,x_{2}\), but replacing this part by \(x_{1},y,x,x_{2}\) or \(x_{1},x,y,x_{2}\) - depending on whether the triangle is \(x_{1},x,y\) or \(x_{2},x,y\) - we get a hamiltonian cycle of \(G\) containing \(e\), which does not exist, as \(C_{H}\) is unique.
**(iii)**: In this case the prerequisites are exactly that a path as in c.) does not exist. \(\blacksquare\)

**Corollary 3.2**: _(in part already in [3]) Let \(G=(V,E)\) be a graph with a unique hamiltonian path \(P_{H}\) from \(s\in V\) to \(t\in V\). Assume \(x\in V\), \(x\not\in\{s,t\}\) is of degree \(3\) with neighbour \(y\), so that the edge \(\{x,y\}\) is not on \(P_{H}\). Let \({\cal P}=(P,s^{\prime},t^{\prime},v)\) be an H-plugin._

_If at least one of the following four conditions is fulfilled, then \(O(x,y,{\cal P})\) has a unique hamiltonian path \(P_{H,O}\) from \(s\) to \(t\). Except for the edges incident with \(x\), all edges of \(P_{H}\) are also contained in \(P_{H,O}\)._
**(i)**: \(y\not\in\{s,t\}\)_, and_ \(G[V\setminus\{y\}]\) _has no hamiltonian path from_ \(s\) _to_ \(t\)_._
**(ii)**: \(\{x,y\}\) _lies in a triangle._
**(iii)**: \({\cal P}\) _is a strong H-plugin._
**(iv)**: \(y\in\{s,t\}\)_._

**Proof:** Adding a new vertex to \(G\) and connecting it with \(s\) and \(t\), the resulting graph \(G^{\prime}\) has a unique hamiltonian cycle if and only if \(G\) has a unique hamiltonian path from \(s\) to \(t\). Applying Lemma 3.1 to \(G^{\prime}\) we get the results. Case (iv) follows by case (i) of Lemma 3.1. We want to splice not only one edge in a graph \(G\), but each edge in some set of edges. This is in general not possible if the edges only satisfy (i) for \(G\): if \(z\) is a vertex, so that \(G[V\setminus\{z\}]\) has no hamiltonian cycle or hamiltonian path from \(a\) to \(b\), it is possible that after splicing an edge \(\{x,y\}\) not even close to \(z\), the result \(O(x,y,\mathcal{P})\) has a hamiltonian path or cycle in the graph with \(z\) removed. If on the other hand we have a set \(E_{O}\) of candidate edges \(\{x_{1},y_{1}\},\ldots,\{x_{k},y_{k}\}\) to be spliced with different \(x_{i}\) in different triangles, or the \(y_{i}\) are one of the starting points \(s,t\) of the unique hamiltonian path, these properties are preserved after splicing an edge in \(E_{O}\). This implies that in that situation we can apply the splicing operation also with a weak H-plugin to all edges simultaneously or in any order and still draw the conclusions of Lemma 3.1 or Corollary 3.2. Let \(G=(V,E)\) be a graph with \(s,t\in V\), \(v\not\in V\). Assume that its maximum degree is 4, that there are \(d-2\geq 2\) vertices of degree 3 - among them \(s\), and that \(t\) is the unique vertex of degree 2.
We define \(W(G)\) as the graph obtained by adding \(v\) and connecting it to \(t\) and to all vertices of degree 3 except \(s\), or formally: \(W(G)=(V_{W},E_{W})\) with \(V_{W}=V\cup\{v\}\), \(E_{W}=E\cup\{\{v,w\}\mid w\neq s,\deg(w)=3\}\cup\{\{v,t\}\}\). We call \(G\) a _\(d\)-seed_ if there is a unique hamiltonian path from \(s\) to \(t\) in \(G\) and if \(W(G)\) together with a new vertex \(v^{\prime}\) connected to \(v,s\), and \(t\) is 3-connected. The last property guarantees that 3-connectivity is preserved when \(W(G)\) is used for splicing an edge in a 3-connected graph. An example of a 10-seed is given in Figure 4. As the number of vertices of odd degree must be even, \(d\)-seeds do not exist for odd \(d\).

Figure 4: A 10-seed with the unique hamiltonian path \(1,2,3,\ldots,11\) from \(s=1\) to \(t=11\), drawn as a minimum genus embedding on the torus. As usual, opposite sides have to be identified. The 10-seed has 2 vertices of degree 4, 8 vertices of degree 3 – among them \(s\) – and one vertex of degree 2 – the vertex \(t\). The uniqueness of the hamiltonian path has been checked by computer, but as the graph is small, it can also be confirmed by hand.

**Remark 3.3**: _Let \(P=(V,E)\) be a graph with a unique hamiltonian path from \(s\in V\) to \(t\in V\), \(V^{\prime}\subseteq V\) and \(v\not\in V\). Then we have:_

**(i)**: _The graph_ \((V\cup\{v\},E\cup\{\{v,w\}\mid w\in V^{\prime}\})\) _together with the distinguished vertices_ \(s,t,v\) _is an H-plugin._

**(ii)**: _If_ \(V^{\prime}=\{x,y\}\) _and_ \(x,y\) _are the endpoints of an edge not on the unique hamiltonian path from_ \(s\) _to_ \(t\)_, then_ \(P^{str}=(V\cup\{v\},E\cup\{\{v,x\},\{v,y\}\})\) _is a strong H-plugin._

This remark follows immediately from the definitions of H-plugin and strong H-plugin and the fact that a hamiltonian path from \(s\) to \(t\) containing \(v\) would imply a hamiltonian path in \(P\) containing the edge \(\{x,y\}\).

We will first prove the existence of certain weak H-plugins, use those to construct strong H-plugins, and use the strong H-plugins to construct uniquely hamiltonian graphs with certain sets of degrees.

**Lemma 3.4**: _Let \(d_{0}\in\mathbb{N},d_{0}\geq 4\). If there is a \(d_{0}\)-seed \(S_{d_{0}}\), then_

**(i)**: _For each even_ \(d\geq d_{0}\) _there are infinitely many_ \(d\)_-seeds, so there are_ \(d\)_-seeds with an arbitrarily large number of vertices of degree_ \(4\)_._

**(ii)**: _For each even_ \(d\geq d_{0}\) _there are infinitely many weak H-plugins_ \(\mathcal{P}_{d}\)_, so that after being used for splicing an edge_ \(\{x,y\}\) _with_ \(\deg(x)=3\) _and_ \(\deg(y)\in\{3,4\}\)_, there is one vertex in the copy of_ \(\mathcal{P}_{d}\) _with degree_ \(d\) _if_ \(\deg(y)=3\)_, resp._ \(d+1\) _if_ \(\deg(y)=4\)_, and all the rest have degree_ \(4\)_. If applied to an edge in a_ \(3\)_-connected graph, the result after splicing with_ \(\mathcal{P}_{d}\) _is also_ \(3\)_-connected._

**Proof:**: **(i)** We can replace cubic vertices different from \(s\) by triangles with \(3\) cubic vertices exactly \(\frac{d-d_{0}}{2}\) times and get graphs \(S_{d}\) that still have a unique hamiltonian path between \(s\) of degree \(3\) and \(t\) of degree \(2\), and that then have \(d-2\) vertices of degree \(3\). The fact that \(W(S_{d})\) is \(3\)-connected follows easily from the \(3\)-connectivity of \(W(S_{d_{0}})\).
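Both defining properties of a \(d\)-seed are mechanically checkable on graphs as small as the one in Figure 4. The following is a minimal Python sketch (our own helper, not code from the paper): graphs are stored as dicts mapping vertices to sets of neighbours, \(W(G)\) is built exactly as in the definition, and hamiltonian \(s\)-\(t\) paths are counted by plain backtracking. The 3-connectivity condition on \(W(G)\) plus \(v^{\prime}\) is not checked here.

```python
def W(G, s, t, v="v"):
    """Add the new vertex v, joined to t and to every degree-3 vertex != s."""
    H = {u: set(nb) for u, nb in G.items()}
    H[v] = {t} | {u for u in G if u != s and len(G[u]) == 3}
    for u in H[v]:
        H[u].add(v)
    return H

def count_ham_paths(G, s, t, limit=2):
    """Count hamiltonian paths from s to t by backtracking,
    stopping as soon as `limit` of them have been found."""
    n = len(G)
    found = 0
    def extend(u, seen):
        nonlocal found
        if found >= limit:
            return
        if len(seen) == n:
            found += (u == t)
            return
        for w in G[u]:
            if w not in seen:
                extend(w, seen | {w})
    extend(s, {s})
    return found

# G is a candidate d-seed iff count_ham_paths(G, s, t) == 1
# (the 3-connectivity requirement must be verified separately).
```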
We can now extend those \(d\)-seeds \(S_{d}\) to \(d\)-seeds \(S_{d}^{\prime}\) with arbitrarily many vertices of degree \(4\) by connecting \(s\) and \(t\) to an arbitrarily long zigzag path between two linear paths ending in a new vertex \(s^{\prime}\) of degree \(3\) and a new vertex \(t^{\prime}\) of degree \(2\). This operation is displayed in Figure 5. It only increases the number of vertices of degree \(4\), and \(W(S_{d}^{\prime})\) will also be \(3\)-connected. As a hamiltonian path from \(s^{\prime}\) to \(t^{\prime}\) in \(S_{d}^{\prime}\) would contain disjoint paths to \(s\) and \(t\), these can only be the left and right linear path, so the uniqueness of the hamiltonian path from \(s^{\prime}\) to \(t^{\prime}\) follows immediately from the uniqueness of the hamiltonian path in \(S_{d}\).

Figure 5: Extending a \(d\)-seed by increasing the number of vertices of degree \(4\). The number of vertices of degree \(4\) can be increased by steps of \(1\) vertex.

**(ii)**: For a \(d\)-seed \(S_{d}\), \(W(S_{d})\) is by definition a weak H-plugin. As the vertex \(v\) in \(W(S_{d})\) has degree \(d-2\), after using \(W(S_{d})\) for splicing an edge \(\{x,y\}\), \(v=y\) has degree \(d+\deg(y)-3\), and as \(s\) and \(t\) have degree \(3\) in \(W(S_{d})\) and get an additional neighbour during splicing, all other vertices different from \(v\) have degree \(4\) after splicing. The \(3\)-connectivity of the result follows by elementary arguments. \(\blacksquare\)

We will now use the splicing operation and the weak H-plugins to prove the existence of certain strong H-plugins:

**Lemma 3.5**: _Let \(d_{0}\in\mathbb{N},d_{0}\geq 4\). If there is a \(d_{0}\)-seed \(S_{d_{0}}\), then for each (not necessarily even) \(d\geq d_{0}\) there are infinitely many strong H-plugins \(\mathcal{P}_{d}^{str}\), so that when applied for splicing an edge with both endpoints of degree \(3\), in the copy of \(\mathcal{P}_{d}^{str}\) there are at most \(8\) vertices of degree \(d\) and all the other vertices have degree \(4\)._

**Proof:**: Assume that an even \(d\geq d_{0}\) is given. We will prove the existence of those strong H-plugins for \(d\) and \(d+1\). Figure 6 shows the graph \(P^{-}\) with a unique hamiltonian path \(1,2,\ldots,15\) from \(s=1\) to \(t=15\) (given in [3]). Edges with both endpoints of degree \(3\) to which the splicing operation with a weak H-plugin can be applied in a way that there is still a unique hamiltonian path between the endpoints are drawn as arrows pointing at the vertices which can or must be chosen as the vertex \(y\) in the operation. If we splice these edges with a weak H-plugin from Lemma 3.4, we get a graph with \(5\) vertices of degree \(d\), \(2\) vertices (the vertices \(5\) and \(11\)) of degree \(3\), and all other vertices of degree \(4\). Due to Corollary 3.2, this graph still has a unique hamiltonian path from \(s\) to \(t\) not containing the edge \(\{5,11\}\).

Figure 6: The graph \(P^{-}\) from [3], which has two hamiltonian cycles: \(1,2,3,\ldots,15\) and \(1,2,3,4,5,11,12,13,14,15,6,7,8,9,10\). As only one of them contains the edge \(\{1,15\}\), it has a unique hamiltonian path \(1,2,\ldots,15\) from \(s=1\) to \(t=15\). Edges with both endpoints of degree \(3\) to which the splicing operation with a weak H-plugin can be applied while the uniqueness of the hamiltonian path is preserved are drawn as arrows pointing at the vertices which can or must be chosen as \(y\).
If we remove the edge between \(s\) and \(t\), add a new vertex \(v\), and connect it to the vertices \(5\) and \(11\), due to Remark 3.3 we get a strong H-plugin \({\cal P}^{str}_{d}\). The vertices \(s\) and \(t\) now have degree \(d-1\), so when applied in a splicing operation the degree is again \(d\), \(v\) has degree \(2\), and all other vertices have degree \(4\) or \(d\). If we apply the \({\cal P}^{str}_{d}\)-splice to an edge with both endpoints of degree \(3\), one of them is deleted and the other one is identified with \(v\) and gets degree \(4\). As there are infinitely many H-plugins \({\cal P}_{d}\) with \(1\) vertex of degree \(d\), we also get infinitely many strong H-plugins \({\cal P}^{str}_{d}\), which all have \(5\) vertices of degree \(d\) after being used for splicing an edge with both endpoints of degree \(3\).

The graph \(P^{-}_{ex}\) in Figure 8 is constructed from the graph \(P^{-}\) in Figure 6 by applying the operation described in Figure 7 to three triangles. Any hamiltonian cycle or path traversing the part on the right hand side of Figure 7 in another way than displayed by the bold edges would imply the existence of a hamiltonian path traversing the part on the left in another way, which does not exist in the parts of \(P^{-}\) to which it is applied. So \(P^{-}_{ex}\) also has a unique hamiltonian path from \(s\) to \(t\), in this case with \(s=1\) and \(t=21\). Note that all vertices of degree \(3\) in the modified part are adjacent to a vertex of degree \(4\) by an edge not on the hamiltonian path and contained in a triangle that contains no other degree-\(3\) vertex of an edge that is to be spliced.

If we use a weak H-plugin \({\cal P}_{d}\) for splicing edges of \(P^{-}_{ex}\) in the way described in Figure 8, all vertices but \(1,7,15\), and \(21\) have degree \(4\) or \(d+1\). The vertices \(s=1\) and \(t=21\) have degree \(d\), so when used as \(s\) and \(t\) in a splicing operation, they get degree \(d+1\) too. Vertices \(7\) and \(15\) have degree \(3\) and are adjacent by an edge not on the unique hamiltonian path from \(s\) to \(t\). So we can again add a new vertex \(v\) and connect it to the vertices \(7\) and \(15\), which then have degree \(4\), and get a strong H-plugin \({\cal P}^{str}_{d+1}\), so that when used for splicing an edge with both endpoints of degree \(3\) the copy of \({\cal P}^{str}_{d+1}\) contains \(8\) vertices of degree \(d+1\) and all other vertices have degree \(4\). Also in this case we can use infinitely many H-plugins \({\cal P}_{d}\), each producing one vertex of degree \(d+1\). \(\blacksquare\)

**Lemma 3.6**: _For each \(k\in\mathbb{N}\) there are \(3\)-connected uniquely hamiltonian graphs \(G_{k}=(V_{k},E_{k})\) with \(M_{deg}(G_{k})=\{3,4\}\), so that the edges not on the hamiltonian cycle form a 2-regular subgraph containing all vertices of degree \(4\) and a matching of size at least \(k\) containing all vertices of degree \(3\)._

**Proof:** We can again apply the technique from [6] also used in the proof of Theorem 2.2 to obtain a uniquely hamiltonian graph from a graph with two hamiltonian cycles. This time we apply it to \(P^{-}\) and an arbitrary cubic vertex in \(P^{-}\) that is traversed by the two hamiltonian cycles in two different ways.

Figure 7: If a unique hamiltonian cycle or path traverses the triangle as described by the bold edges on the left hand side, then after extending the graph, the part on the right hand side can only be traversed by a hamiltonian cycle or path as given by the bold edges.
As in both hamiltonian cycles the vertices of degree \(4\) are traversed in a way such that the edges not on the hamiltonian cycle and incident with the \(4\)-valent vertices form a triangle, the result will in each case have a unique hamiltonian cycle with two triangles of edges not on the hamiltonian cycle containing all \(6\) vertices of degree \(4\). As each cubic vertex has exactly one edge not on the hamiltonian cycle, these edges form the required matching. Starting from this graph, we can replace vertices of degree \(3\) by triangles to increase the number of cubic vertices and therefore also the size of the matching until we have a matching of size at least \(k\). \(\blacksquare\)

We now get the following theorem as an immediate consequence:

**Theorem 3.7**: _If there is a \(4\)-seed \(S_{4}\), or a \(d_{1}\)-seed \(S_{d_{1}}\) for \(d_{1}>4\), then any set \(M=\{d_{0}=4,d_{1},d_{2},\ldots,d_{k}\}\) with \(4<d_{1}<d_{2}<\cdots<d_{k}\) is uhc-realizable. If \(k\geq 1\), \(M\) is also strongly uhc-realizable._

**Proof:**: Given the set \(M=\{4,d_{1},d_{2},\ldots,d_{k}\}\), we can take any uniquely hamiltonian graph \(G_{k^{\prime}}\) with \(k^{\prime}\geq k\), \(k^{\prime}>0\) and the properties of Lemma 3.6 and splice the edges of the matching using each of the strong H-plugins \(\mathcal{P}_{d_{1}}^{str},\ldots,\mathcal{P}_{d_{k}}^{str}\) (or \(\mathcal{P}_{4}\) if \(k=0\)) at least once. This removes all vertices of degree \(3\) or increases their degree to \(4\). Furthermore, outside the H-plugins only degree \(4\) occurs and in the H-plugins exactly all vertex degrees in \(M\) occur, while the graph still has a unique hamiltonian cycle.

Figure 8: The graph \(P_{ex}^{-}\), which is an extension of the graph \(P^{-}\) in Figure 6 in a way that it has a unique hamiltonian path \(1,2,3,\ldots,21\) between \(s=1\) and \(t=21\) and that except for \(6\) vertices, all vertices of degree \(3\) have a neighbour of degree \(4\) along an edge not on the hamiltonian path (not counting the edge \(\{1,21\}\)). Edges to which the splicing operation with a weak H-plugin can be applied while the uniqueness of the hamiltonian path is preserved are drawn as arrows pointing at the vertices which must be chosen as \(y\).

To show that \(M\) is strongly uhc-realizable for \(k\geq 1\), assume a partition \(D_{1},D_{2}\) to be given. If \(D_{2}=\emptyset\), to construct the sequence of graphs we can use increasingly large strong H-plugins, keeping the numbers of vertices with degree \(d\) constant for \(d\in\{d_{1},\ldots,d_{k}\}\). If \(D_{2}\neq\emptyset\), we can use graphs \(G_{k^{\prime}}\) for increasingly large \(k^{\prime}\) and use the same arbitrarily large number of copies of strong H-plugins \(\mathcal{P}_{d}^{str}\) for each \(d\in D_{2}\). \(\blacksquare\)

The proof is chosen for its simplicity and not for giving the optimal values for the constants \(c_{1},c_{2}\) in the definition of strong realizability. The constants also depend on \(M\) and the partition. In order to get the best constant \(c_{1}\) that can be obtained by this construction or to determine upper bounds for the number of vertices of a smallest graph \(G\) realizing \(M\), in some cases with odd degrees it would be better to use one or two copies of \(P_{ex}^{-}\) instead of \(P^{-}\) for the construction of the \(G_{k}\) and to analyze which edges can be spliced by a weak H-plugin instead of a strong H-plugin. We leave this to the reader.
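All constructions in this section are assembled from the splice operation defined at the beginning of the section. As a concrete illustration, here is a minimal Python sketch of \(O(x,y,\mathcal{P})\) under our own conventions (adjacency as dicts of sets, labels \((p,\text{'P'})\) for the copy of \(P\)); the choice of which neighbour acts as \(x_{1}\) is arbitrary, matching the remark that \(O(x,y,\mathcal{P})\) is only determined up to this choice.

```python
def splice(G, x, y, P, s, t, v):
    """Return the P-splice O(x, y, (P, s, t, v)) of G at the edge {x, y}.

    Assumes deg_G(x) == 3 and y in G[x]: x is removed, its remaining
    neighbours x1, x2 are joined to s and t in a fresh copy of P, and the
    vertex v of that copy is identified with y.
    """
    assert len(G[x]) == 3 and y in G[x]
    x1, x2 = (u for u in G[x] if u != y)          # arbitrary x1/x2 choice
    lab = lambda p: y if p == v else (p, "P")     # relabel the copy of P

    H = {u: set(nb) - {x} for u, nb in G.items() if u != x}
    for p, nbs in P.items():                      # insert the copy of P
        H.setdefault(lab(p), set()).update(lab(q) for q in nbs)
    H[x1].add(lab(s)); H[lab(s)].add(x1)          # edge x1 -- s
    H[x2].add(lab(t)); H[lab(t)].add(x2)          # edge x2 -- t
    return H
```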
The graph in Figure 4 now immediately implies the main result for minimum degree \(4\):

**Theorem 3.8**: _Any set \(M=\{4,d_{1},d_{2},\ldots,d_{k}\}\) with \(10\leq d_{1}<d_{2}<\cdots<d_{k}\) and \(k\geq 1\) is strongly uhc-realizable._

The formulation of the main result as a direct consequence of Theorem 3.7 is in order to make Theorem 3.7 citable in case a \(d\)-seed for \(d<10\) is discovered, instead of referring to the fact that the proofs can be completely analogously repeated with the new \(d\)-seed. Due to Theorem 3.7 the existence of a \(4\)-seed implies the existence of a \(3\)-connected uniquely hamiltonian \(4\)-regular graph, but in fact also the other direction is correct:

**Corollary 3.9**: _There is a \(3\)-connected uniquely hamiltonian \(4\)-regular graph if and only if there is a \(4\)-seed. In that case there are infinitely many \(3\)-connected uniquely hamiltonian \(4\)-regular graphs and every set \(M\) of natural numbers \(d\geq 2\) with \(4\in M\) and \(|M|\geq 2\) is strongly uhc-realizable._

**Proof:**: From a \(3\)-connected uniquely hamiltonian \(4\)-regular graph \(G\) we can get a \(4\)-seed by choosing a vertex of \(G\) as \(s\), subdividing an edge on the hamiltonian cycle incident with \(s\) with a new vertex \(t\), and removing an edge incident with \(s\) that is not on the hamiltonian cycle. The rest of the statement is a direct consequence of Remark 2.1, Theorem 2.2, and Theorem 3.7. \(\blacksquare\)

In [2] Fleischner proved that there are \(4\)-regular uniquely hamiltonian multigraphs and in fact \(2k\)-regular uniquely hamiltonian multigraphs with arbitrarily high degree. Another direct consequence of Lemma 3.6 is the following simple generalisation:

**Corollary 3.10**: _For a set \(M=\{d_{1},\ldots,d_{k}\}\) with \(2\leq d_{1}<d_{2}<\cdots<d_{k}\) of natural numbers there is a uniquely hamiltonian multigraph \(G\) with \(M_{deg}(G)=M\) if and only if \(M\) contains an even number. In that case there are infinitely many \(3\)-connected uniquely hamiltonian multigraphs \(G\) with \(M_{deg}(G)=M\)._

**Proof:**: In [7] it is shown that uniquely hamiltonian multigraphs do not exist if all degrees are odd, so we only have to prove that they do exist if an even degree is contained. For \(2\in M\) this is even proven for simple graphs in Remark 2.1, so assume that all elements of \(M\) are at least 3. Taking graphs \(G_{k^{\prime}}\) with \(k^{\prime}\geq k\) from Lemma 3.6 with the matching and 2-factor with the described properties, we can multiply the edges of the 2-factor containing the vertices of degree 4 until these vertices all have an even degree contained in \(M\). For each remaining degree \(d_{i}\), we can now choose an edge in the matching and multiply it until its endpoints have degree \(d_{i}\). If there are still vertices of degree 3 left and \(3\not\in M\), we can multiply the corresponding edges of the matching until a degree in \(M\) is reached. \(\blacksquare\)

## 4 Final remarks

In this article we are interested only in uniquely hamiltonian graphs. Nevertheless the method of splicing can also be useful when constructing graphs with few hamiltonian cycles. We will only give a short sketch of the possibilities. We will not formally state results, as we do not give formal proofs. The following statements should be considered as preliminary as long as no proofs are given somewhere.
If we allow \(n_{s}\) hamiltonian paths from \(s\) to \(t\) in a seed and \(n_{G}\) hamiltonian cycles in a graph \(G\), none of them containing the edge \(e\) of \(G\), then with the otherwise same prerequisites of Lemma 3.1, the proof can be repeated, this time showing that the result after splicing has \(n_{s}\cdot n_{G}\) hamiltonian cycles. This implies that for any set \(M\) of natural numbers with minimum 4 there is a constant \(C\) and an infinite series of graphs with degree set \(M\) and at most \(C\) hamiltonian cycles. In fact there should be one constant working for all sets \(M\). The constants we get from our proof using \(P^{-}\) are nevertheless very large and far worse for the 4-regular case than in [9]. For better constants one has to search for starting graphs that need fewer splicing operations, but can have more than one hamiltonian cycle. An example is the construction in [9] proving that there are infinitely many (2-connected) 4-regular graphs with 144 hamiltonian cycles. It was found and proven in a completely different way, but can be interpreted as making use of splicing: The graph in Figure 9(c) has 36 hamiltonian cycles, none of them containing \(\{x,y\}\). Furthermore, after removing \(y\) the graph is non-hamiltonian. The _generalized seed_ (that is: allowing more than one hamiltonian path from \(s\) to \(t\)) in Figure 9(a) has 4 hamiltonian paths from \(s\) to \(t\), so with plugins obtained from it and its extensions, the results of splicing \(\{x,y\}\) have 144 hamiltonian cycles. The generalized seed in Figure 9(b) has 2 hamiltonian paths from \(s\) to \(t\) and would give one vertex of degree 6, so splicing \(\{x,y\}\) would give 72 hamiltonian cycles for the degree set \(M=\{4,6\}\), and replacing a vertex of degree 3 by a triangle also for \(M=\{4,8\}\).

Figure 9: The splicing operation for more than one hamiltonian cycle.

We focused on the 3-connected case and already required the seed to preserve 3-connectivity. Dropping this prerequisite, for all results a version only requiring and implying 2-connectivity could be proven. This might e.g. be interesting for Corollary 3.9. All graphs explicitly given in the previous sections can be inspected at and downloaded from the database _House of Graphs_ [1]. They can be found by searching for the keyword UHG_degree_sequence. All properties about small graphs stated here have been checked by computer, but can easily be confirmed by hand.
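The cycle counts mentioned above (36, 72, and 144 hamiltonian cycles) and the uniqueness claims for the small graphs are the kind of facts one verifies by exhaustive search. A minimal sketch of such a check in Python (our own code, not the program used by the authors):

```python
def count_ham_cycles(G):
    """Count hamiltonian cycles of a graph (dict of adjacency sets) by
    backtracking. Every cycle passes a fixed root and is traversed in
    both directions, so the raw count is halved at the end."""
    root = next(iter(G))
    n = len(G)
    count = 0
    def extend(u, seen):
        nonlocal count
        if len(seen) == n:
            count += root in G[u]     # close the cycle back to the root
            return
        for w in G[u]:
            if w not in seen:
                extend(w, seen | {w})
    extend(root, {root})
    return count // 2                 # each cycle was found twice

# A graph is uniquely hamiltonian iff count_ham_cycles(G) == 1; the same
# routine can be used to confirm counts such as the 36 cycles of Figure 9(c).
```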
2303.12761
LSTM-based Video Quality Prediction Accounting for Temporal Distortions in Videoconferencing Calls
Current state-of-the-art video quality models, such as VMAF, give excellent prediction results by comparing the degraded video with its reference video. However, they do not consider temporal distortions (e.g., frame freezes or skips) that occur during videoconferencing calls. In this paper, we present a data-driven approach for modeling such distortions automatically by training an LSTM with subjective quality ratings labeled via crowdsourcing. The videos were collected from live videoconferencing calls in 83 different network conditions. We applied QR codes as markers on the source videos to create aligned references and compute temporal features based on the alignment vectors. Using these features together with VMAF core features, our proposed model achieves a PCC of 0.99 on the validation set. Furthermore, our model outputs per-frame quality that gives detailed insight into the cause of video quality impairments. The VCM model and dataset are open-sourced at https://github.com/microsoft/Video_Call_MOS.
Gabriel Mittag, Babak Naderi, Vishak Gopal, Ross Cutler
2023-03-22T17:14:38Z
http://arxiv.org/abs/2303.12761v1
# LSTM-Based Video Quality Prediction Accounting for Temporal Distortions in Videoconferencing Calls

###### Abstract

Current state-of-the-art video quality models, such as VMAF, give excellent prediction results by comparing the degraded video with its reference video. However, they do not consider temporal distortions (e.g., frame freezes or skips) that occur during videoconferencing calls. In this paper, we present a data-driven approach for modeling such distortions automatically by training an LSTM with subjective quality ratings labeled via crowdsourcing. The videos were collected from live videoconferencing calls in 83 different network conditions. We applied QR codes as markers on the source videos to create aligned references and compute temporal features based on the alignment vectors. Using these features together with VMAF core features, our proposed model achieves a PCC of 0.99 on the validation set. Furthermore, our model outputs per-frame quality that gives detailed insight into the cause of video quality impairments. The VCM model and dataset are open-sourced at [https://github.com/microsoft/Video_Call_MOS](https://github.com/microsoft/Video_Call_MOS).

Gabriel Mittag, Babak Naderi, Vishak Gopal, Ross Cutler

Microsoft Corp.

Keywords: Video quality, machine learning, QoE

## 1 Introduction

One of the most important performance metrics for evaluating a videoconferencing (VC) system is the perceived video quality of a call. The ground truth quality can only be assessed in subjective experiments, for example, according to ITU-T (International Telecommunication Union) Recommendation P.910 [1], in which participants are asked to view sample videos and rate their quality. The average rating across all participants gives the so-called Mean Opinion Score (MOS). Recently, crowdsourcing has become increasingly popular for conducting video-quality experiments. Implementations, such as P.910 Crowd [2], allow for conducting accurate video-quality experiments with high reproducibility. Because subjective experiments are time-consuming and costly, signal-based prediction models have been developed that are able to predict the quality of a given video. The most commonly used video quality metrics are PSNR (Peak Signal to Noise Ratio), SSIM [3], MS-SSIM [4], and VMAF [5], which are computed on a frame basis, with subsequent averaging across all frame scores to give the overall quality. Although other pooling mechanisms have been studied in order to take recency effects into account, in most studies mean pooling achieved the best performance [6, 7]. VMAF is the only one of these metrics that includes a temporal component, which accounts for temporal masking effects. However, video transmitted during VC calls can be affected by a number of temporal distortions that are perceived as frame freezes, frame skips, frame rate variations (e.g., video is played back faster after delayed packets arrive), or a generally low frame rate. These kinds of distortions cannot be covered by the aforementioned quality metrics, as they are mostly frame-based and do not take time dependencies into account. More recently, deep learning-based video quality models such as VSFA [8], DeepVQA [9], CompressedVQA [10], and C3DVQA [11] have been introduced. Although some of them take temporal-memory effects or temporal masking into account and apply more advanced pooling mechanisms, they are neither designed for nor trained on videos with temporal distortions as they appear in VC calls.
In general, most of the temporal video quality modeling has been focused on streaming applications [12] that are subject to different degradation types than VC systems. The MOVIE [13], ST-MAD [14], and STRRED [15] models consider visual perception of motion artifacts. VQM_VFD [16] takes the impact of variable frame delays (including freezes) into account by combining 8 different parameters with a shallow neural network. ITU-T Rec. P.1203 [17] is a parametric bitstream-based audiovisual quality prediction model for adaptive streaming services. ATLAS [18] includes rebuffering-aware and memory-driven effects. The model in [19] also considers rebuffering in video streams by applying recurrent and dynamic neural networks. ST-GREED [20] applies entropy variations between reference and distorted videos to consider quality variations caused by frame rate changes. Another challenge for full-reference VC quality prediction is the alignment of the received video with the original video. In the VQM_VFD model, a heuristic algorithm based on pixel-by-pixel comparisons is applied, which is prone to alignment errors. Instead, we apply markers to the source videos, which allows for reading the corresponding reference frame index from the degraded frames. To the best of our knowledge, this is the first work in which the alignment vector is leveraged to model the impact of temporal distortions as they occur during video calls with a data-driven approach. To this end, we use a recurrent neural network and directly input features based on the alignment vector as a sequence, fused together with image quality metrics.

## 2 Dataset

To train and evaluate the presented model, a new dataset with recordings from live Microsoft Teams calls was created. The recorded videos are based on ten source videos. Eight of the videos contain one person speaking into the camera and two of the videos contain two people talking to each other. The video resolution of the source videos is 1920x1080 with a frame rate of 30 FPS. In a preprocessing step, two QR codes, containing the original frame index, were drawn onto the source videos to make an alignment between degraded and reference videos possible (see Subsection 3.1). To create the dataset, calls between two machines and various emulated real-call network scenarios were conducted. The scenarios comprised different fixed bandwidths, burst loss, random loss, fluctuating bandwidth, cross traffic, and combinations of these conditions. During the call, the video bitrate and resolution are adapted depending on the network condition. As a consequence, the quality of the received videos changes over time and may contain different bitrates, switches between video resolutions, frame rate variation, and frozen or skipped frames. The degraded videos were captured on the receiving side with 30 FPS and cropped from the call duration of 2 minutes to clips of 6 seconds, where a random segment of the call was chosen as the clip. In that way, 1628 videos with 83 different network profiles were collected. The videos were annotated for their overall quality on an ACR (Absolute Category Rating) scale according to P.910 [1] via crowdsourcing using the P.910 Toolkit\({}^{1}\) [2]. On average, 17 valid votes were collected per clip.
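For illustration, a marker preprocessing step like the one described above can be sketched in a few lines of Python with OpenCV and the `qrcode` package. File names, marker size, and placement below are illustrative assumptions, not the exact settings used for the dataset.

```python
import cv2
import numpy as np
import qrcode

def add_markers(src="source.mp4", dst="source_marked.mp4", size=120):
    """Draw two identical QR codes (top-left, bottom-right) encoding the
    frame index onto every frame of the source video."""
    cap = cv2.VideoCapture(src)
    fps = cap.get(cv2.CAP_PROP_FPS)
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    out = cv2.VideoWriter(dst, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        qr = qrcode.QRCode(border=1)
        qr.add_data(str(idx))
        qr.make(fit=True)
        m = np.array(qr.get_matrix(), dtype=np.uint8)   # 1 = dark module
        tile = cv2.resize((1 - m) * 255, (size, size),
                          interpolation=cv2.INTER_NEAREST)
        tile = cv2.cvtColor(tile, cv2.COLOR_GRAY2BGR)
        frame[:size, :size] = tile          # top-left marker
        frame[-size:, -size:] = tile        # bottom-right marker
        out.write(frame)
        idx += 1
    cap.release()
    out.release()
```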
To split the dataset, the videos that are based on two of the ten source videos were selected for validation, resulting in 1299 training set videos and 329 validation set videos, where the training and validation sets share the same network profiles but have different source videos.

Footnote 1: [https://github.com/microsoft/P910](https://github.com/microsoft/P910)

## 3 Method

### Alignment

Most video quality models compare reference video frames with degraded frames, but this is challenging for live video calls as recordings are not aligned with the original reference video. We added QR codes to the source videos to enable alignment, but reading the codes in poor network conditions can be difficult. To improve detection, we added two identical QR codes to each video, one in the top-left and one in the bottom-right corner. To create an aligned reference video, the QR codes from the degraded videos are read for each frame. This gives an index vector \(r(i)\) with the reference frame indices for each degraded frame. In the case of frozen frames the vector contains consecutive repetitions of the same reference frame index; in the case of frame skips the vector is missing certain reference frame indices. Then we create a new reference video with the frame order given by \(r(i)\).

### Features

Because VMAF is widely used and has been shown to give robust quality predictions for videos, we use the same input features as VMAF to cover the image quality of the individual frames. However, these features cannot cover temporal quality degradation caused by freezes/skips. For this reason, we add two additional frame freeze features that are based on the reference frame indices \(r(i)\).

**VMAF core features**

* VIF: Image quality metric that measures the loss of information fidelity between degraded and reference images. VMAF uses a modified version where the loss of fidelity in each scale is included as an elementary metric [5].
* ADM: Image quality metric that measures the loss of details that affect the content visibility and additionally the redundant impairment that distracts the viewer's attention.
* Motion: Temporal metric that measures the temporal difference between adjacent frames by calculating the average absolute pixel difference for the luminance component.

**Frame freeze features**

* Skip: Measures the difference of reference frame indices between consecutive frames as follows: \(s_{i}=r_{i}-r_{i-1}\) for \(i\in[1,N-1]\) with \(s_{0}=0\), where \(r(i)\) is the reference indices vector and \(N\) is the number of frames of the degraded video.
* Freeze: Simple measure to capture the number of consecutive frozen frames according to Algorithm 1.

### Model

As a model, we apply a simple LSTM (Long Short-Term Memory) network, as it is well-suited for modeling sequences and time dependencies. We use 6 layers with 256 hidden units each, although in our experiments smaller models (e.g., with 1 or 2 layers and 128 hidden units) reached similar performance. The model architecture is shown in Figure 1. Besides predicting the overall video quality, another goal of this work is to output the quality per frame, including the impact of temporal effects, such as frozen frames. In order to force the model to output a single quality value per frame, instead of applying time pooling first and then using a linear layer for the final output, we apply the linear layer directly after the LSTM to reduce the feature size from 256 to 1. Then we apply an average pooling layer to compute the final overall video quality.
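A condensed PyTorch sketch of the temporal features and the model just described is given below. The freeze counter is our reading of what Algorithm 1 computes (the algorithm itself is not reproduced here), the feature count of 13 (11 VMAF core features plus Skip and Freeze) is an assumption, and the class and function names are our own.

```python
import numpy as np
import torch
import torch.nn as nn

def temporal_features(r):
    """Skip and Freeze features from the alignment vector r of reference
    frame indices; Freeze counts the consecutive repetitions so far."""
    r = np.asarray(r, dtype=np.int64)
    skip = np.diff(r, prepend=r[0])                  # s_0 = 0
    freeze = np.zeros_like(skip)
    for i in range(1, len(r)):
        freeze[i] = freeze[i - 1] + 1 if r[i] == r[i - 1] else 0
    return skip, freeze

class VideoCallMOS(nn.Module):
    """LSTM -> per-frame linear head -> average pooling, as in Figure 1."""
    def __init__(self, n_features=13, hidden=256, layers=6):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=layers,
                            batch_first=True)
        self.head = nn.Linear(hidden, 1)             # one value per frame

    def forward(self, x):                            # x: (batch, T, features)
        h, _ = self.lstm(x)
        per_frame = self.head(h).squeeze(-1)         # (batch, T)
        return per_frame, per_frame.mean(dim=1)      # frame MOS, overall MOS
```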
In this way, the common problem of choosing a suitable time pooling method is also solved, as the LSTM automatically learns to weigh the impact of the individual frames on the overall quality.

## 4 Results and Discussion

In this section, the results of the model on the validation set are presented. At first, we present the results compared to the default VMAF model. Then, we present the results of an ablation study, where we compare the results for different input features and a retrained VMAF model. All of the presented scores were mapped to the subjective MOS with a sigmoid function. The mapped values were then used for the plots and for computing the metrics PCC (Pearson's correlation coefficient) and RMSE (root-mean-square error). The mapping is particularly necessary for VMAF, SSIM, and PSNR. For example, the scores of the default VMAF model range between 0-100, while the videos in our dataset are rated in MOS between 1-5. All of the presented results are computed on a per-file basis.

### Validation set results

Figure 2 shows the scatter plots of the predicted values for the default VMAF model _(a)_ and the proposed LSTM with VMAF and temporal features as input _(b)_. The proposed model outperforms the default VMAF model with a PCC increase from 0.94 to 0.99. The plots also show that most of the VMAF outliers are caused by overestimating the video quality, presumably because these video samples contain video freezes that are not captured by VMAF. To further analyze this presumption, we plot the per-frame video quality over time for two example videos in Figure 3. The black line represents the overall MOS rating from the crowdsourcing experiment, the orange line shows the per-frame VMAF score, and the blue line shows the per-frame quality of the proposed LSTM model. In addition to the quality predictions, we also plot the frame freeze and skip features, which are shown in frames divided by 10 (i.e., in the plot a "freeze" value of 1 represents 10 consecutive frozen frames, corresponding to 0.33 seconds). In the top graph, a video without any major freezes or frame skips is shown. Both VMAF and the LSTM predict the quality to be close to the ground truth MOS. The bottom graph shows a video with two video freezes. VMAF clearly overestimates the quality of this video. The LSTM begins with a similar prediction as VMAF but lowers the quality prediction once the first frame freeze starts. After the freeze has ended and the video continues playing, the LSTM increases its quality prediction again. By considering the freezes, the model reaches an average video quality prediction close to the ground truth MOS. These results show that the per-frame quality prediction of the proposed model works very well and can be used to analyze the root cause of videos with poor quality.

Figure 1: Block diagram of the proposed neural network.

Figure 2: Scatterplot showing the ground truth and predicted MOS values per file on the validation set.

Figure 3: Per-frame quality over time for two example videos.

### Ablation study

In this subsection, we study the influence of the different input features on the results and compare the model to a retrained VMAF model. For the retraining of VMAF, we used the default SVR (Support Vector Regression) settings with \(\gamma=0.05\), \(C=4.0\), and \(\nu=0.9\) and the 11 VMAF core features VIF (vif_scale0, vif_scale1, vif_scale2, vif_scale3), ADM (adm2, adm_scale0, adm_scale1, adm_scale2, adm_scale3), and Motion (motion, motion2).
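The mapping-plus-metrics protocol described above can be sketched as follows with scipy; the sigmoid parametrization and starting values are our own choices, not the paper's exact fitting procedure.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import pearsonr

def sigmoid(x, a, b, c, d):
    return a + b / (1.0 + np.exp(-c * (x - d)))

def mapped_metrics(scores, mos):
    """Fit a sigmoid from objective scores to MOS, then report PCC/RMSE."""
    scores, mos = np.asarray(scores, float), np.asarray(mos, float)
    p0 = [mos.min(), mos.max() - mos.min(), 0.1, float(np.median(scores))]
    params, _ = curve_fit(sigmoid, scores, mos, p0=p0, maxfev=20000)
    mapped = sigmoid(scores, *params)
    pcc = pearsonr(mapped, mos)[0]
    rmse = float(np.sqrt(np.mean((mapped - mos) ** 2)))
    return pcc, rmse
```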
Table 1 shows the validation set results for the default and retrained VMAF. We also show the results for the image quality metrics PSNR, SSIM, and MS-SSIM, which are commonly used as baseline metrics. It should be noted that the results for the LSTM show the average PCC and RMSE over three training runs for each feature configuration, where the best checkpoint of each training run was selected. Interestingly, SSIM outperforms MS-SSIM and the default VMAF model with a PCC of 0.951. Retraining VMAF notably improves the performance and increases the PCC from 0.937 to 0.965. The LSTM with the same VMAF core input features (VIF/ADM/Motion) outperforms the retrained VMAF model with a PCC of 0.983, showing that LSTM models are better suited to modeling the temporal distortions in VC calls than the default SVR used by VMAF. It can further be noted that adding any of the temporal features (Motion, Skip, or Freeze) increases the results from a PCC of 0.956, when only the image quality metrics VIF and ADM are applied, to a PCC of around 0.98. The LSTM seems to be able to account for frame freezes or skips with any of the temporal features. The purpose of the VMAF Motion feature is to capture the temporal masking of coding artifacts, and it is calculated solely on the basis of the reference video. In the case of high motion values, VMAF will judge distortions less severely, as the human eye does not perceive some distortions during high motion between frames. However, since we aligned our reference videos to the degraded videos, the reference also contains the frame freezes of the degraded video and thus leads to a value of zero for frozen frames, as the frames are exactly identical. Therefore, the LSTM can use the Motion feature to detect video freezes instead of only using it to capture the temporal masking effect. That being said, the Motion feature is not able to capture frame rate variations. However, in practice, such changes are usually preceded by frame freezes, which already impact the perceived quality. Therefore, no significant performance improvements could be observed on our dataset when the Skip or Freeze features were added. Furthermore, the LSTM could confuse still sequences in a video with frozen frames when Motion is used as the only temporal feature (in cases where the reference alignment is not done via video markers, as markers lead to non-zero Motion values for still sequences). Although the performance difference between the temporal features is only marginal, overall, the best performance can be achieved by including all of them, in particular when considering the RMSE. Adding the Skip and Freeze features decreases the RMSE from 0.218 to 0.206, compared to when only Motion is used. Adding PSNR and SSIM to the LSTM model did not further improve the results.

## 5 Conclusions

We proposed the open-sourced VCM model based on a simple but highly efficient data-driven approach for modeling temporal distortions in VC videos. An open-source dataset with live video calls in 83 different network conditions was created, in which we preprocessed the source videos with QR code markers for the alignment between degraded and reference videos. We show that by using VMAF core features to cover the non-temporal visual quality of the video frames together with temporal features based on the alignment vector, the LSTM learns to model temporal distortions automatically and achieves very high accuracy with a PCC of 0.99 on the validation set.
Our model further outputs per-frame MOS that gives detailed insight into the cause of degraded quality. In future work, the VMAF core features could be replaced with an end-to-end deep learning approach to further improve the results.

\begin{table} \begin{tabular}{l|c|c} \hline \hline **Model** & **PCC** & **RMSE** \\ \hline PSNR & 0.928 & 0.415 \\ SSIM & 0.951 & 0.342 \\ MS-SSIM & 0.946 & 0.360 \\ VMAF & 0.937 & 0.388 \\ VMAF (Retrained) & 0.965 & 0.289 \\ LSTM - VIF / ADM & 0.956 & 0.335 \\ LSTM - VIF / ADM / Motion & 0.983 & 0.218 \\ LSTM - VIF / ADM / Skip / Freeze & 0.984 & 0.213 \\ LSTM - VIF / ADM / Skip & 0.979 & 0.253 \\ LSTM - VIF / ADM / Freeze & 0.983 & 0.213 \\ LSTM - VIF / ADM / Motion / Skip / Freeze & **0.985** & **0.206** \\ LSTM - VIF / ADM / Motion / Skip / Freeze / PSNR / SSIM & 0.983 & 0.211 \\ \hline \hline \end{tabular} \end{table} Table 1: Validation set results for baseline and proposed model using different input features.
2305.16689
Epidemic Transmission Modeling with Fractional Derivatives and Environmental Pathogens
This research presents an advanced fractional-order compartmental model designed to delve into the complexities of COVID-19 transmission dynamics, specifically accounting for the influence of environmental pathogens on disease spread. By enhancing the classical compartmental framework, our model distinctively incorporates the effects of order derivatives and environmental shedding mechanisms on the basic reproduction numbers, thus offering a holistic perspective on transmission dynamics. Leveraging fractional calculus, the model adeptly captures the memory effect associated with disease spread, providing an authentic depiction of the virus's real-world propagation patterns. A thorough mathematical analysis confirming the existence, uniqueness, and stability of the model's solutions emphasizes its robustness. Furthermore, the numerical simulations, meticulously calibrated with real COVID-19 case data, affirm the model's capacity to emulate observed transmission trends, demonstrating the pivotal role of environmental transmission vectors in shaping public health strategies. The study highlights the critical role of environmental sanitation and targeted interventions in controlling the pandemic's spread, suggesting new insights for research and policy-making in infectious disease management.
Moein Khalighi, Faïçal Ndaïrou, Leo Lahti
2023-05-26T07:21:55Z
http://arxiv.org/abs/2305.16689v2
# Fractional model of COVID-19 with pathogens as shedding effects

###### Abstract

To develop effective strategies for controlling the spread of the virus and potential future outbreaks, a deep understanding of disease transmission dynamics is crucial. This study proposes a modification to existing mathematical models used to describe the transmission dynamics of COVID-19 with environmental pathogens, incorporating a variable population, and employing incommensurate fractional order derivatives in ordinary differential equations. Our analysis accurately computes the basic reproduction number and demonstrates the global stability of the disease-free equilibrium point. Numerical simulations fitted to real data from South Africa show the efficacy of our proposed model, with fractional models enhancing flexibility. We also provide reliable values for initial conditions, model parameters, and order derivatives, and examine the sensitivity of model parameters. Our study provides valuable insights into COVID-19 transmission dynamics and has the potential to inform the development of effective control measures and prevention strategies.

keywords: mathematical modeling of COVID-19 pandemics, pathogens, environmental effect, fractional differential equations, numerical simulations.

MSC: 26A33, 34A08, 92D30.

## 1 Introduction

The COVID-19 pandemic has had severe consequences on global public health. To contain the spread of COVID-19 and prevent future viral outbreaks, a comprehensive understanding of the disease transmission dynamics is essential. Several mathematical models have been developed to describe the spread of the virus, including the traditional SIR and SEIR models. In this context, various incidence functions exist to model virus mutations during infection when the dynamics of infectious individuals undergo changes. Examples include the bilinear incidence or law of mass action described in [1] and [2], the saturated incidence function in [3] and [4], and the Beddington-DeAngelis function in [5] and [6]. Other specific nonlinear incidence functions are also used in [7] and [8], and recent developments in this field suggest utilizing Stieltjes derivatives to model time-varying contact rates and other parameters [9]. Although various mathematical models have been developed to describe the transmission of epidemic diseases, their efficacy is limited due to their assumption of uniform mixing of individuals, thereby failing to account for the complex dynamics of the disease, including the transmission of live viruses through contaminated environments. Recent research has attempted to overcome the limitations of traditional models by proposing interventions that account for environmental pathogens [10; 11; 12]. However, while these models have shown promise, there have been challenges in fitting real-world data due to the stringent mathematical conditions required for having positive and bounded solutions [10]. These conditions can sometimes conflict with biologically feasible regions, leading to difficulties in accurately representing the complex dynamics of the disease. To address this issue, our study proposes a modification to the existing model by incorporating a variable population and birth rate, which enables us to better capture the complex and evolving nature of disease spread. Fractional calculus has been identified as a promising approach to improving the efficiency of disease transmission models [13; 14; 15; 16].
By incorporating memory and long-range dependence into the models, fractional calculus offers greater flexibility in capturing the complex and heterogeneous nature of disease dynamics, including demographics, social behaviors, and interventions that affect transmission dynamics. While previous studies have utilized fractional order derivatives for modeling disease transmission and taking into account environment and social distancing [13], issues with mathematical rigor and accuracy of critical values, such as the basic reproduction number (\(R_{0}\)), have been identified. In our study, we address these issues by providing accurate computation of \(R_{0}\) and demonstrating the global stability of the disease-free equilibrium point. Additionally, we employ incommensurate fractional order derivatives in ordinary differential equations (ODEs) to refine the description of memory effects and improve the model's accuracy in capturing the impact of past infections on future transmissions.

The article is structured as follows. Section 2 presents the model structure, which incorporates Caputo fractional derivatives and multiple classes of individuals, including susceptible, exposed, symptomatic, and asymptomatic infectious, recovered, and environmental pathogen compartments, with corresponding coefficients. In Section 3, the mathematical analysis of the model is provided, including the derivation of the basic reproduction number and the study of the disease-free equilibrium. Section 4 presents the numerical findings, including the fitting of the model to real data from South Africa. Additionally, sensitivity analysis is performed to evaluate the effect of various model parameters on the basic reproduction number, which offers valuable insights for designing effective measures to contain the spread of COVID-19.

## 2 The fractional-order model

In this section, we introduce a modified fractional compartmental model of COVID-19 originally proposed in [10], as follows: \[\begin{cases}{}^{C}D_{0+}^{\alpha_{S}}S(t)=\Lambda N(t)-\frac{\beta_{1}S(t)W(t)}{1+\phi_{1}W(t)}-\frac{\beta_{2}S(t)\left(I_{A}(t)+I_{S}(t)\right)}{1+\phi_{2}\left(I_{A}(t)+I_{S}(t)\right)}+\psi E(t)-\mu S(t),\\ {}^{C}D_{0+}^{\alpha_{E}}E(t)=\frac{\beta_{1}S(t)W(t)}{1+\phi_{1}W(t)}+\frac{\beta_{2}S(t)\left(I_{A}(t)+I_{S}(t)\right)}{1+\phi_{2}\left(I_{A}(t)+I_{S}(t)\right)}-\psi E(t)-\mu E(t)-\omega E(t),\\ {}^{C}D_{0+}^{\alpha_{I_{A}}}I_{A}(t)=(1-\delta)\omega E(t)-(\mu+\sigma)I_{A}(t)-\gamma_{A}I_{A}(t),\\ {}^{C}D_{0+}^{\alpha_{I_{S}}}I_{S}(t)=\delta\omega E(t)-(\mu+\sigma)I_{S}(t)-\gamma_{S}I_{S}(t),\\ {}^{C}D_{0+}^{\alpha_{R}}R(t)=\gamma_{S}I_{S}(t)+\gamma_{A}I_{A}(t)-\mu R(t),\\ {}^{C}D_{0+}^{\alpha_{W}}W(t)=\eta_{A}I_{A}(t)+\eta_{S}I_{S}(t)-\mu_{P}W(t).\end{cases} \tag{1}\] The model consists of six mutually exclusive compartments, described as follows: Susceptible individuals denoted by \(S\), Exposed individuals by \(E\), Asymptomatic infectious individuals by \(I_{A}\), Symptomatic infectious individuals by \(I_{S}\), and Recovered individuals by \(R\), with a separate compartment for pathogens in the environment denoted by \(W\). The description of the model parameters is presented in Table 1. The fractional-order dynamics of the involved population, denoted by \(N\), can be obtained by summing the first five equations of model (1), given by \[{}^{C}D_{0+}^{\alpha_{N}}N(t)=\Lambda N(t)-\sigma\left(I_{A}(t)+I_{S}(t)\right)-\mu N(t). \tag{2}\]
Hence, we present a novel approach to modeling by incorporating a coupling equation (2) into (1), particularly its first equation involving \(\Lambda N(t)\). This coupling equation allows us to account for population density in the recruitment rate of susceptible individuals. This approach differs from that of [10], which did not incorporate a similar coupling equation.

The transmission of viruses between healthy and non-healthy individuals is determined by the forces of infection \(\dfrac{\beta_{1}SW}{1+\phi_{1}W},\ \dfrac{\beta_{2}S(I_{A}+I_{S})}{1+\phi_{2}(I_{A}+I_{S})}\), which are indicative of saturation in the incidence functions [17; 18; 19; 20]. According to [10], the selection of these forces of infection is based on their ability to account for contact minimization with infectious individuals through social distancing. Furthermore, [21] incorporates this saturated functional response into the COVID-19 system dynamics to capture behavioral changes and the crowding effect of the population. Given these considerations, it appears that the authors of these works have focused on incorporating relevant biological and behavioral factors into their models.

## 3 Basic reproduction number and stability analysis

The basic reproduction number is computed using the method of the next-generation matrix [22] associated with our model (1). In doing so, the rate of appearance of new infections in the infectious compartments (that is, \(\mathcal{X}=(E,I_{A},I_{S},W)\)) is given by \[\mathcal{F}(\mathcal{X})=\begin{pmatrix}\frac{\beta_{1}S(t)W(t)}{1+\phi_{1}W(t)}+\frac{\beta_{2}S(t)(I_{A}(t)+I_{S}(t))}{1+\phi_{2}(I_{A}(t)+I_{S}(t))}\\ 0\\ 0\\ 0\end{pmatrix},\] and the rate of the other transitions, involving the shedding compartment, is obtained as follows \[\mathcal{V}(\mathcal{X})=\begin{pmatrix}(\psi+\mu+\omega)E\\ (\mu+\sigma+\gamma_{A})I_{A}-(1-\delta)\omega E\\ (\mu+\sigma+\gamma_{S})I_{S}-\delta\omega E\\ \mu_{P}W-\eta_{A}I_{A}-\eta_{S}I_{S}\end{pmatrix}.\]

\begin{table} \begin{tabular}{|l|l|} \hline parameter & description \\ \hline \(\Lambda\) & birth rate in the population \\ \(\mu\) & natural human death rate \\ \(\mu_{P}\) & death rate of pathogen viruses \\ \(\phi_{1}\) & proportion of interaction with an infectious environment \\ \(\phi_{2}\) & proportion of interaction with an infectious individual \\ \(\beta_{1}\) & infection rate from \(S\) to \(E\) due to contact with W \\ \(\beta_{2}\) & infection rate from \(S\) to \(E\) due to contact with \(I_{A}\) and/or \(I_{S}\) \\ \(\delta\) & proportion of symptomatic infectious people \\ \(\psi\) & progression rate from \(E\) back to \(S\) due to robust immune system \\ \(\omega\) & progression rate from \(E\) to either \(I_{A}\) or \(I_{S}\) \\ \(\sigma\) & disease mortality rate \\ \(\gamma_{S}\) & rate of recovery of the symptomatic individuals \\ \(\gamma_{A}\) & rate of recovery of the asymptomatic individuals \\ \(\eta_{S}\) & rate of virus spread to environment by \(I_{S}\) \\ \(\eta_{A}\) & rate of virus spread to environment by \(I_{A}\) \\ \hline \end{tabular} \end{table} Table 1: Description of the parameters.
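For readers who want to experiment with system (1) directly, the incommensurate Caputo system can be integrated with an explicit product-rectangle ("fractional forward Euler") rule. The sketch below is our own minimal implementation, not the scheme used by the authors, and the parameter dictionary keys are our shorthand for the quantities in Table 1.

```python
import numpy as np
from math import gamma

def fde_solve(f, y0, alpha, T, h):
    """Explicit product-rectangle rule for D^{alpha_k} y_k = f_k(t, y),
    one Caputo order per component; O(N^2) work, fine for a demo."""
    alpha = np.asarray(alpha, dtype=float)
    N = int(round(T / h))
    y = np.zeros((N + 1, len(y0)))
    y[0] = y0
    F = np.zeros_like(y)
    F[0] = f(0.0, y[0])
    g = np.array([gamma(a + 1.0) for a in alpha])
    for n in range(1, N + 1):
        j = np.arange(n)
        # quadrature weights ((n-j)^a - (n-j-1)^a), one column per component
        w = (n - j)[:, None] ** alpha - (n - j - 1)[:, None] ** alpha
        y[n] = y[0] + (h ** alpha / g) * (w * F[:n]).sum(axis=0)
        F[n] = f(n * h, y[n])
    return np.linspace(0.0, N * h, N + 1), y

def covid_rhs(t, u, p):
    """Right-hand side of (1) coupled with (2); u = (S, E, IA, IS, R, W, N)."""
    S, E, IA, IS, R, W, Npop = u
    fw = p["b1"] * S * W / (1 + p["p1"] * W)
    fi = p["b2"] * S * (IA + IS) / (1 + p["p2"] * (IA + IS))
    return np.array([
        p["Lam"] * Npop - fw - fi + p["psi"] * E - p["mu"] * S,
        fw + fi - (p["psi"] + p["mu"] + p["om"]) * E,
        (1 - p["d"]) * p["om"] * E - (p["mu"] + p["sig"] + p["gA"]) * IA,
        p["d"] * p["om"] * E - (p["mu"] + p["sig"] + p["gS"]) * IS,
        p["gS"] * IS + p["gA"] * IA - p["mu"] * R,
        p["eA"] * IA + p["eS"] * IS - p["muP"] * W,
        p["Lam"] * Npop - p["sig"] * (IA + IS) - p["mu"] * Npop,
    ])
```

With a parameter dictionary `params`, `fde_solve(lambda t, u: covid_rhs(t, u, params), u0, alphas, T, h)` integrates all seven states, where `alphas` holds \((\alpha_{S},\alpha_{E},\alpha_{I_{A}},\alpha_{I_{S}},\alpha_{R},\alpha_{W},\alpha_{N})\).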
Next, the Jacobian matrices associated with \(\mathcal{F}\) and \(\mathcal{V}\) at the disease-free equilibrium point \(dfe=(\frac{\Lambda N}{\mu},0,0,0,0,0)\) are obtained respectively as follows \[J_{\mathbb{F}}=\left[\begin{array}{cccc}0&\beta_{2}\frac{\Lambda N}{\mu}&\beta_{2}\frac{\Lambda N}{\mu}&\beta_{1}\frac{\Lambda N}{\mu}\\ 0&0&0&0\\ 0&0&0&0\\ 0&0&0&0\end{array}\right]\quad\text{ and }\quad J_{\mathbb{V}}=\left[\begin{array}{cccc}\psi+\mu+\omega&0&0&0\\ -(1-\delta)\omega&\mu+\sigma+\gamma_{A}&0&0\\ -\delta\omega&0&\mu+\sigma+\gamma_{S}&0\\ 0&-\eta_{A}&-\eta_{S}&\mu_{P}\end{array}\right].\] Finally, the basic reproduction number \(R_{0}\) is obtained as the spectral radius of the next-generation matrix \(J_{\mathbb{F}}J_{\mathbb{V}}^{-1}\), that is precisely \[R_{0}=\frac{\Lambda N\omega}{\mu}\left[\frac{\beta_{1}\delta\eta_{S}}{\mu_{P}\varpi_{e}\varpi_{is}}+\frac{\beta_{2}\delta}{\varpi_{e}\varpi_{is}}+\frac{\beta_{1}(1-\delta)\eta_{A}}{\mu_{P}\varpi_{e}\varpi_{ia}}+\frac{\beta_{2}(1-\delta)}{\varpi_{e}\varpi_{ia}}\right], \tag{3}\] where \[\varpi_{e}=\psi+\mu+\omega,\quad\varpi_{ia}=\mu+\sigma+\gamma_{A},\quad\varpi_{is}=\mu+\sigma+\gamma_{S}.\]

### Global stability of the disease-free equilibrium point

Here, we investigate the global stability of the steady state when the disease is dying out in the population.

**Theorem 1**.: _The disease-free equilibrium point (\(dfe\)) of system (1) is globally asymptotically stable whenever \(R_{0}\leq 1\)._

Proof.: Let us consider the following Lyapunov function: \[V(t)=b_{0}E(t)+b_{1}I_{A}(t)+b_{2}I_{S}(t)+b_{3}W(t),\] where \(b_{0}\), \(b_{1}\), \(b_{2}\), and \(b_{3}\) are positive constants to be determined. By linearity of the Caputo derivative, we have that \[{}^{C}D_{0+}^{\alpha}V(t)=b_{0}^{C}D_{0+}^{\alpha}E(t)+b_{1}^{C}D_{0+}^{\alpha}I_{A}(t)+b_{2}^{C}D_{0+}^{\alpha}I_{S}(t)+b_{3}^{C}D_{0+}^{\alpha}W(t).\] Next, by substituting the expressions of \({}^{C}D_{0+}^{\alpha}E(t)\), \({}^{C}D_{0+}^{\alpha}I_{A}(t)\), \({}^{C}D_{0+}^{\alpha}I_{S}(t)\) and \({}^{C}D_{0+}^{\alpha}W(t)\) from model (1), we obtain \[{}^{C}D_{0+}^{\alpha}V(t) =b_{0}\left(\frac{\beta_{1}S(t)W(t)}{1+\phi_{1}W(t)}+\frac{\beta_{2}S(t)\left(I_{A}(t)+I_{S}(t)\right)}{1+\phi_{2}\left(I_{A}(t)+I_{S}(t)\right)}-\psi E(t)-\mu E(t)-\omega E(t)\right)\] \[+b_{1}\left((1-\delta)\omega E(t)-(\mu+\sigma)I_{A}(t)-\gamma_{A}I_{A}(t)\right)+b_{2}\left(\delta\omega E(t)-(\mu+\sigma)I_{S}(t)-\gamma_{S}I_{S}(t)\right)\] \[+b_{3}\left(\eta_{A}I_{A}(t)+\eta_{S}I_{S}(t)-\mu_{P}W(t)\right).\] Since the inequality \(S\leq\frac{\Lambda N}{\mu}\) holds, it follows that \[{}^{C}D_{0+}^{\alpha}V(t) \leq b_{0}\left(\frac{\Lambda N(t)}{\mu}\frac{\beta_{1}W(t)}{1+\phi_{1}W(t)}+\frac{\Lambda N(t)}{\mu}\frac{\beta_{2}\left(I_{A}(t)+I_{S}(t)\right)}{1+\phi_{2}\left(I_{A}(t)+I_{S}(t)\right)}-\psi E(t)-\mu E(t)-\omega E(t)\right)\] \[+b_{1}\left((1-\delta)\omega E(t)-(\mu+\sigma)I_{A}(t)-\gamma_{A}I_{A}(t)\right)+b_{2}\left(\delta\omega E(t)-(\mu+\sigma)I_{S}(t)-\gamma_{S}I_{S}(t)\right)\] \[+b_{3}\left(\eta_{A}I_{A}(t)+\eta_{S}I_{S}(t)-\mu_{P}W(t)\right).\] Note that, because the parameters and state variables are positive, we also have \[\frac{1}{1+\phi_{1}W}\leq 1,\quad\text{ and }\quad\frac{1}{1+\phi_{2}(I_{A}+I_{S})}\leq 1.\] It follows that \[{}^{C}D_{0+}^{\alpha}V(t) \leq b_{0}\left(\frac{\Lambda N(t)}{\mu}\beta_{1}W(t)+\frac{\Lambda N(t)}{\mu}\beta_{2}\left(I_{A}(t)+I_{S}(t)\right)-\psi E(t)-\mu E(t)-\omega E(t)\right)\] \[+b_{1}\left((1-\delta)\omega
E(t)-(\mu+\sigma)I_{A}(t)-\gamma_{A}I_{A}(t)\right)+b_{2}\left(\delta\omega E(t)-(\mu+\sigma)I_{S}(t)-\gamma_{S}I_{S}(t)\right)\] \[+b_{3}\left(\eta_{A}I_{A}(t)+\eta_{S}I_{S}(t)-\mu_{P}W(t)\right).\] Rearranging and reducing leads to the expression below \[{}^{C}D_{0+}^{\alpha}V(t) \leq\left(b_{0}\frac{\Lambda N(t)}{\mu}\beta_{2}+b_{3}\eta_{A}-b_{1}\varpi_{ia}\right)I_{A}(t)+\left(b_{0}\frac{\Lambda N(t)}{\mu}\beta_{2}+b_{3}\eta_{S}-b_{2}\varpi_{is}\right)I_{S}(t) \tag{4}\] \[+\left(b_{0}\frac{\Lambda N(t)}{\mu}\beta_{1}-b_{3}\mu_{P}\right)W(t)+\left(b_{1}(1-\delta)\omega+b_{2}\delta\omega-b_{0}\varpi_{e}\right)E(t). \tag{5}\] Therefore, by choosing \[b_{0}=\mu_{P}\varpi_{ia}\varpi_{is}\mu;\quad b_{1}=\Lambda N(t)(\beta_{1}\eta_{A}+\beta_{2}\mu_{P})\varpi_{is};\quad b_{2}=\Lambda N(t)(\beta_{1}\eta_{S}+\beta_{2}\mu_{P})\varpi_{ia};\quad b_{3}=\Lambda N(t)\beta_{1}\varpi_{ia}\varpi_{is},\] it is easy to see that \(V\) is continuous and positive definite for all \(E(t)>0\), \(I_{A}(t)>0\), \(I_{S}(t)>0\) and \(W(t)>0\). As a consequence, we obtain that \[b_{0}\frac{\Lambda N(t)}{\mu}\beta_{2}+b_{3}\eta_{A}-b_{1}\varpi_{ia}=0,\quad b_{0}\frac{\Lambda N(t)}{\mu}\beta_{2}+b_{3}\eta_{S}-b_{2}\varpi_{is}=0,\quad b_{0}\frac{\Lambda N(t)}{\mu}\beta_{1}-b_{3}\mu_{P}=0,\] \[\text{and}\quad b_{1}(1-\delta)\omega+b_{2}\delta\omega-b_{0}\varpi_{e}=\mu_{P}\varpi_{ia}\varpi_{is}\varpi_{e}\mu(R_{0}-1).\] Hence, putting it all together in the inequality (4), we get \[{}^{C}D_{0+}^{\alpha}V(t)\leq\mu_{P}\varpi_{ia}\varpi_{is}\varpi_{e}\mu(R_{0}-1)E(t).\] Finally, \({}^{C}D_{0+}^{\alpha}V(t)\leqslant 0\) if \(R_{0}\leqslant 1\). In addition, it is not hard to verify that the largest invariant set of \(\left\{(S,E,I_{A},I_{S},R,W)\in\mathbb{R}^{6}:\ {}^{C}D_{0+}^{\alpha}V(t)=0\right\}\) is the singleton \(\{dfe\}\). Hence, by the LaSalle invariance principle, we conclude that the disease-free equilibrium \(dfe\) is globally asymptotically stable.

## 4 Numerical results

In this section, we demonstrate the effectiveness of our suggested model (1) in simulating the dynamics of COVID-19 transmission, accounting for environmental contamination by infected individuals and a variable involved population. The model is fine-tuned using actual data collected from South Africa, which was obtained from the Center for Systems Science and Engineering (CSSE) at Johns Hopkins University [23]. In March 2020, South Africa confirmed its first COVID-19 case, prompting the government to declare a National State of Disaster on March 15th, followed by a nationwide lockdown on March 26th. To better understand the spread of the disease in South Africa, our study focuses on simulating its propagation from the start of the lockdown measures until the end of the first peak of the epidemic, which occurred on September 22nd, 2020. In light of the United Nations' documentation, as reported by the Department of Economic and Social Affairs, Population Division, World Population Prospects 2022, we incorporate a birth rate of \(\Lambda=19.995/1000\) (19.995 births per 1000 individuals) and a natural human death rate of \(\mu=9.468/1000\), while also assuming the initial conditions \(I_{S}(0)=17,R(0)=0\), and \(D(0)=0\). We determine the remaining parameter values and initial conditions through the process of fitting model (1) to the daily new confirmed cases data. Next, we optimize the order derivatives to examine the efficacy of fractional orders in the fitting process.
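Before turning to the fitting, note that both formula (3) and the coefficient identities in the proof of Theorem 1 can be verified symbolically. The sympy sketch below is our own check, with the last assertion using the identity carrying the factor \(\varpi_{e}\) as stated above; since \(J_{\mathbb{F}}J_{\mathbb{V}}^{-1}\) has rank 1, its spectral radius is its \((0,0)\) entry.

```python
import sympy as sp

LN, mu, muP, be1, be2, d, om, eA, eS, we, wia, wis = sp.symbols(
    "LN mu mu_P beta1 beta2 delta omega eta_A eta_S w_e w_ia w_is",
    positive=True)

S0 = LN / mu
R0 = (LN * om / mu) * (be1 * d * eS / (muP * we * wis)
                       + be2 * d / (we * wis)
                       + be1 * (1 - d) * eA / (muP * we * wia)
                       + be2 * (1 - d) / (we * wia))

# Next-generation matrix check of formula (3).
F = sp.zeros(4, 4)
F[0, 1], F[0, 2], F[0, 3] = be2 * S0, be2 * S0, be1 * S0
V = sp.Matrix([[we, 0, 0, 0],
               [-(1 - d) * om, wia, 0, 0],
               [-d * om, 0, wis, 0],
               [0, -eA, -eS, muP]])
assert sp.simplify((F * V.inv())[0, 0] - R0) == 0

# Lyapunov coefficients from the proof of Theorem 1.
b0 = muP * wia * wis * mu
b1 = LN * (be1 * eA + be2 * muP) * wis
b2 = LN * (be1 * eS + be2 * muP) * wia
b3 = LN * be1 * wia * wis
assert sp.simplify(b0 * S0 * be2 + b3 * eA - b1 * wia) == 0
assert sp.simplify(b0 * S0 * be2 + b3 * eS - b2 * wis) == 0
assert sp.simplify(b0 * S0 * be1 - b3 * muP) == 0
assert sp.simplify(b1 * (1 - d) * om + b2 * d * om - b0 * we
                   - muP * wia * wis * we * mu * (R0 - 1)) == 0
```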
We utilize the root mean square deviation (RMSD) as the measure of model accuracy, where \(\text{RMSD}(x,\widehat{x})=\sqrt{\frac{1}{n}\sum_{t=1}^{n}(x_{t}-\widehat{x}_{t})^{2}}\). Here, \(n\) denotes the number of data points, \(x\) denotes the estimated values, and \(\widehat{x}\) denotes the actual values. It is noteworthy that when fitting a model to data, the consideration of biologically or clinically meaningful parameters plays a crucial role. For instance, previous research studies [24, 25] suggest that the parameters \(\gamma_{S}\) and \(\gamma_{A}\) should be confined to the range (0, 1), and the parameter \(\sigma\) should fall within the range (0.01, 0.06). Given this point, we have chosen to display a set of fitted results, rather than solely presenting the best fit, to demonstrate the overall efficiency of the model utilizing fractional calculus. In Figure 1(a), the Bayesian inference method is employed to provide initial estimates for both the parameter values and the initial conditions, with the results depicted by the light gray curves. Subsequently, 300 optimal fits are chosen from the results (as illustrated by the dark gray curves), and the black curve represents the best fit, with an RMSD of 978.7969. In Figure 1(b), we further optimize the order derivatives for the 300 selected best fits; their results are depicted by the gray curves, while the black curve shows the best fit, with an RMSD of 956.775. Additionally, the boxplot representation in Figure 1(c) portrays the statistical properties of the parameter values used in the selected candidates, which can potentially provide valuable insights for fitting parameters in similar compartmental models. Figure 1(d) depicts the density of errors for the 300 selected candidates, categorized into integer orders and fractional orders, represented by the left and right sides of the violin plot, respectively.

Figure 1: Demonstrating the accuracy of the analyzed model (1) in fitting the daily new confirmed cases obtained from CSSE [23]. (a) The data points are represented by the circles, and the evaluation of errors is carried out by employing the root mean square deviation (RMSD). The light gray curves indicate the results of the model with initial estimates of parameter values and initial conditions. The dark gray curves represent 300 optimal fits, and the black curve shows the best fit with an RMSD of 978.7969. (b) The gray curves represent the results of the model with the optimized order derivatives for the 300 selected best fits, and the black curve shows the best fit, with an RMSD of 956.775. (c) The distributions of selected parameter values are illustrated through a boxplot representation. (d) The distribution of error scores for the integer and fractional model (left and right, respectively).

The results reveal that the mean and minimum errors of the model with fractional orders are comparatively lower than those of the model with integer orders. Nonetheless, it should be noted that there are a few isolated instances where the fractional models fitted worse than the integer-order models. The tables presenting the optimal values of the parameters, initial conditions, and order derivatives, along with their corresponding statistical properties such as mean, standard deviation, and median, can be found in Tables 2 and 3.
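The implementation described later in this section is written in Julia; purely as an illustration, the error measure itself can be sketched in a few lines of Python (the helper name `rmsd` and the toy series below are ours, not part of the paper's code):

```
import numpy as np

def rmsd(estimated, observed):
    """Root mean square deviation between model output and data."""
    estimated = np.asarray(estimated, dtype=float)
    observed = np.asarray(observed, dtype=float)
    return np.sqrt(np.mean((estimated - observed) ** 2))

# Example: score two candidate fits against daily new confirmed cases.
cases = np.array([120.0, 180.0, 260.0, 400.0])   # toy data, not the CSSE series
fit_a = np.array([115.0, 190.0, 250.0, 410.0])
fit_b = np.array([100.0, 150.0, 300.0, 380.0])
print(rmsd(fit_a, cases) < rmsd(fit_b, cases))   # True: fit_a is the better candidate
```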
### Sensitivity analysis of the basic reproduction number

The purpose of this section is to examine the influence of different factors on the transmission of epidemic diseases. By assessing the sensitivity of \(R_{0}\) to various parameters, such as social distancing measures, policymakers can make informed decisions regarding public health interventions. Furthermore, this methodology allows for the identification of the parameters that have the most significant impact on disease propagation. To assess the effects of minor variations in a parameter \(\mathcal{P}\) on the basic reproduction number \(R_{0}\), we introduce the forward normalized sensitivity index of \(R_{0}\) for \(\mathcal{P}\), which can be mathematically formulated as: \[\mathcal{S}_{\mathcal{P}}^{R_{0}}=\frac{\partial R_{0}}{\partial\mathcal{P}}\frac{\mathcal{P}}{R_{0}}.\] If the sensitivity index \(\mathcal{S}_{\mathcal{P}}^{R_{0}}\) is positive, it indicates an increase in the value of the basic reproduction number \(R_{0}\) with respect to the parameter \(\mathcal{P}\). Conversely, if the value of \(R_{0}\) decreases in response to changes in the parameter \(\mathcal{P}\), the sensitivity index will be negative. We have conducted an analysis of the sensitivity of the basic reproduction number \(R_{0}\) with respect to the optimized parameters. The computed sensitivity indices for \(R_{0}\) are reported in Table 2.

\begin{table} \begin{tabular}{c|c c c|c c} \hline parameters & mean & std & median & optimized value & sensitivity \\ \hline \(\mu\) & - & - & - & - & 1.0000 \\ \(\Lambda\) & - & - & - & - & -0.7169 \\ \(\mu_{p}\) & 0.6566 & 0.2165 & 0.6747 & 0.6642 & -0.1361 \\ \(\phi_{1}\) & 3.49e-5 & 0.0002 & 6.37e-6 & 9.09e-7 & 0.0000 \\ \(\phi_{2}\) & 0.6268 & 0.2456 & 0.6603 & 0.0634 & 0.0000 \\ \(\beta_{1}\) & 0.0002 & 0.0007 & 4.72e-5 & 7.69e-6 & 0.1382 \\ \(\beta_{2}\) & 6.90e-5 & 5.96e-5 & 5.17e-5 & 1.68e-5 & 0.8618 \\ \(\delta\) & 0.8988 & 0.0193 & 0.8973 & 0.9338 & 0.5860 \\ \(\psi\) & 0.3749 & 0.2765 & 0.3183 & 0.0081 & -0.0081 \\ \(\omega\) & 0.7214 & 0.1725 & 0.7484 & 0.9677 & 0.0280 \\ \(\sigma\) & 0.1711 & 0.0477 & 0.1593 & 0.3194 & -0.8957 \\ \(\gamma_{S}\) & 0.0011 & 0.0015 & 0.0007 & 0.0002 & -0.0007 \\ \(\gamma_{A}\) & 0.7747 & 0.1725 & 0.8165 & 0.8667 & -0.0195 \\ \(\eta_{S}\) & 0.1344 & 0.1639 & 0.0760 & 0.2188 & 0.1276 \\ \(\eta_{A}\) & 0.3206 & 0.2733 & 0.2359 & 0.9048 & 0.0105 \\ \hline \end{tabular} \end{table} Table 2: Statistical properties of the selected values of the parameters and the sensitivity index of \(R_{0}\) across all parameters (\(\mathcal{S}_{\mathcal{P}}^{R_{0}}\)) for their optimized values.

\begin{table} \begin{tabular}{c c c c c} \hline initial conditions & mean & std & median & optimized value \\ \hline \(S(0)\) & 45395.9 & 14711.4 & 41078.4 & 90336.8 \\ \(E(0)\) & 7.42043 & 23.2615 & 0.00044 & 4.297e-6 \\ \(I_{A}(0)\) & 153.791 & 87.4703 & 156.002 & 231.888 \\ \(W(0)\) & 94.6353 & 79.8807 & 77.6268 & 220.056 \\ \hline order derivatives & mean & std & median & optimized value \\ \hline \(\alpha_{S}\) & 0.995353 & 0.004721 & 0.996313 & 1.000000 \\ \(\alpha_{E}\) & 0.998563 & 0.008932 & 1.000000 & 0.998880 \\ \(\alpha_{I_{A}}\) & 0.977973 & 0.060090 & 1.000000 & 1.000000 \\ \(\alpha_{I_{S}}\) & 0.999547 & 0.004181 & 1.000000 & 0.980835 \\ \(\alpha_{R}\) & 0.749677 & 0.016673 & 0.750000 & 0.750032 \\ \(\alpha_{W}\) & 0.973167 & 0.032041 & 0.987370 & 0.964870 \\ \(\alpha_{N}\) & 0.996496 & 0.004506 & 0.999530 & 1.000000 \\ \hline \end{tabular} \end{table} Table 3: Statistical properties of the selected values for initial conditions and order derivatives, and their optimized values.
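As an illustration of how such indices can be checked numerically, the following Python sketch evaluates \(R_{0}\) from eq. (3) and approximates \(\mathcal{S}_{\mathcal{P}}^{R_{0}}\) with a central finite difference; the parameter values follow the optimized column of Table 2, while the population size \(N\) is a stand-in value (the paper's own computations use the mean of \(N\)):

```
def r0(p):
    """Basic reproduction number from eq. (3); p maps parameter names to values."""
    we = p["psi"] + p["mu"] + p["omega"]                # varpi_e
    wia = p["mu"] + p["sigma"] + p["gamma_A"]           # varpi_ia
    wis = p["mu"] + p["sigma"] + p["gamma_S"]           # varpi_is
    pref = p["Lambda"] * p["N"] * p["omega"] / p["mu"]
    return pref * (p["beta1"] * p["delta"] * p["eta_S"] / (p["mu_P"] * we * wis)
                   + p["beta2"] * p["delta"] / (we * wis)
                   + p["beta1"] * (1 - p["delta"]) * p["eta_A"] / (p["mu_P"] * we * wia)
                   + p["beta2"] * (1 - p["delta"]) / (we * wia))

def sensitivity_index(p, name, h=1e-6):
    """S_P^{R0} = (dR0/dP) * (P / R0), via a central finite difference."""
    hi, lo = dict(p), dict(p)
    hi[name] += h * p[name]
    lo[name] -= h * p[name]
    dr0 = (r0(hi) - r0(lo)) / (2 * h * p[name])
    return dr0 * p[name] / r0(p)

params = {"Lambda": 19.995e-3, "mu": 9.468e-3, "N": 6.0e4,   # N is a stand-in value
          "mu_P": 0.6642, "beta1": 7.69e-6, "beta2": 1.68e-5, "delta": 0.9338,
          "psi": 0.0081, "omega": 0.9677, "sigma": 0.3194,
          "gamma_S": 0.0002, "gamma_A": 0.8667, "eta_S": 0.2188, "eta_A": 0.9048}
print(sensitivity_index(params, "mu_P"))   # negative, as reported in Table 2
```

Up to discretization error, the finite-difference value matches the closed-form definition, so a sketch like this is a convenient sanity check against the signs reported in Table 2.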
In addition, we illustrate the distribution of the sensitivity indices of the selected parameter values on \(R_{0}\) through boxplots, together with the density of \(R_{0}\) for these values (using the mean of \(N\)), in Figure 2. Our analysis revealed that only the parameter \(\delta\), which represents the proportion of symptomatic infectious individuals, exhibited mostly positive but sometimes negative sensitivity, while the other parameters maintained the sign of their sensitivity indices. Additionally, it is worth discussing the sensitivity of the parameters associated with pathogenic viruses, namely \(\mu_{P}\), \(\phi_{1}\), \(\beta_{1}\), \(\eta_{S}\), and \(\eta_{A}\). The sensitivity index for \(\phi_{1}\) is zero, as it does not appear in equation (3); as expected, the index for \(\mu_{P}\) is always negative, while those for \(\beta_{1}\), \(\eta_{S}\), and \(\eta_{A}\) are positive.

### Numerical methods and implementation

The numerical analyses in this study were carried out using the Julia programming language and the high-performance computing system PUHTI at the Finnish IT Center for Science (CSC). To solve fractional differential equations, the FdeSolver.jl package (v 1.0.7) was utilized, which implements predictor-corrector algorithms and product-integration rules [26]. Parameter estimation was performed using Bayesian inference and Hamiltonian Monte Carlo (HMC) with the Turing.jl package, and ODEs were solved using the DifferentialEquations.jl package. The order of derivatives was optimized using the function (L)BFGS, which employs the (Limited-memory) Broyden-Fletcher-Goldfarb-Shanno algorithm from the Optim.jl package in combination with FdeSolver.jl.

### Data and code availability

All computational results for this article, including all data and code used for running the simulations and generating the figures, are available on GitHub and accessible via [https://github.com/moeinkh88/Covid_Shedding.git](https://github.com/moeinkh88/Covid_Shedding.git).

Figure 2: (a) The boxplots of the sensitivity indices for \(R_{0}\) associated with the selected parameter values, and (b) the distribution of \(R_{0}\) for these values.

## 5 Conclusion

Our study presents a modified approach to modeling COVID-19 transmission dynamics by introducing a fractional model that incorporates pathogens in the environment as a shedding effect. This approach offers a more comprehensive representation of disease transmission dynamics by capturing their complex and heterogeneous nature. Our model also accounts for a variable population and birth rate and employs incommensurate fractional-order derivatives in the ODEs. Our analysis accurately computes the basic reproduction number and demonstrates the global stability of the disease-free equilibrium point. We use numerical simulations fitted to real data from South Africa to show the efficacy of our proposed model. Furthermore, we compare the performance of our model with fractional orders against those with integer orders. Our results indicate that fractional models offer greater flexibility. We also provide a range of reliable values for the initial conditions, model parameters, and order derivatives. Additionally, our analysis examines the sensitivity of the model parameters. Overall, our study offers valuable insights into COVID-19 transmission dynamics and has the potential to inform the development of effective measures to control the ongoing pandemic and prevent future outbreaks.
We hope that our findings will contribute to a better understanding of this complex disease and its transmission dynamics. ## Declaration of Competing Interest The authors declare no conflict of interest. ## Acknowledgment This study has been supported by the Academy of Finland (330887 to MK, LL) and the UTUGS graduate school of the University of Turku (to MK). FN is supported by the Bulgarian Ministry of Education and Science, Scientific Programme "Enhancing the Research Capacity in Mathematical Sciences (PIKOM)", Contract No. DO1-67/05.05.2022. The authors wish to acknowledge CSC-IT Center for Science, Finland, for computational resources.
2302.05134
Neural Capacitated Clustering
Recent work on deep clustering has found new promising methods also for constrained clustering problems. Their typically pairwise constraints can often be used to guide the partitioning of the data. Many problems, however, feature cluster-level constraints, e.g. the Capacitated Clustering Problem (CCP), where each point has a weight and the total weight sum of all points in each cluster is bounded by a prescribed capacity. In this paper we propose a new method for the CCP, Neural Capacitated Clustering, that learns a neural network to predict the assignment probabilities of points to cluster centers from a data set of optimal or near optimal past solutions of other problem instances. During inference, the resulting scores are then used in an iterative k-means like procedure to refine the assignment under capacity constraints. In our experiments on artificial data and two real world datasets our approach outperforms several state-of-the-art mathematical and heuristic solvers from the literature. Moreover, we apply our method in the context of a cluster-first-route-second approach to the Capacitated Vehicle Routing Problem (CVRP) and show competitive results on the well-known Uchoa benchmark.
Jonas K. Falkner, Lars Schmidt-Thieme
2023-02-10T09:33:44Z
http://arxiv.org/abs/2302.05134v2
# Neural Capacitated Clustering

###### Abstract

Recent work on deep clustering has found new promising methods also for constrained clustering problems. Their typically pairwise constraints can often be used to guide the partitioning of the data. Many problems, however, feature cluster-level constraints, e.g. the Capacitated Clustering Problem (CCP), where each point has a weight and the total weight sum of all points in each cluster is bounded by a prescribed capacity. In this paper we propose a new method for the CCP, _Neural Capacitated Clustering_, that learns a neural network to predict the assignment probabilities of points to cluster centers from a data set of optimal or near optimal past solutions of other problem instances. During inference, the resulting scores are then used in an iterative k-means like procedure to refine the assignment under capacity constraints. In our experiments on artificial data and two real world datasets our approach outperforms several state-of-the-art mathematical and heuristic solvers from the literature. Moreover, we apply our method in the context of a cluster-first-route-second approach to the Capacitated Vehicle Routing Problem (CVRP) and show competitive results on the well-known Uchoa benchmark.

## 1 Introduction

In recent years much progress has been achieved in applying deep learning methods to solve classical clustering problems. These problems can arise in different areas like compression, classification and discrete optimization. Motivated by the success of neural networks for supervised learning tasks, they are now also applied to unsupervised learning tasks like clustering. Especially for high-dimensional data and very large datasets, neural methods have shown superior results compared to classic methods like k-means [10] and Gaussian Mixture Models [11]. Due to its ability to leverage prior knowledge and information to guide the partitioning of the data, constrained clustering in particular has recently gained increasing traction. It is often used to incorporate existing domain knowledge in the form of pairwise constraints expressed in terms of _must-link_ and _cannot-link_ relations [26]. However, another type of constraint has been largely ignored so far: cluster-level constraints. This type of constraint can, for example, restrict each assignment group in terms of the total sum of the weights associated with its members. The simplest case of such a constraint is a maximum cluster size, where each point exhibits a weight of one. In the more general case, weights and cluster capacities are real valued and can model a wide range of practical applications such as edge server placement [1] or customer segmentation [1]. This formulation gives rise to well-known problems from combinatorial optimization, the _capacitated \(p\)-median problem_ (CPMP) [12], where each center has to be an existing point of the data, and the _capacitated centered clustering problem_ (CCCP) [27], where cluster centers correspond to the geometric center of their members. The general objective is to select a number \(K\) of cluster centers and find an assignment of points such that the total distance between the points and their corresponding centers is minimized while respecting the cluster capacity. Both problems are known to be _NP_-hard and have been extensively studied [27].

**Contributions**

* We propose the first approach to solve general Capacitated Clustering Problems based on deep learning.
* Our problem formulation includes well-known problem variants like the CPMP and CCCP as well as simpler constraints on the cluster size.
* The presented approach achieves competitive performance on several artificial and real world datasets, compared to methods based on mathematical solvers, while achieving run time improvements of up to one order of magnitude.
* We present a cluster-first-route-second extension of our method as an effective construction heuristic for the CVRP.

## 2 Background

A typical clustering task is concerned with the grouping of elements in the given data and is normally done in an unsupervised fashion. This grouping can be achieved in different ways, where we usually distinguish between partitioning and hierarchical approaches [11]. In this work we are mainly concerned with partitioning methods, i.e. methods that partition the data into different disjoint sub-sets without any hierarchical structure. Although clustering methods can be applied to many varying data modalities like user profiles or documents, in this work we consider the specific case of spatial clustering [11], which normally assumes points to be located in a metric space of dimension \(D\), a setting often encountered in practical applications like the facility location problem [10].

### Capacitated Clustering Problems (CCPs)

Let there be a set of \(n\) points \(N=\{1,2,\ldots,n\}\) with corresponding feature vectors \(x_{i}\in\mathbb{R}^{D}\) of coordinates and a respective weight \(q_{i}\in\mathbb{R}\) associated with each point \(i\in N\). Further, we assume that we can compute a distance measure \(d(x_{i},x_{j})\) for all pairs of points \(i,j\in N\). Then we are concerned with finding a set of \(K\) capacitated disjoint clusters \(c_{k}\in C,\ c_{k}\subset N,\ c_{k}\cap c_{l}=\emptyset\;\forall k\neq l\in\{1,\ldots,K\}\) with capacities \(Q_{k}>0\). The assignment of points to these clusters is given by the set of binary decision variables \(y_{ik}\), which are 1 if point \(i\) is a member of cluster \(k\) and 0 otherwise.

**Capacitated \(p\)-Median Problem** For the CPMP the set of possible cluster _medoids_ is given by the set of all data points \(N\) and the objective is to minimize the distance (or dissimilarity) \(d(x_{i},x_{m_{k}})\) between all \(i\) and their cluster _medoid_ \(m_{k}\): \[\min\sum_{i\in N}\sum_{k\in K}d(x_{i},x_{m_{k}})y_{ik} \tag{1}\] s.t. \[\sum_{k\in K}y_{ik}=1,\quad\forall i\in N, \tag{2}\] \[\sum_{i\in N}q_{i}y_{ik}\leq Q_{k},\quad\forall k\in K,\] (3) \[m_{k}=\operatorname*{argmin}_{m\in c_{k}}\sum_{i\in c_{k}}d(x_{i},x_{m}),\] (4) \[y_{ik}\in\{0,1\},\quad\forall i\in N,\quad\forall k\in K, \tag{5}\] where each point is assigned to exactly one cluster (2), all clusters respect the capacity constraint (3), medoids are selected minimizing the dissimilarity within cluster \(c_{k}\) (4), and \(y\) is a binary decision variable (5).

**Capacitated Centered Clustering Problem** In the CCCP formulation, instead of medoids selected among the data points we consider _centroids_ \(\mu_{k}\) corresponding to the geometric center of the points assigned to each cluster \(c_{k}\), replacing (4) with \[\mu_{k}=\operatorname*{argmin}_{\mu\in\mathbb{R}^{D}}\sum_{i\in c_{k}}d(x_{i},\mu), \tag{6}\] which, in the case of the Euclidean space for spatial clustering considered in this paper, has a closed form (with \(|c_{k}|\) as the cardinality of cluster \(c_{k}\)) given by: \[\mu_{k}=\frac{1}{|c_{k}|}\sum_{i\in N}x_{i}y_{ik}.
\tag{7}\] This leads to the new minimization objective \[\min\sum_{i\in N}\sum_{k\in K}d(x_{i},\mu_{k})y_{ik}. \tag{8}\]

**Constrained Clustering** While constrained clustering can be treated as a meta set of clustering problems including the CCP, in practice most approaches under its umbrella are mainly concerned with pairwise (or triplet) constraints between cluster points [13] or cardinality constraints (e.g. each table at a wedding reception should have the same number of males and females). Sometimes the constraint on cluster size is taken into account as well, while the explicit consideration of cluster-level capacity constraints is rarely encountered, and most existing constrained clustering approaches cannot directly be applied to solve CCPs.

### Supervised Learning for Clustering

Since in practice one often has to solve several instances of the same problem type drawn from the same ground truth distribution, we can utilize supervised learning to learn from existing data how to solve new, unseen instances. Moreover, most heuristics require expert and domain knowledge and often cannot directly be applied to instances from different problem types, while learned approaches can achieve better generalization and do not require additional domain knowledge.

## 3 Related Work

**Clustering Algorithms** Traditional partitioning methods to solve clustering problems, like the well-known k-means algorithm [12], have been researched for more than half a century. Meanwhile, there exists a plethora of different methods including Gaussian Mixture Models [12], density based models like DBSCAN [15] and graph theoretic approaches [14]. Many of these algorithms have also been extended to solve other problem variants like the CCP. In particular, Mulvey and Beck [1] introduce a k-medoids algorithm utilizing a regret heuristic for the assignment step combined with additional re-locations during an integrated local search procedure, while Geetha et al. [2] propose an adapted version of k-means which instead uses a priority heuristic to assign points to capacitated clusters.

**Meta-Heuristics** Apart from direct clustering approaches there are also methods from the operations research community which tackle CCPs or similar formulations like the facility location problem. Different algorithms were proposed modeling and solving the CCP as a General Assignment Problem (GAP) [10], via simulated annealing and tabu search [12], with genetic algorithms [13], or using a scatter search heuristic [12].

**Math-Heuristics** In contrast to meta-heuristics, _math-heuristics_ combine heuristic methods with powerful mathematical programming solvers like Gurobi [13], which are able to solve small scale instances to optimality and have shown superior performance to traditional meta-heuristics in recent studies. Stefanello et al. [2015] combine the mathematical solution of the CPMP with a heuristic post-optimization routine in case no optimality was achieved. The math-heuristic proposed in [14] comprises two phases: First, a global optimization phase is executed. This phase alternates between an assignment step, which solves a special case of the GAP as a binary linear program (BLP) for fixed medoids, and a median update step, selecting new medoids \(m_{k}\) minimizing the total distance to all cluster members under the current assignment. This is followed by a local optimization phase relocating points by solving a second BLP for the subset of clusters with the largest unused capacity.
Finally, the PACK algorithm introduced in [1] employs a block coordinate descent similar to the method of [14], where the assignment step is solved with Gurobi and the step updating the centers is computed following eq. 7 according to the current assignment.

**Deep Clustering** Since the dawn of deep learning, an increasing number of approaches in related fields employ deep neural networks. Most approaches in the clustering area are mainly concerned with learning better representations for downstream clustering algorithms, e.g. by employing auto-encoders for different data modalities [12, 13, 14], often trained with enhanced objective functions which, apart from the representation loss, also include a component approximating the clustering objective and additional regularization to prevent the embedding space from collapsing. A comprehensive survey on the latest methods is given in [15]. More recently, deep approaches for constrained clustering have also been proposed: Genevay et al. [2019] reformulate the clustering problem in terms of optimal transport to enforce constraints on the size of the clusters. Zhang et al. [2021] present a framework describing different loss components to include pairwise, triplet, cardinality and instance level constraints into auto-encoder based deep embedded clustering approaches. Finally, Manduchi et al. [2021] propose a new deep conditional Gaussian Mixture Model (GMM) which can include pairwise and instance level constraints. Usually, the described deep approaches are evaluated on very large, high dimensional datasets like MNIST [1] or Reuters [13], on which classical algorithms are not competitive. This is in strong contrast to spatial clustering with additional capacity constraints, for which we propose the first deep learning based method.

## 4 Proposed Method

### Capacitated k-means

The capacitated k-means method proposed by Geetha et al. [2009] changes the assignment step in Lloyd's algorithm [12], which is usually used to compute k-means clustering. To adapt the procedure to the CCP, the authors first select the \(K\) points with the highest weights \(q\) as initial centers, instead of selecting them randomly. Moreover, they introduce priorities \(\omega_{ik}\) computed according to \[\omega_{ik}=\frac{q_{i}}{d(x_{i},\mu_{k})}, \tag{9}\] which divides the weight \(q_{i}\) of each point \(i\) by its distance to the center of cluster \(k\). The list of priorities is then sorted and nodes are sequentially assigned to the centers according to their priority. The idea is to first assign points with a relatively larger weight to the centers and then points with smaller weight, which can be more easily assigned to other clusters. Then the centroids are recomputed via the arithmetic mean of the group members (eq. 7). The corresponding pseudo code is given in Algorithm 1.

```
input : K, n, coordinates x, weights q, cluster capacity Q, convergence condition δ
output: binary assignment matrix Y
M ← topk_weights(x, q, K)              // initialize centers at points with largest weights
while not δ(x, M, Y) do
    Y ← all_zeros(n, K)                // reset assignment
    Q ← repeat(Q, K)                   // reset capacities
    foreach i ∈ N do
        compute priorities ω_ik for all clusters (eq. 9)
    sort priorities, insert into queue S
    while S not empty do
        get next (i, k) from S
        if i unassigned and Q_k ≥ q_i then
            Y_ik ← 1                   // cluster assignment
            Q_k ← Q_k − q_i            // update capacity
    foreach k ∈ {1, …, K} do
        update centroid μ_k via eq. 7
return Y
```
**Algorithm 1** Capacitated k-means
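As a rough illustration of the assignment and update steps (a NumPy sketch under our own assumptions, not the authors' implementation; helper names are ours and empty clusters are handled naively):

```
import numpy as np

def assign_with_priorities(x, q, centers, capacity):
    """One assignment pass of Algorithm 1: all (point, center) pairs are ranked by
    the priority of eq. (9) and greedily assigned while the capacity allows it."""
    n, K = len(x), len(centers)
    dist = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=-1)
    prio = q[:, None] / np.maximum(dist, 1e-12)       # eq. (9)
    Y = np.zeros((n, K), dtype=bool)
    remaining = np.full(K, float(capacity))
    for flat in np.argsort(prio, axis=None)[::-1]:    # highest priority first
        i, k = np.unravel_index(flat, prio.shape)
        if not Y[i].any() and remaining[k] >= q[i]:
            Y[i, k] = True
            remaining[k] -= q[i]
    return Y

def update_centroids(x, Y):
    """Centroid update of eq. (7); empty clusters simply keep a zero centroid here."""
    counts = np.maximum(Y.sum(axis=0), 1)[:, None]
    return (Y.T.astype(float) @ x) / counts
```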
While this heuristic works, it can easily lead to sub-optimal allocations and to situations in which no feasible solution can be found, e.g. in cases where many nodes with high weight are located very far from the cluster centers. To solve these problems we propose several modifications to the algorithm.

### Neural Scoring Functions

The first proposed adaption is to learn a neural scoring function \(f_{\theta}\) with parameters \(\theta\), which predicts the probability of each node \(i\) belonging to cluster \(k\): \[\hat{\omega}_{ik}=f_{\theta}(\mathcal{G},\mu_{k}) \tag{10}\] For that purpose we first create a graph representation \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) of the points by connecting each point with its \(\mathcal{K}\) nearest neighbors, producing edges \(e_{ij}\in\mathcal{E}\) with edge weights \(d(x_{i},x_{j})\). Nodes \(v_{i}\in\mathcal{V}\) are created by concatenating the respective coordinates and weights \([x_{i};q_{i}]\). This graph allows us to define a structure on which the relative spatial information of the different points can be efficiently propagated.
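For illustration, a brute-force construction of this nearest-neighbor graph might look as follows (a hedged NumPy sketch; the supplementary uses \(\mathcal{K}=25\)):

```
import numpy as np

def knn_graph(x, q, k=25):
    """K-nearest-neighbor graph: node features [x_i; q_i], directed edges
    j -> i from each node's k closest points, Euclidean edge weights."""
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                 # exclude self-loops
    nbrs = np.argsort(d, axis=1)[:, :k]         # k closest points per node
    src = nbrs.reshape(-1)
    dst = np.repeat(np.arange(len(x)), k)
    nodes = np.concatenate([x, q[:, None]], axis=1)
    return nodes, np.stack([src, dst]), d[src, dst]
```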
We encode \(\mathcal{G}\) with the Graph Neural Network (GNN) introduced in [17], which is able to directly work with edge weights by employing the graph operator defined as \[h_{i}^{(l)}=\sigma\big(\operatorname{MLP}_{1}^{(l)}(h_{i}^{(l-1)})+\operatorname{MLP}_{2}^{(l)}(\sum_{j\in\mathcal{H}(i)}e_{ji}\cdot h_{j}^{(l-1)})\big) \tag{11}\] where \(h_{i}^{(l-1)}\in\mathbb{R}^{1\times d_{\text{emb}}}\) represents the embedding of node \(i\) at the previous layer \(l-1\), \(\mathcal{H}(i)\) is the 1-hop graph neighborhood of node \(i\), \(e_{ji}\) is the directed edge connecting nodes \(j\) and \(i\), \(\operatorname{MLP}_{1}\) and \(\operatorname{MLP}_{2}\) are Multi-Layer Perceptrons \(\operatorname{MLP}:\mathbb{R}^{d_{\text{emb}}}\rightarrow\mathbb{R}^{d_{\text{emb}}}\) and \(\sigma()\) is a suitable activation function. Furthermore, we add residual connections and regularization to each layer. In our case we choose _GELU_ [1] and _layer normalization_ [1], which outperformed _ReLU_ and _BatchNorm_ in preliminary experiments. The input layer projects the node features \(v_{i}=[x_{i};q_{i}]\in\mathbb{R}^{D+1}\) to the embedding dimension \(d_{\text{emb}}\) using a feed forward layer, which is then followed by \(L\) GNN layers of the form given in eq. 11. In order to create embeddings \(h_{\mu_{k}}\) for the centers \(\mu_{k}\in M\) we find the node \(j\) closest to \(\mu_{k}\) (corresponding to the cluster medoid \(m_{k}\)) and select its embedding \(h_{j}^{(L)}\) as \(h_{k}\). This embedding is concatenated with a globally pooled graph embedding \(h_{\mathcal{G}}\in\mathbb{R}^{d_{\text{emb}}}\): \[h_{\mathcal{G}}=\operatorname{MLP}_{\mathcal{G}}\Big(\big[\operatorname{MAX}(h^{(L)});\operatorname{MEAN}(h^{(L)})\big]\Big) \tag{12}\] with \(\operatorname{MLP}_{\mathcal{G}}:\mathbb{R}^{2d_{\text{emb}}}\rightarrow\mathbb{R}^{d_{\text{emb}}}\). Then, the resulting vectors for all centers are fed through a self attention layer (SA) [21] followed by another \(\operatorname{MLP}_{\mu}:\mathbb{R}^{2d_{\text{emb}}}\rightarrow\mathbb{R}^{d_{\text{emb}}}\): \[h_{\mu_{k}}=\operatorname{MLP}_{\mu}\big(\operatorname{SA}\big([h_{\mathcal{G}};h_{k}]\big)\big). \tag{13}\] Finally, we do conditional decoding by concatenating every center embedding with each node and applying a final stack of (element-wise) MLPs. The architecture of our neural scoring function is shown in Figure 1.

**Training the model** We create the required training data by running the math-heuristic solver of Gnagi and Baumann [2021] on some generated datasets to create a good (although not necessarily optimal) partitioning. Then we do supervised training using binary cross entropy1 (BCE) with pairwise prediction of the assignment of nodes \(i\) to clusters \(k\).

Footnote 1: BCE: \(\mathcal{L}(\hat{y},y)=-\big(y\cdot\log\hat{y}+(1-y)\cdot\log(1-\hat{y})\big)\)

### Neural Capacitated Clustering

To fully leverage our score estimator we propose several adaptions and improvements to the original capacitated k-means algorithm (section 4.1 and Algorithm 1):

#### Order of assignment

Instead of sorting all center-node pairs by their priority and then sequentially assigning them according to that list, we fix an order given by a permutation \(\pi\) for the centers and cycle through each of them, assigning one node at a time.
Since the output of \(f_{\theta}\) is the log probability of point \(i\) belonging to cluster \(k\), and its magnitude does not directly inform the _order_ of assignments of different nodes \(i\) and \(j\) in the iterative cluster procedure, we found it helpful to scale the output of \(f_{\theta}\) by the heuristic weights introduced in eq. 9. Thus, we assign that node \(i\) to cluster \(k\) which has the highest scaled conditional priority and can still be accommodated considering the remaining capacity \(Q_{k}\). In case there remain any unassigned points \(j\) at the end of an iteration, which cannot be assigned to any cluster since \(q_{j}>Q_{k}\;\forall k\in K\), we assign them to a dummy cluster \(K+1\) located at the origin of the coordinate system. We observe in our experiments that usually already after a small number of iterations no nodes are assigned to the dummy cluster anymore, meaning a feasible allocation has been established. Moreover, since the neighborhood graph \(\mathcal{G}\) does not change between iterations, we can speed up the calculation of priorities by pre-computing and buffering the node embeddings \(h_{i}\) and the graph embedding \(h_{\mathcal{G}}\) in the first iteration.

#### Re-prioritization of later assignments

This is motivated by the observation that later assignments are more difficult, since they have to cope with much more constrained center capacities. Thus, relying on the predefined cyclic order of the centers (which until this point has ensured that approximately the same number of nodes was assigned to each cluster) can lead to sub-optimal assignments in case some clusters have many nodes with very large or very small weights. To circumvent this problem we propose two different assignment strategies:

1. _Greedy:_ We treat the maximum priority over all clusters as an absolute priority \(\bar{\omega}_{i}\) for all remaining unassigned points \(i\): \[\bar{\omega}_{i}=\max_{k}\;\hat{\omega}_{ik}\] (14) Then the points are ordered by that priority and sequentially assigned to the closest cluster which can still accommodate them.
2. _Sampling:_ We normalize the absolute priorities \(\bar{\omega}_{i}\) of all remaining unassigned points \(i\) via the softmax function2 and treat them as probabilities according to which the points are sequentially sampled and assigned to the closest cluster which can still accommodate them. This procedure can be further improved by sampling several assignment rollouts and selecting the configuration which leads to the smallest resulting inertia.

Footnote 2: softmax: \(\sigma(x)_{i}=\frac{e^{x_{i}}}{\sum_{j=1}^{n}e^{x_{j}}}\)

We treat the fraction \(\alpha\) of nodes to which the re-prioritization is applied as a hyperparameter.

**Weight-adapted k-means++ initialization** As found in the study of Celebi et al. [2013], standard methods for the selection of seed points for centers during the initialization of k-means algorithms perform quite poorly. This is why the k-means++ [11] initialization routine was developed, which aims to maximally spread out the cluster centers over the data domain by sampling a first center uniformly from the data and then sequentially sampling the next center from the remaining data points with a probability equal to the normalized squared distance to the closest already existing center.
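A minimal sketch of this seeding routine is shown below; the `weighted` flag anticipates the weight-adapted variant (_ckm++_) introduced in the next paragraph, and the function name is ours:

```
import numpy as np

def seed_centers(x, q, K, weighted=False, seed=0):
    """k-means++ seeding; weighted=True gives the ckm++ variant, which scales the
    squared distance to the closest existing center by the candidate's weight."""
    rng = np.random.default_rng(seed)
    centers = [x[rng.integers(len(x))]]                # first center: uniform draw
    for _ in range(K - 1):
        d2 = np.min([np.sum((x - c) ** 2, axis=1) for c in centers], axis=0)
        score = d2 * q if weighted else d2
        p = score / score.sum()                        # normalized (weighted) squared distances
        centers.append(x[rng.choice(len(x), p=p)])
    return np.stack(centers)
```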
We propose a small modification to the k-means++ procedure (called _ckm++_) that includes the weight information in the sampling procedure by simply multiplying the squared distance to the closest existing cluster center by the weight of the data point to sample. The full adapted method, which we dub _Neural Capacitated Clustering_ (NCC), is described in Algorithm 2. In order to justify the adaptions and quantify their usefulness we perform an ablation study and report the results in the supplementary.

## 5 Experiments

We implement our model and the simple baselines in PyTorch [11] version 1.11 and use Gurobi version 9.1.2 for all methods that require it. All experiments are run on an i7-7700K CPU (4.20GHz). We use \(L=4\) GNN layers, an embedding dimension of \(d_{\text{emb}}=256\) and a dimension of \(d_{\text{h}}=256\) for all hidden layers. More details on training our neural scoring function are reported in the supplementary.

### Capacitated Clustering

For the experiments we use the CCCP formulation of the CCP (eq. 8), which considers centroids instead of medoids. While there are several possible ways to select a useful number \(K\) of clusters, like the _Elbow method_ [23], here we adopt a practical approach consisting of solving the problem with the _random_ assignment baseline method for a number of seeds and choosing the minimal resulting number of clusters as \(K\). For the \(n=200\) datasets we run all methods for 3 different seeds and report the mean cost with standard deviation and average run times. Since Gurobi requires a run time to be specified, because it otherwise can take arbitrarily long for the computation to complete, we set reasonable total run times of 3min for \(n=200\) and 15min for the original sizes. If Gurobi times out, we return the last feasible assignment if available. Otherwise, we report the result for that instance as infeasible and set its cost to the average cost of the _rnd-NN_ baseline. Training loss and validation accuracy of our neural scoring function on the different training sets are shown in Figure 2. We evaluate our method in a _greedy_ and a _sampling_ configuration, which we tune on a separate validation set: (g-50-1) stands for 1 greedy rollout for a fraction of \(\alpha=0.5\) and (s-70-64) for 64 samples for \(\alpha=0.7\). The first real world dataset is the _Shanghai Telecom_ (ST) dataset (Wang et al., 2019), which after pre-processing contains \(n=2372\) base stations to be assigned to \(K=40\) centers (see the supplementary for details). The second dataset (TIM) is assembled by matching the call record data of the _Telecom Italia Milan_ dataset (Barlacchi et al., 2015) with the Milan cell-tower grid retrieved from OpenCelliD (OCID, 2021). After pre-processing it contains \(n=2020\) points to be assigned to \(K=25\) centers. We normalize all weights according to \(K\) with a maximum capacity normalization factor of 1.1. The experiments on the real world data are performed in two different settings: The first setting simply executes all methods on the full dataset, while the second setting sub-samples the data in a random local grid to produce 100 test instances of size \(n=200\), with weights multiplied by a factor drawn uniformly from the interval [1.5, 4.0) for ST and [2.0, 5.0) for TIM to produce more variation in the required \(K\). The exact pre-processing steps and sub-sampling procedure are described in the supplementary.

### Baseline Methods

In our experiments we employ the following baseline models:

* _random:_ sequentially assigns random labels to points while respecting cluster capacities.
* _rnd-NN:_ selects \(K\) random points as cluster centers and sequentially assigns nearest neighbors to these clusters, i.e. points ordered by increasing distance from the center, until no capacity is left.
* _topk-NN:_ similar to random-NN, but instead selects the \(K\) points with the largest weight as cluster centers.
* _CapKMeans:_ the capacitated k-means algorithm of Geetha et al. (2009) with _ckm++_ initialization, which outperformed the original _topk_weights_ initialization.
* _PACK:_ the block coordinate descent math-heuristic introduced in (Lahderanta _et al._, 2021) (using Gurobi).
* _GB21:_ the two phase math-heuristic proposed by (Gnagi and Baumann, 2021), also using the Gurobi solver.

### Capacitated Vehicle Routing

For the CVRP we evaluate on the well-known Uchoa benchmark, which contains instances of sizes between 100 and 1000 points sampled according to varying distributions (GMM, uniform, etc.) and with different depot positions and weight distributions. We split the benchmark into three sets of problems with size **N1** (\(100\leq n<250\)), **N2** (\(250\leq n<500\)) and **N3** (\(500\leq n\)).

**Baselines** In our experiments we compare against several classical C1R2 approaches: First, the _sweep_ algorithm of Gillett and Miller [1974], which starts a beam at a random point and adds nodes in turn by moving the beam around the depot. We restart the beam at each possible point and run it clockwise and counter-clockwise. Next, _sweep+_, which, instead of routing nodes in the order in which they were passed by the beam, routes them by solving a TSP with Gurobi. The _petal_ algorithm introduced in [Foster and Ryan, 1976] creates "petal" sets by running the sweep algorithm from different starting nodes and then solves a set covering problem with Gurobi to join them. Finally, for comparison (although not C1R2), we include the powerful auto-regressive neural construction method _POMO_ of [Kwon _et al._, 2020], which is trained with deep reinforcement learning and uses additional instance augmentation techniques. It is evaluated either greedily (g) or with sampling (s) and a beam width of \(n\) (the size of the instance).

**Results** As shown in Table 3, our extended approach performs very competitively on the benchmark, beating all C1R2 approaches from the classical literature, being close to POMO on the small and medium sized instances (N1 and N2), and significantly outperforming it on the large instances (N3). Moreover, our method achieves the smallest fleet size of all methods, very close to the optimal fleet size \(K_{\text{optimal}}\).

## 6 Conclusion

We present the first deep learning based method for the CCP. In experiments on artificial and real world data we show the competitive performance and the fast and robust inference of our approach. Moreover, we demonstrate its usefulness as a constructive method for the CVRP, achieving promising results on the well-known Uchoa benchmark.

Figure 4: Clusters drawn with their convex hulls for the ST dataset. Black "\(\mathbf{x}\)" markers are the centers.

Figure 3: Clusters drawn with their convex hulls for the TIM dataset. Black "\(\mathbf{x}\)" markers are the centers.
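For reference, the beam construction of the sweep baseline can be sketched as follows (an assumption-laden simplification: Euclidean instances, a single shared vehicle capacity, and no TSP re-ordering as in _sweep+_):

```
import numpy as np

def sweep(xy, q, depot, capacity, start_angle=0.0):
    """Order customers by polar angle around the depot and open a new route
    whenever the accumulated weight would exceed the vehicle capacity."""
    ang = np.arctan2(xy[:, 1] - depot[1], xy[:, 0] - depot[0])
    order = np.argsort((ang - start_angle) % (2.0 * np.pi))
    routes, current, load = [], [], 0.0
    for i in order:
        if load + q[i] > capacity:      # close the current route
            routes.append(current)
            current, load = [], 0.0
        current.append(int(i))
        load += q[i]
    if current:
        routes.append(current)
    return routes
```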
\begin{table} \begin{tabular}{l r r|r r|r r} \hline \hline & \multicolumn{2}{c}{**N1**} & \multicolumn{2}{c}{**N2**} & \multicolumn{2}{c}{**N3**} \\ **Method** & **dist** & **t** (s) & **dist** & **t** (s) & **dist** & **t** (s) \\ \hline _sweep_ & 57.2 & 0.65 & 109.7 & 2.21 & 220.7 & 9.80 \\ & (28.1) & & (47.9) & & (96.5) & \\ _sweep+_ & 40.8 & 23.9 & 73.1 & 105.4 & 136.4 & 656.5 \\ & (28.1) & & (47.9) & & (96.5) & \\ _petal_ & 40.4 & 6.9 & 72.5 & 18.2 & 133.8 & 86.4 \\ & (28.1) & & (47.9) & & (96.5) & \\ _POMO_ (g) & 33.7 & 0.1 & 64.8 & 0.2 & 143.7 & 0.5 \\ & (24.7) & & (44.8) & & (87.2) & \\ _POMO_ (s) & **33.3** & 1.4 & **63.8** & 10.2 & 136.0 & 92.3 \\ & (24.7) & & (44.7) & & (87.1) & \\ NCC (g) & 35.9 & 5.1 & 67.2 & 10.2 & 122.5 & 29.2 \\ & (24.0) & & (43.6) & & (84.9) & \\ NCC (s) & 35.7 & 7.1 & 66.2 & 18.5 & **121.5** & 39.2 \\ & (24.0) & & (43.6) & & (84.9) & \\ \hline \(K_{\text{optimal}}\) & **(23.8)** & & **(43.5)** & & **(84.5)** & \\ \hline \hline \end{tabular} \end{table} Table 3: Results on Uchoa benchmark. We report the average total distance, time (sec.) and number of vehicles \(K\) (in brackets). \(K_{\text{optimal}}\) is the target number of vehicles in the benchmark.

## Appendix A Ablation

In this section we present the results of an ablation study to evaluate the usefulness of our adaptions to the original capacitated k-means procedure of Geetha et al. (2009). First, we compare the performance of the original algorithm _CapKMeans_ for different initialization routines. The _topk-w_ initialization is the one originally proposed by Geetha et al., which simply selects the points with the \(K\) largest weights as initial cluster centers. We compare it to the _k-means++_ (Arthur and Vassilvitskii, 2006) initialization routine, which aims to maximally spread out the cluster centers over the data domain by sampling a first center uniformly from the data and then sequentially sampling the next center from the remaining data points with a probability equal to the normalized squared distance to the closest already existing center. Moreover, we compare it to our own initialization method _ckm++_, which includes the weight information in the sampling procedure of _k-means++_ by simply multiplying the squared distance to the closest existing cluster center by the weight of the data point to sample. Since _topk-w_ is deterministic, the seed points for several random restarts are the same and therefore the procedure is only run once. For a fair comparison we also evaluate _k-means++_ and _ckm++_ for one run and report the results in Table 4. We sub-sample some \(n=200\) instances from the ST dataset and try to solve them in the different setups. The inertia is measured on a subset of 50 instances for which all five different initializations for _CapKMeans_ produced a feasible result. These results show that, for a single run without restart, the _topk-w_ method leads to better performance. However, it is directly clear that several restarts can vastly improve the performance and lead to a significantly reduced number of infeasible instances. The remainder of the ablation study is concerned with evaluating the impact of the other proposed adaptions which led to our full NCC algorithm.
The method _Cap-KM-alt_ uses the alternative assignment procedure described in the main paper, which cycles through the centers assigning only one point per turn, while _Cap-KM-nsf_ uses the original assignment method but employs the neural scoring function for the computation of priorities, based on the absolute priority described in section 4.3. Finally, we report the results for the full _greedy_ and _sampling_ procedures (both with \(\alpha=0.7\)) as proposed in the main paper. We evaluate all methods with one run for _topk-w_ and eight random restarts for _k-means++_ and _ckm++_. The results in Figure 5 show that simply changing the assignment procedure leads to considerably worse performance. In contrast, using the neural scoring function achieves a significant improvement. Moreover, it can be seen that as soon as priorities are computed with the neural scoring function \(f_{\theta}\) (for _Cap-KM-nsf_, _NCC greedy_ and _NCC sampling_), _ckm++_ initialization outperforms the other two methods, while _k-means++_ works slightly better in the other two cases. Finally, the advanced methods using the alternative assignment and the neural scoring function lead to the overall best results.

## Appendix B Data

Apart from the following description of our data-related sampling and processing steps, we will open source all our data and pre-processing pipelines as jupyter notebooks together with our model code at [https://github.com/jokofa/NCC](https://github.com/jokofa/NCC).

### Data Generation

The generated (artificial) data is sampled from a Gaussian Mixture Model where the number \(K\) of mixture components is randomly selected between 3 and 12 for each instance. The mean \(\mu\) and (diagonal) covariance matrix \(\Sigma\) for each component are sampled uniformly from [0, 1]. Weights are also sampled from a standard uniform distribution and re-scaled by a factor of 1.1, i.e. for a problem with \(K=3\) components the sum of all weights will be \(3/1.1=2.727\). This allows for some flexibility in assigning points to different clusters and is useful to check the ability of the different algorithms to find good assignments in order to minimize the inertia.

### Real World Data

The _Shanghai Telecom_ (ST) dataset (Wang _et al._, 2019) contains the locations and user sessions for base stations in the Shanghai region. In order to use it for our CCP task we aggregate the user session lengths per base station as its corresponding weight and remove base stations with only one user or less than 5min of usage in an interval of 15 days. Furthermore, we remove outliers far from the city center, i.e. outside of the area between latitude (30.5, 31.75) and longitude (120.75, 122), leading to a remaining number of \(n=2372\) stations. We set the required number of centers to \(K=40\) and normalize the weights with a capacity factor of 1.1.

**TIM** The second dataset we assemble by matching the internet access sessions in the call record data of the _Telecom Italia Milan_ (TIM) dataset (Barlacchi _et al._, 2015) with the Milan cell-tower grid retrieved from OpenCelliD (OCID, 2021). To reduce the number of possible cell towers we only select the ones with LTE support. After pre-processing it contains \(n=2020\) points to be assigned to \(K=25\) centers. We normalize all weights according to \(K\) with a maximum capacity normalization factor of 1.1.
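To make the generation procedure of Section B.1 concrete, a hedged Python sketch is given below; treating the sampled \(\Sigma\) entries as variances is our reading of the description:

```
import numpy as np

def sample_instance(n=200, seed=0):
    """Sample one artificial 2D CCP instance as described above: a GMM with
    K in [3, 12] components and weights rescaled to a capacity slack of 1.1."""
    rng = np.random.default_rng(seed)
    K = int(rng.integers(3, 13))                  # number of mixture components
    comp = rng.integers(0, K, size=n)             # component of each point
    mu = rng.uniform(0.0, 1.0, size=(K, 2))       # component means
    var = rng.uniform(0.0, 1.0, size=(K, 2))      # diagonal covariance entries
    x = rng.normal(mu[comp], np.sqrt(var[comp]))  # point coordinates
    q = rng.uniform(0.0, 1.0, size=n)
    q *= (K / 1.1) / q.sum()                      # e.g. K = 3 gives sum(q) = 2.727
    return x, q, K
```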
### Data Sub-Sampling

In order to replicate the original data distribution of the full real world datasets on a smaller scale, we employ a sub-sampling procedure. Our main concern is that if one would simply sample randomly from the whole grid, then the relative distance between points in the same cluster for small \(n\), e.g. in our case \(n_{\text{sub}}=200<<n_{\text{full}}=2020\), would be distorted. Thus, we first select a smaller part of the full point cloud and randomly sub-sample points within that region. To be exact, we first select a random rectangular region of size (0.5*L, 0.5*W), where L and W are the length and width of the full coordinate system in which the original point cloud is contained. Then, we uniformly sample \(n_{\text{sub}}\) points from the data points contained in that rectangle. To guarantee sufficient randomness of the sampling process, we require that at least some \(n>n_{\text{sub}}\) points are contained in the rectangular region, from which then \(n_{\text{sub}}\) points are selected. Moreover, for \(n_{\text{sub}}=200\) and \(n_{\text{full}}=2020\) with \(K=25\) clusters as in the TIM dataset, on average there would only be around 2.5 clusters required per sub-sample, which does not present a very interesting clustering task. Thus, we rescale the weights of each sub-sample by a factor drawn uniformly from the interval [1.5, 4.0) for ST and [2.0, 5.0) for TIM to produce more variation in the required number of clusters \(K\). An exemplary plot of the procedure is given in Figure 6.

\begin{table} \begin{tabular}{l r r r r} \hline \hline **Init** & _Restarts_ & _Inertia_ & _Avg Time_ (s) & _Inf. \%_ \\ \hline _topk-w_ & 1 & 0.825 & 1.88 & 54.0 \\ _k-means++_ & 1 & 0.841 & 1.91 & 40.8 \\ _ckm++_ & 1 & 0.860 & 1.88 & 44.0 \\ _k-means++_ & 8 & 0.624 & 2.99 & 2.0 \\ _ckm++_ & 8 & 0.640 & 3.02 & 3.6 \\ \hline \hline \end{tabular} \end{table} Table 4: Ablation on different initialization methods for _Cap-KMeans_. Inertia and average run time are measured on the subset of 50 feasible instances, while the percentage of infeasible solutions is w.r.t. the total number of 200 instances.

## Appendix C Training and Hyperparameters

Here we report the training regime and hyperparameters used to learn our neural scoring function. Our model is implemented in PyTorch [10] version 1.11. We use \(L=4\) GNN layers, an embedding dimension of \(d_{\text{emb}}=256\) and a dimension of \(d_{\text{h}}=256\) for all hidden layers of the neural networks. Furthermore, GELU activations [11] and layer norm (LN) [1] are employed. The model is trained for 200 epochs with a mini-batch size of 128 using the Adam optimizer [12] and a learning rate of \(\lambda=0.001\), which is nearly halved every 40 epochs by multiplying it with a factor of \(0.55\). Moreover, we clip the squared gradient norm above \(0.5\). The global seed used for training is 1234. As \(\mathcal{K}\) for the KNN graph generation we use \(\mathcal{K}=25\) for all datasets. The corresponding models are trained with training and validation sets with instances of size \(n=200\), either generated artificial data or independently sub-sampled datasets for TIM and ST. The size of the training sets is 4000 for the artificial data and 4900 for both real world datasets. The targets are created by solving the respective instances with the two phase math-heuristic proposed by [1] which uses the Gurobi solver [13].
To solve these datasets in a reasonable time, we set the maximum time for the solver to 32 seconds for the artificial data and 200 seconds for TIM and ST. Figure 5: Comparison of different initialization methods and algorithm setups in terms of inertia. Figure 6: Example plot of the used sub-sampling procedure on the TIM dataset. A rectangle (black) of size (0.5*L, 0.5*W) at a random position within the coordinate system is selected. Then \(n\) data points (red) are sampled uniformly from within the rectangle. Finally, the weights of the chosen points are re-scaled. For the NCC algorithm we do a small non-exhaustive grid search on the validation data to select the fraction of re-prioritized points \(\alpha\) in {0.1, 0.3, 0.5, 0.7, 0.9} and the number of samples for the sampling method in {32, 64, 128}. The chosen values are directly reported in the result tables in the main paper. ## Ethical Statement There are no ethical issues.
2301.12118
Physics-informed Neural Network: The Effect of Reparameterization in Solving Differential Equations
Differential equations are used to model and predict the behaviour of complex systems in a wide range of fields, and the ability to solve them is an important asset for understanding and predicting the behaviour of these systems. Complicated physics mostly involves difficult differential equations, which are hard to solve analytically. In recent years, physics-informed neural networks have been shown to perform very well in solving systems with various differential equations. The main ways to approximate differential equations are through penalty function and reparameterization. Most researchers use penalty functions rather than reparameterization due to the complexity of implementing reparameterization. In this study, we quantitatively compare physics-informed neural network models with and without reparameterization using the approximation error. The performance of reparameterization is demonstrated based on two benchmark mechanical engineering problems, a one-dimensional bar problem and a two-dimensional bending beam problem. Our results show that when dealing with complex differential equations, applying reparameterization results in a lower approximation error.
Siddharth Nand, Yuecheng Cai
2023-01-28T07:53:26Z
http://arxiv.org/abs/2301.12118v1
# Physics-informed Neural Network: The Effect of Reparameterization in Solving Differential Equations

###### Abstract

Differential equations are used to model and predict the behaviour of complex systems in a wide range of fields, and the ability to solve them is an important asset for understanding and predicting the behaviour of these systems. Complicated physics mostly involves difficult differential equations, which are hard to solve analytically. In recent years, physics-informed neural networks have been shown to perform very well in solving systems with various differential equations. The main ways to approximate differential equations are through penalty function and reparameterization. Most researchers use penalty functions rather than reparameterization due to the complexity of implementing reparameterization. In this study, we quantitatively compare physics-informed neural network models with and without reparameterization using the approximation error. The performance of reparameterization is demonstrated based on two benchmark mechanical engineering problems, a one-dimensional bar problem and a two-dimensional bending beam problem. Our results show that when dealing with complex differential equations, applying reparameterization results in a lower approximation error.

## 1 Introduction

Differential equations are important because they can be used to model a wide variety of phenomena that occur in the physical world, such as the movement of fluids, the behaviour of financial markets, the growth of populations, and the electrical current in a circuit. By solving a differential equation, we can find out how a system will behave over time and make predictions about its future behaviour. This can be useful in a wide range of fields, including engineering, physics, economics, and biology. There exist two ways of solving differential equations: analytically and numerically. Analytical solutions give us an exact equation, but there are many differential equations whose exact solutions cannot be found or are too hard to find. Numerical methods allow us to approximate a solution, but cannot give us an exact formula. Recent developments in machine learning have led to neural networks (NNs), which are universal approximators: a feedforward neural network with one hidden layer can approximate any continuous function arbitrarily well if there are enough hidden units [1]. Therefore, we can use NNs to approximate the solution of a differential equation.

### Origins

The first paper outlining a method to approximate exact solutions for both ordinary differential equations (ODEs) and partial differential equations (PDEs) was published in 1997. In it, the authors converted a second-order differential equation (DE) into an optimization problem where the original function is approximated by adjusting the parameters of a neural network and using automatic differentiation to find the derivatives of the function [2].

### Physics-informed Neural Network

After about two decades, Raissi et al. [3] introduced the term "physics-informed neural network" (PINN) and described a method for using deep neural networks to solve DEs. Since most natural phenomena and physics are governed by DEs, applying a PINN can significantly improve the accuracy of the model. By incorporating the laws of physics into the unsupervised learning process, PINN can decrease or even eliminate the need for training data.
In many fields of research, obtaining training data is difficult due to expensive experiments or high-fidelity simulations; PINN can solve this problem. Benefiting from this, many researchers have proposed various techniques based on PINN for solving different problems, such as in fluid dynamics and electricity transmission [4]. ### Current Research In the field of mechanical engineering, PINN shows great performance in predicting the nonlinear behaviour of various structures. The researchers discussed below applied PINN by adding penalty terms instead of reparameterization, for instance, by minimizing the potential energy of a structural system. Without any supervised learning phase, the trained unsupervised NN framework shows a high level of consistency compared to the analytical results in problems such as bending a beam in 2D and 3D geometry [5], and truss structures [6]. In addition to minimizing potential energy, the integrated physics knowledge can be engineered for constraints and boundary conditions, which has been demonstrated in modelling a bending plate [7] and a 3D beam with different materials [8]. One can also incorporate this technique into other types of NNs such as recurrent neural networks (RNNs), where multiple long short-term memory networks (LSTMs) are utilized to sequentially learn different time-dependent features [9]. Other works have investigated the effect of reparameterization. For instance, Zhu et al. [10] compare the performance of different reparameterization schemes in PINNs and discuss the trade-offs between accuracy and computational efficiency. The authors present a series of numerical examples to demonstrate the effectiveness of different reparameterization schemes and provide guidelines for choosing the best scheme for a given problem. In addition, some scholars present a detailed analysis of the effect of reparameterization on the accuracy and stability of PINN solutions, such as Tran et al. [11] and Zhang et al. [12], who discuss the benefits and limitations of different reparameterization techniques and demonstrate their effectiveness through different examples. ### Problem and Contribution Comparing reparameterized and non-reparameterized versions of a neural network for solving DEs is an important problem because the way that people embed physics (handle constraints) is through reparameterization and penalty functions. However, most works in the literature use only one method or the other, as discussed in Section 1.3, and never apply both methods and compare the results for the DEs they are solving. The problem we will be solving is to see if reparameterization of a neural network can lower the approximation error compared to a benchmark case of not reparameterizing the network (using only penalty functions). Therefore, our contribution is to show whether a reparameterized version of a neural network reduces the approximation error when solving mechanical engineering DEs using neural networks. This contribution will give greater insight into where reparameterized models perform better than non-reparameterized models and vice versa. This will help researchers make smarter decisions as to which method can maximize their approximation accuracy. ## 2 Methods and Background Information In this section, we introduce some background information and our methods for solving DEs. We start by explaining the penalty function and reparameterization methods.
Then we explain the finite differences method (FDM) for computing a numerical solution to DEs. Finally, we describe the PINN we will be using for our tests. ### Penalty Function and Reparameterization We will use a second-order differential equation as an example. Given a differential equation of the form \[G(\vec{x},\Psi(\vec{x}),\nabla\Psi(\vec{x}),\nabla^{2}\Psi(\vec{x}))=0,\vec{x}\in D \tag{1}\] with certain boundary conditions (BC), where \(\vec{x}=(x_{1},x_{2},...,x_{n})\in\mathbb{R}^{n}\), \(D\subset\mathbb{R}^{n}\) and \(\Psi(\vec{x})\) is the function to be approximated [2]. Assume \(F(\vec{x})\) is the output of a neural network with \(\vec{x}\) as an input. The prediction \(F_{i}(x_{i})\) for input \(x_{i}\) of a neural network can be written as a function of the form \[F_{i}(x_{i})=v^{T}(h_{n}\circ h_{n-1}\circ\cdots\circ h(Wx_{i})) \tag{2}\] Then we can easily find the derivatives using automatic differentiation. Next, we let \(A(\vec{x})\) measure the violation of the boundary conditions. If \(F(\vec{b}_{1})=\vec{c}_{1},...,F(\vec{b}_{n})=\vec{c}_{n}\) are the boundary conditions, with the values at \(\vec{b}_{i}\) set to constants \(\vec{c}_{i}\), then \(A(\vec{x})\) is of the form \[A(\vec{x})=\left\|F(\vec{b}_{1})-\vec{c}_{1}\right\|+\left\|F(\vec{b}_{2})-\vec{c}_{2}\right\|+...+\left\|F(\vec{b}_{n})-\vec{c}_{n}\right\| \tag{3}\] Then we can approximate \(\Psi(\vec{x})\) by minimizing the following loss function \[Loss=A(\vec{x})+\left\|G(\vec{x},F(\vec{x}),\nabla F(\vec{x}),\nabla^{2}F(\vec{x}))\right\| \tag{4}\] This is called the penalty function method, where the BCs are handled by adding penalty terms weighted by a penalty coefficient \(\lambda\). In short, the boundary conditions are not enforced with this method; rather, the NN must minimize \(A(\vec{x})\) such that \(F(\vec{b}_{i})\) is as close to \(\vec{c}_{i}\) as possible. An alternative way to handle BCs is by reparameterization, which explicitly forces the NN to satisfy the BCs by modifying the output representation of the NN. Instead of having \(F(\vec{x})\) be the approximator of \(\Psi(\vec{x})\), the reparameterized output becomes: \[K(\vec{x})=B(x)F(\vec{x}) \tag{5}\] where \(B(x)\) is a function of \(x\) chosen so that the reparameterized function \(K(\vec{x})\) naturally satisfies the BCs: \[\left\|K(\vec{b}_{1})-\vec{c}_{1}\right\|=\left\|K(\vec{b}_{2})-\vec{c}_{2}\right\|=...=\left\|K(\vec{b}_{n})-\vec{c}_{n}\right\|=0 \tag{6}\] ### Finite Differences (FDM) To obtain the higher-order derivatives for the numerical output, we use finite differences in our NN learning process. Since the problems considered (see Section 3) are static problems, only the spatial domain is discretized. Solving a set of equations based on the Taylor expansion and ignoring the high-order terms, the formulas for calculating the 1st, 2nd and 4th-order derivatives are shown here: \[\frac{dF}{dx}=\frac{F(x+h)-F(x-h)}{2h} \tag{7}\] \[\frac{d^{2}F}{dx^{2}}=\frac{F(x+h)-2F(x)+F(x-h)}{h^{2}} \tag{8}\] \[\frac{d^{4}F}{dx^{4}}=\frac{F(x+2h)-4F(x+h)+6F(x)-4F(x-h)+F(x-2h)}{h^{4}} \tag{9}\]
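To make the stencils in Eqs. (7)-(9) concrete, here is a minimal NumPy sketch that implements them on a uniform grid and checks the second-derivative stencil against \(F(x)=\sin(x)\); the function names, grid size, and test function are our own illustrative choices, not part of the original study.

```python
import numpy as np

# Central-difference stencils of Eqs. (7)-(9) on a uniform grid with spacing h;
# each helper returns values at the interior points where the stencil fits.
def d1(F, h):
    # First derivative, Eq. (7): (F(x+h) - F(x-h)) / (2h)
    return (F[2:] - F[:-2]) / (2 * h)

def d2(F, h):
    # Second derivative, Eq. (8): (F(x+h) - 2F(x) + F(x-h)) / h^2
    return (F[2:] - 2 * F[1:-1] + F[:-2]) / h**2

def d4(F, h):
    # Fourth derivative, Eq. (9), using two neighbours on each side
    return (F[4:] - 4 * F[3:-1] + 6 * F[2:-2] - 4 * F[1:-3] + F[:-4]) / h**4

x = np.linspace(0.0, 1.0, 201)
h = x[1] - x[0]
F = np.sin(x)
# d^2/dx^2 sin(x) = -sin(x), so this error should be on the order of h^2
print(np.max(np.abs(d2(F, h) + np.sin(x[1:-1]))))
```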
### Physics-informed Neural Network Fig. 1 exhibits the general schematic of the PINN that we used to solve our DEs. The artificial neural network (ANN) is applied to approximate the mapping between \(x\) and \(u\) or \(w\), depending on the problem. A differential operator is employed to numerically calculate the derivatives of the output. The BCs are handled through penalty functions, reparameterization, or both. In this project, the NN is composed of two hidden layers with 128 neurons in each layer. The training is performed using Adam optimization with an initial learning rate of 0.001. Sigmoid and ReLU activation functions are applied to the hidden and output layers, respectively. For the accuracy of the comparison, the same hyperparameters of the neural network are applied to all test cases. ## 3 Problem Formulation In this section, we present two benchmark mechanical problems to demonstrate the effect of reparameterization and penalty functions. The implementation details of both approaches for the two case studies will be presented. ### 1D Bar problem The first case study is a 1D bar problem, as described in Fig. 2. The left side of the bar is fixed to the wall, while the right side is free. A non-uniformly distributed load \(f(x)=x\) is applied through the central axis. An external concentrated force \(P\) is employed at the right side of the bar. Assume the length of the bar is \(L\), the cross-sectional area is \(A\) and the elastic Young's modulus is \(E\) (a constant representing how easily a material can bend or stretch). The goal is to calculate the displacement of the bar along the x-axis. Based on the theory of axially loaded bars, the governing equation for this problem can be represented as: \[-\frac{d}{dx}(EA\frac{du}{dx})=f(x) \tag{10}\] where the boundary conditions are \(u(x=0)=0\) and \(u^{\prime}(x=L)=0\), respectively. The first boundary condition states that there is zero displacement at the fixed left end, while the second states that the derivative of the displacement (the strain) vanishes at the free right end. Figure 1: The general form of the PINN Figure 2: 1D Bar **Case 1: 1D Bar Reparameterized** To naturally satisfy both boundary conditions, the reparameterized NN estimator of the first problem can be represented as: \[U(x)=xe^{-x}NN(x)\text{; where NN(x) is the neural network} \tag{11}\] Instead of finding the mapping between \(x\) and \(u\), an additional term \(xe^{-x}\) is applied so that the reparameterized function \(U(x)\) has the following properties: \(U(x=0)=0\) and \(U^{\prime}(x=L)=0\). According to this formulation, the NN loss function can be defined as: \[Loss(x)=\left\|\frac{d^{2}U(x)}{dx^{2}}+\frac{f(x)}{EA}\right\|_{2} \tag{12}\] where \(x\) is a vector of grid points uniformly sampled along the bar from \(0\) to \(L\). **Case 2: 1D Bar Penalty Function** As stated in Section 2, a penalty function is the most widely applied technique to handle boundary conditions of differential equations. Since the first case study involves two BCs, two penalty terms will be included in the loss function. Note that the reparameterized term is not considered in this scenario, thus the output \(U\) is calculated based on \(NN(x)\): \[U(x)=NN(x) \tag{13}\] The loss function based on the penalty function technique can be represented as: \[Loss(x)=\left\|\frac{d^{2}NN(x)}{dx^{2}}+\frac{f(x)}{EA}\right\|_{2}+\lambda_{1}\left\|NN(x=0)\right\|_{2}+\lambda_{2}\left\|\left.\frac{dNN(x)}{dx}\right|_{x=L}-P\right\|_{2} \tag{14}\] where \(\lambda_{1}\) and \(\lambda_{2}\) represent the penalty coefficients. They have been set to 100 for this project based on preliminary investigations.
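As an illustration of Case 1, the following PyTorch sketch trains a small network with the reparameterized output of Eq. (11) and the residual loss of Eq. (12), using the second-derivative stencil of Eq. (8); it assumes \(E=A=L=1\) and a plain linear output layer, so it is a simplified stand-in for the authors' exact setup rather than a reproduction of it.

```python
import torch

# Two hidden layers of 128 sigmoid units, as in the paper; linear output here.
net = torch.nn.Sequential(
    torch.nn.Linear(1, 128), torch.nn.Sigmoid(),
    torch.nn.Linear(128, 128), torch.nn.Sigmoid(),
    torch.nn.Linear(128, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

n, L = 101, 1.0
h = L / (n - 1)
x = torch.linspace(0.0, L, n).unsqueeze(1)   # uniform grid along the bar

def U(x):
    # Reparameterized output U(x) = x * exp(-x) * NN(x), Eq. (11)
    return x * torch.exp(-x) * net(x)

for step in range(5000):
    opt.zero_grad()
    u = U(x)
    # Second derivative at interior grid points via Eq. (8)
    d2u = (u[2:] - 2 * u[1:-1] + u[:-2]) / h**2
    f = x[1:-1]                              # distributed load f(x) = x
    loss = torch.mean((d2u + f)**2)          # residual of Eq. (12) with EA = 1
    loss.backward()
    opt.step()
```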
### 2D Bending Beam Problem The second case study is to estimate the vertical deflection (the transverse displacement of the beam under load) of a beam under non-uniformly distributed loading, as seen in Fig. 3. Similar to the 1D bar case study, we assume the length of the beam is \(L\), the Young's modulus is \(E\), the second moment of inertia of the beam cross-section is \(I\) (a measure of how resistant a cross-section is to bending), and the non-uniformly distributed loading is \(f(x)=\sin(x)\). Both sides of the beam are simply supported, so the beam is free to rotate, and the bending moment at both ends is zero. Based on the classical beam theory, the governing equation and the boundary conditions of this problem are: \[EI\frac{d^{4}w(x)}{dx^{4}}+f(x)=0 \tag{15}\] Subject to: \[w(x=0)=0;w(x=L)=0;w^{\prime\prime}(x=0)=0;w^{\prime\prime}(x=L)=0 \tag{16}\] Figure 3: 2D Bending Beam **Case 3: 2D Beam Reparameterization** In this 2D bending beam case, it is difficult to fully satisfy all the BCs using reparameterization, due to the coexistence of the zeroth-order and second-order boundary conditions. Thus, part of the BCs will be handled through penalty functions. The partially reparameterized NN approximator can be represented as: \[W(x)=\sin\left(\pi\frac{x}{L}\right)NN(x) \tag{17}\] The corresponding loss function in this case is: \[Loss(x)=\left\|\frac{d^{4}W(x)}{dx^{4}}+\frac{f(x)}{EI}\right\|_{2}+\lambda_{1}\left\|\left.\frac{d^{2}W(x)}{dx^{2}}\right|_{x=0}\right\|_{2}+\lambda_{2}\left\|\left.\frac{d^{2}W(x)}{dx^{2}}\right|_{x=L}\right\|_{2} \tag{18}\] where \(W(x)\) is the vertical deflection estimation. The first term in Eq. 18 shows the residual of the governing Eq. 15, and the last two terms represent the residuals of the second-order BCs in Eq. 16. **Case 4: 2D Beam Penalty Function** Similar to Case 2, no reparameterization will be considered in this case and the loss function is defined based on the residuals of the governing equation and BCs. Therefore, the loss function of this 2D beam problem without reparameterization can be represented as: \[Loss(x)=\left\|\frac{d^{4}NN(x)}{dx^{4}}+\frac{f(x)}{EI}\right\|_{2}+\lambda_{1}\left\|\left.NN(x)\right|_{x=0,x=L}\right\|_{2}+\lambda_{2}\left\|\left.\frac{d^{2}NN(x)}{dx^{2}}\right|_{x=0,x=L}\right\|_{2} \tag{19}\] where \(NN(x)\) is the vertical deflection estimation and the last two terms in Eq. 19 include all the BCs in Eq. 16.
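A corresponding sketch of the Case 3 loss, Eqs. (17)-(18), is shown below; it assumes \(E=I=L=1\), approximates the end-point second derivatives with one-sided versions of the Eq. (8) stencil, and uses \(\lambda_{1}=\lambda_{2}=100\) as in the paper. The helper name `beam_loss` and the mean-squared form of the residual are our own choices.

```python
import torch

def beam_loss(net, x, h, EI=1.0, lam1=100.0, lam2=100.0):
    # Partial reparameterization W(x) = sin(pi x / L) NN(x), Eq. (17), with
    # L = 1, so W(0) = W(L) = 0 holds by construction.
    W = torch.sin(torch.pi * x) * net(x)
    # Fourth derivative at interior points via the stencil of Eq. (9)
    d4 = (W[4:] - 4 * W[3:-1] + 6 * W[2:-2] - 4 * W[1:-3] + W[:-4]) / h**4
    # Second derivatives at the two ends (first-order one-sided stencils)
    d2_0 = (W[0] - 2 * W[1] + W[2]) / h**2
    d2_L = (W[-1] - 2 * W[-2] + W[-3]) / h**2
    f = torch.sin(x[2:-2])                   # distributed load f(x) = sin(x)
    residual = torch.mean((d4 + f / EI)**2)  # governing Eq. (15)
    # Penalty terms for the second-order BCs w''(0) = w''(L) = 0, Eq. (18)
    return residual + lam1 * d2_0.pow(2).mean() + lam2 * d2_L.pow(2).mean()
```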
## 4 Results According to the loss functions defined in Section 3, the NNs have been trained and the results are shown below. Fig. 4(a) and Fig. 4(b) represent the approximated displacement versus the analytical result at each node point for Case 1 and Case 2, respectively. It can be observed that the PINN shows good performance in both cases. The approximation errors in Cases 1 and 2 are 0.8063% and 0.8136%, respectively. This indicates that the difference between the reparameterization and the penalty function method is not significant in the 1D bar case. Since only the first and second-order derivatives are included in the governing equation and BCs, it is easy for the neural network to estimate the target equations. However, when we remove the reparameterization in the 2D beam problem, the approximation noticeably deviates from the exact solution. Figure 4: 1D Bar Test Results Similar to Fig. 4(a) and Fig. 4(b), Fig. 5(a) and Fig. 5(b) show that the approximated vertical deflection is very close to the exact solution. It can be easily noticed that the result in Case 3 shows zero deviation at both ends of the beam (\(x=0\) and \(x=L\)). On the other hand, the penalty function does not manage to set the BCs to exactly zero and therefore shows small deviations at both ends. This also affects the intermediate approximation of the results: the first and last 30% of the approximated curve in Fig. 5(b) show a large error. This is extremely important in real-life physics and engineering, where errors in initial conditions may be amplified by further steps. The approximation accuracy can be significantly improved even though the output is only partially reparameterized. The importance of reparameterization can also be observed from the approximation errors in both cases: the errors are 2.7813% and 12.9187% for Cases 3 and 4, respectively. In the case of the beam problem, the errors may come from the second-order BCs and the fourth-order governing equation. In this case, FDM may not be suitable for dealing with higher-order derivatives, which can be investigated in future studies. Figure 5: 2D Beam Test Results ## 5 Conclusion In this paper, a physics-informed neural network is applied to solve two mechanical engineering benchmark problems involving different differential equations. The effect of reparameterization has been discussed based on the definition of the loss functions for both cases. The hyperparameters of the PINN are the same for all test cases. Based on the results in Section 4, we can conclude that: 1. Reparameterization shows a similar level of accuracy to the penalty function when the underlying DE is simple, evident in Cases 1 and 2; 2. When dealing with difficult DEs, reparameterization dominates the penalty function in terms of approximation accuracy, evident in comparing Cases 1 and 2 with Cases 3 and 4; 3. Even partial reparameterization can significantly improve the PINN's overall performance, as evident in Case 3. A strength of our contribution is that we filled a research gap: few researchers have directly compared the effect of reparameterization and penalty functions on the approximated result. Some weaknesses of our findings are the use of only ODEs and a lack of variety in our case studies. Our case studies only use ODEs, so our findings do not encompass PDEs. In addition, we only used two DEs, which is not a very large sample size of DEs. With so few DEs, our results are biased towards a small subset of DEs. Given more time, we would like to address our contribution's weakness of not testing PDEs and use many more DEs. This would provide more evidence for or against our findings and make them more reliable. Finally, future work can focus on how to incorporate more accurate differential operators into PINN. In addition, more research needs to go into creating efficient ways to perform reparameterization according to different BCs: each NN needs to be reparameterized using a different function based on its BCs, and there is no systematic way to find a function that will ensure the BCs are satisfied.
2305.17597
On Ramanujan-Fourier expansions
We heuristically study the shifted convolution $\sum_{n\le X} \tau_k(n) \tau_\ell(n+h)$ using a normalized version of Ramanujan-Fourier expansions for $\tau_k(n)$ and verify they produce the expected answer.
David T. Nguyen
2023-05-28T00:00:56Z
http://arxiv.org/abs/2305.17597v1
# On Ramanujan-Fourier expansions ###### Abstract We heuristically study the shifted convolution \(\sum_{n\leq X}\tau_{k}(n)\tau_{\ell}(n+h)\) using a normalized version of Ramanujan-Fourier (RF) expansions for \(\tau_{k}(n)\) and verify that they produce the expected answer. ### Setup For fixed integers \(k,\ell\geq 2\) and a fixed shift \(h\neq 0\), the expected asymptotic for the shifted convolution is \[\sum_{n\leq X}\tau_{k}(n)\tau_{\ell}(n+h)\sim C_{k,\ell}f_{k,\ell}(h)\frac{X(\log X)^{k+\ell-2}}{(k-1)!(\ell-1)!}, \tag{0.1}\] where the constant is \[C_{k,\ell}=\prod_{p}\left(\left(1-\frac{1}{p}\right)^{k-1}+\left(1-\frac{1}{p}\right)^{\ell-1}-\left(1-\frac{1}{p}\right)^{k+\ell-2}\right) \tag{0.2}\] and \(f_{k,\ell}(h)\) is the multiplicative local factor given on prime powers by \[f_{k,\ell}(p^{\nu_{p}(h)})=\frac{1+\sum_{1\leq j\leq\nu_{p}(h)}(p^{j}-p^{j-1})\frac{A_{k,\ell}(p^{j})}{p^{2j}}-p^{\nu_{p}(h)}\frac{A_{k,\ell}(p^{\nu_{p}(h)+1})}{p^{2(\nu_{p}(h)+1)}}}{\left(1-\frac{1}{p}\right)^{k-1}+\left(1-\frac{1}{p}\right)^{\ell-1}-\left(1-\frac{1}{p}\right)^{k+\ell-2}}, \tag{0.3}\] with \[A_{k,\ell}(p^{j})=\left(1-\frac{1}{p}\right)^{k+\ell-2}\sum_{\alpha\geq j}\binom{k+\alpha-2}{k-2}p^{j-\alpha}\sum_{\beta\geq j}\binom{\ell+\beta-2}{\ell-2}p^{j-\beta}. \tag{0.4}\] ### Definition Given an arithmetic function \(f(n)\), we say \(f(n)\) has an RF expansion if, for each \(n\), it can be written in the form \[f(n)=\sum_{q=1}^{\infty}\hat{f}(q)c_{q}(n)\] for some coefficients
\(\hat{f}(q)\) that make the above series convergent. Here \(c_{q}(n)\) is the Ramanujan sum. ### Lemmas We need the following two lemmas. **Lemma 1** (Lucht (1995) [3][Prop. 3, p. 40]).: _One has the following conditionally convergent RF expansion:_ \[\tau_{k}(n)=\sum_{q=1}^{\infty}\hat{\tau_{k}}(q)c_{q}(n), \tag{0.5}\] _where_ \[\hat{\tau_{k}}(q)=\frac{(-1)^{k-1}}{(k-1)!}\frac{\log^{k-1}(q)}{q}\prod_{p|q}\left(1-\frac{1}{p}\right)^{k-1}\sum_{\alpha\geq\nu_{p}(q)}\binom{k+\alpha-2}{k-2}p^{\nu_{p}(q)-\alpha}. \tag{0.6}\] **Remark 1**.: _It seems that (0.6) corrects a small sign error in the original stated result of Lucht (the \(-1\) in the exponent in [3][Prop. 3, p. 40] should not be there). We believe so because if we kept the sign from Lucht, it produces the wrong answer in (0.1) when compared to [4]._ **Lemma 2** (Carmichael (1932) [1]).: _We have the following orthogonality relation/correlation of two Ramanujan sums:_ \[\sum_{n\leq X}c_{q_{1}}(n)c_{q_{2}}(n+h)\sim\begin{cases}Xc_{q}(h),&\text{if }q_{1}=q_{2}=q,\\ 0,&\text{otherwise.}\end{cases} \tag{0.7}\] **Remark 2**.: _A finer asymptotic of the left side of (0.7) can be found in [2, Lemma 2, p. 694]._ ### Derivation of (0.1) via RF expansions By (0.5), we have \[\sum_{n\leq X}\tau_{k}(n)\tau_{\ell}(n+h)=\sum_{n\leq X}\sum_{q_{1}=1}^{\infty}\hat{\tau_{k}}(q_{1})c_{q_{1}}(n)\sum_{q_{2}=1}^{\infty}\hat{\tau_{\ell}}(q_{2})c_{q_{2}}(n+h).\] Bringing the \(n\)-sum inside, we get, by (0.7), \[\sum_{n\leq X}\tau_{k}(n)\tau_{\ell}(n+h)=\sum_{q_{1}=1}^{\infty}\hat{\tau_{k}}(q_{1})\sum_{q_{2}=1}^{\infty}\hat{\tau_{\ell}}(q_{2})\sum_{n\leq X}c_{q_{1}}(n)c_{q_{2}}(n+h)\sim X\sum_{q=1}^{\infty}\hat{\tau_{k}}(q)\hat{\tau_{\ell}}(q)c_{q}(h). \tag{0.8}\] Replacing \((-1)^{k}\log^{k-1}(q)\) by \(\log^{k-1}X\), the product \(\hat{\tau_{k}}(q)\hat{\tau_{\ell}}(q)\) becomes, by (0.6), \[\frac{(\log X)^{k+\ell-2}}{(k-1)!(\ell-1)!}\frac{1}{q^{2}}A_{k,\ell}(q), \tag{0.9}\] where \[A_{k,\ell}(q)=\prod_{p\mid q}\left(1-\frac{1}{p}\right)^{k+\ell-2}\sum_{\alpha\geq\nu_{p}(q)}\binom{k+\alpha-2}{k-2}p^{\nu_{p}(q)-\alpha}\sum_{\beta\geq\nu_{p}(q)}\binom{\ell+\beta-2}{\ell-2}p^{\nu_{p}(q)-\beta}. \tag{0.10}\] In particular, we have \[A_{k,\ell}(p)=p^{2}\left(1-\left(1-\frac{1}{p}\right)^{k-1}\right)\left(1-\left(1-\frac{1}{p}\right)^{\ell-1}\right). \tag{0.11}\] Thus, by (0.9), (0.8) becomes \[\sum_{n\leq X}\tau_{k}(n)\tau_{\ell}(n+h)\sim X\frac{(\log X)^{k+\ell-2}}{(k-1)!(\ell-1)!}B_{k,\ell}(h),\] where \[B_{k,\ell}(h)=\sum_{q=1}^{\infty}\frac{c_{q}(h)}{q^{2}}A_{k,\ell}(q). \tag{0.12}\] Going to Euler products, we have \[B_{k,\ell}(h)=\prod_{p\mid h}\left(1+\sum_{j=1}^{\infty}\frac{c_{p^{j}}(h)}{p^{2j}}A_{k,\ell}(p^{j})\right)\prod_{p\nmid h}\left(1+\sum_{j=1}^{\infty}\frac{c_{p^{j}}(h)}{p^{2j}}A_{k,\ell}(p^{j})\right),\] say. If \(p\nmid h\), then \[c_{p^{j}}(h)=\begin{cases}-1,&\text{if $j=1$},\\ 0,&\text{if $j>1$},\end{cases}\] and, thus, by (0.11), \[\prod_{p\nmid h}\left(1+\sum_{j=1}^{\infty}\frac{c_{p^{j}}(h)}{p^{2j}}A_{k,\ell}(p^{j})\right)=\prod_{p\nmid h}\left(1-\frac{1}{p^{2}}A_{k,\ell}(p)\right)\] \[\qquad=\prod_{p\nmid h}\left(1-\left(1-\left(1-\frac{1}{p}\right)^{k-1}\right)\left(1-\left(1-\frac{1}{p}\right)^{\ell-1}\right)\right).
\tag{0.13}\] On the other hand, if \(p\mid h\), then \[c_{p^{j}}(h)=\begin{cases}p^{j}-p^{j-1},&\text{if $j\leq\nu_{p}(h)$},\\ -p^{\nu_{p}(h)},&\text{if $j=\nu_{p}(h)+1$},\\ 0,&\text{if $j>\nu_{p}(h)+1$}.\end{cases}\] Thus, by the above, \[\prod_{p|h}\left(1+\sum_{j=1}^{\infty}c_{p^{j}}(h)\frac{A_{k,\ell}(p^{j})}{p^{2j}}\right)=\prod_{p|h}\left(1+\sum_{1\leq j\leq\nu_{p}(h)}(p^{j}-p^{j-1})\frac{A_{k,\ell}(p^{j})}{p^{2j}}-p^{\nu_{p}(h)}\frac{A_{k,\ell}(p^{\nu_{p}(h)+1})}{p^{2(\nu_{p}(h)+1)}}\right). \tag{0.14}\] Hence, by (0.13) and (0.14), \[B_{k,\ell}(h)=C_{k,\ell}f_{k,\ell}(h). \tag{0.15}\] This completes the heuristic computation. **Remark 3**.: _Equations (0.12), (0.10), and (0.15) provide a new arithmetic interpretation of the local factor: One has_ \[C_{k,\ell}f_{k,\ell}(h)=\sum_{q=1}^{\infty}\frac{c_{q}(h)}{q^{2}}A_{k,\ell}(q),\] _where \(A_{k,\ell}(q)\) are certain normalized RF coefficients of \(\tau_{k}(n)\)._ **Remark 4**.: _The operations "replacing \((-1)^{k}\log^{k-1}(q)\) by \(\log^{k-1}X\)" and interchanging the order of summation of conditionally convergent series are obviously not rigorous, and could even be wrong. But we do them because they produce the right answer._ ### Verifying (0.1) We match our predictions (0.2) and (0.3) with the following forms from Ng and Thom [4][equations (1.6) & (1.27)], who had previously computed these quantities. **Lemma 3** ([4] (1.6) p. 98).: (0.16) \[C_{k,\ell}=\prod_{p}\left(\left(1-\frac{1}{p}\right)^{k-1}+\left(1-\frac{1}{p}\right)^{\ell-1}-\left(1-\frac{1}{p}\right)^{k+\ell-2}\right).\] For the local factor \(f_{k,\ell}\), Ng and Thom gave five equivalent expressions [4][cf. (1.7), (1.8), (1.27), (1.28), (4.6)] for this quantity. We chose the following expression to make the verification straightforward. **Lemma 4** ([4] (1.27) p. 102).: (0.17) \[f_{k,\ell}(p^{\alpha})=\frac{\sum_{j=0}^{\alpha}\left(\frac{\sigma_{k-1}(p^{j},1)\sigma_{\ell-1}(p^{j},1)}{p^{j}}-\frac{\sigma_{k-1}(p^{j+1},1)\sigma_{\ell-1}(p^{j+1},1)}{p^{j+2}}\right)}{\left(1-\frac{1}{p}\right)^{k-1}+\left(1-\frac{1}{p}\right)^{\ell-1}-\left(1-\frac{1}{p}\right)^{k+\ell-2}},\] _where_ \[\sigma_{k}(p^{j},s)=\frac{\sum_{i=0}^{\infty}\frac{\tau_{k}(p^{j+i})}{p^{is}}}{\sum_{i=0}^{\infty}\frac{\tau_{k}(p^{i})}{p^{is}}}=\left(1-\frac{1}{p}\right)^{k}\sum_{i=0}^{\infty}\frac{\tau_{k}(p^{j+i})}{p^{is}}. \tag{0.18}\] We now come to the sole theorem of this note. **Theorem 5**.: _The quantity \(C_{k,\ell}\) in (0.2) matches with (0.16), and the same is true for \(f_{k,\ell}\) from (0.3) and (0.17)._ Proof.: It is clear that \(C_{k,\ell}\) in (0.2) is exactly the same as (0.16). The function \(f_{k,\ell}(h)\) in (0.3) is multiplicative in \(h\), so it suffices to check for prime powers \(h=p^{\alpha}\), where \(\alpha=\nu_{p}(h)\). The factors appearing in the denominators of both (0.3) and (0.17) are identical. This observation motivated the choice (0.17). The factors \(A_{k,\ell}\) and \(\sigma_{k}\) in the numerators are related via the following identity.
**Lemma 6**.: _We have_ \[A_{k,\ell}(p^{j})=\sigma_{k-1}(p^{j},1)\sigma_{\ell-1}(p^{j},1), \tag{0.19}\] _where \(A_{k,\ell}(p^{j})\) and \(\sigma_{k-1}(p^{j},1)\) are given in (0.4) and (0.18), respectively._ Proof.: This identity follows from a simple change of variables and the identity \[\tau_{k}(p^{j})=\binom{k+j-1}{k-1}.\] As such, \[\sum_{\alpha\geq j}\binom{k+\alpha-2}{k-2}p^{j-\alpha}=\sum_{i=0}^{\infty}\frac{\tau_{k-1}(p^{j+i})}{p^{i}}.\] Thus, \[A_{k,\ell}(p^{j})=\left(1-\frac{1}{p}\right)^{k-1}\sum_{i=0}^{\infty}\frac{\tau_{k-1}(p^{j+i})}{p^{i}}\left(1-\frac{1}{p}\right)^{\ell-1}\sum_{i=0}^{\infty}\frac{\tau_{\ell-1}(p^{j+i})}{p^{i}}=\sigma_{k-1}(p^{j},1)\sigma_{\ell-1}(p^{j},1).\] Next, the \(j=0\) term of the first term in the numerator of (0.17) is equal to \[\frac{\sigma_{k-1}(1,1)\sigma_{\ell-1}(1,1)}{1}=1.\] The sum from \(j=1\) to \(\alpha\) of the first term in the numerator of (0.17) is equal to, by (0.19), \[\sum_{j=1}^{\alpha}\frac{\sigma_{k-1}(p^{j},1)\sigma_{\ell-1}(p^{j},1)}{p^{j}}=\sum_{1\leq j\leq\nu_{p}(h)}p^{j}\frac{A_{k,\ell}(p^{j})}{p^{2j}}.\] The \(j=\alpha\) term of the second term in the numerator of (0.17) is equal to \[-\frac{\sigma_{k-1}(p^{\alpha+1},1)\sigma_{\ell-1}(p^{\alpha+1},1)}{p^{\alpha+2}}=-\frac{A_{k,\ell}(p^{\alpha+1})}{p^{\alpha+2}}=-p^{\nu_{p}(h)}\frac{A_{k,\ell}(p^{\nu_{p}(h)+1})}{p^{2(\nu_{p}(h)+1)}},\] by (0.19) and \(\alpha=\nu_{p}(h)\). The sum from \(j=0\) to \(\alpha-1\) of the second term in the numerator of (0.17) is equal to \[-\sum_{j=0}^{\alpha-1}\frac{\sigma_{k-1}(p^{j+1},1)\sigma_{\ell-1}(p^{j+1},1)}{p^{j+2}}=-\sum_{1\leq j\leq\nu_{p}(h)}p^{j-1}\frac{A_{k,\ell}(p^{j})}{p^{2j}},\] after another change of variables and an application of (0.19). Thus, by the above four identities, \[\sum_{j=0}^{\alpha}\left(\frac{\sigma_{k-1}(p^{j},1)\sigma_{\ell-1}(p^{j},1)}{p^{j}}-\frac{\sigma_{k-1}(p^{j+1},1)\sigma_{\ell-1}(p^{j+1},1)}{p^{j+2}}\right)\] \[=1+\sum_{1\leq j\leq\nu_{p}(h)}(p^{j}-p^{j-1})\frac{A_{k,\ell}(p^{j})}{p^{2j}}-p^{\nu_{p}(h)}\frac{A_{k,\ell}(p^{\nu_{p}(h)+1})}{p^{2(\nu_{p}(h)+1)}}.\] Thus, \(f_{k,\ell}(p^{\nu_{p}(h)})\) with \(f_{k,\ell}\) in (0.3) equals \(f_{k,\ell}(p^{\alpha})\) in (0.17). ### Limitations and extensions It seems that the theory of RF expansions is unable to "see" the powers of \(\log\)'s in (0.1), and so these factors need to be put in by hand. This is somewhat, though not entirely, analogous to the situation where random matrix theory can predict parts of the leading order main terms in moments and ratios of \(L\)-functions, and the remaining factors need to be inserted separately. For an extension of the method, it would be very interesting to compute/predict the leading order main term of the triple convolution \[\sum_{n\leq X}\tau_{k_{1}}(n)\tau_{k_{2}}(n+h_{1})\tau_{k_{3}}(n+h_{2}).\]
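As a quick numerical sanity check of Lemma 2 (not part of the original note), the following script computes Ramanujan sums via the standard identity \(c_{q}(n)=\sum_{d\mid\gcd(q,n)}\mu(q/d)\,d\) and compares \(\sum_{n\leq X}c_{q_{1}}(n)c_{q_{2}}(n+h)\) against \(Xc_{q}(h)\) for a few small moduli.

```python
from math import gcd

def mobius(n):
    # Moebius function mu(n) by trial factorization (fine for small n)
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0          # square factor => mu(n) = 0
            result = -result
        p += 1
    return -result if n > 1 else result

def c(q, n):
    # Ramanujan sum c_q(n) = sum over d | gcd(q, n) of mu(q / d) * d
    g = gcd(q, n)
    return sum(mobius(q // d) * d for d in range(1, g + 1) if g % d == 0)

X, h = 10**5, 6
for q1, q2 in [(4, 4), (9, 9), (4, 6)]:
    s = sum(c(q1, n) * c(q2, n + h) for n in range(1, X + 1))
    expected = X * c(q1, h) if q1 == q2 else 0
    print(f"q1={q1}, q2={q2}: sum={s}, expected~{expected}")
```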
2310.14228
Hierarchical Vector Quantized Transformer for Multi-class Unsupervised Anomaly Detection
Unsupervised image Anomaly Detection (UAD) aims to learn robust and discriminative representations of normal samples. While separate solutions per class entail expensive computation and limited generalizability, this paper focuses on building a unified framework for multiple classes. Under such a challenging setting, popular reconstruction-based networks with a continuous latent representation assumption always suffer from the "identical shortcut" issue, where both normal and abnormal samples can be well recovered and are difficult to distinguish. To address this pivotal issue, we propose a hierarchical vector quantized prototype-oriented Transformer under a probabilistic framework. First, instead of learning continuous representations, we preserve the typical normal patterns as discrete iconic prototypes, and confirm the importance of Vector Quantization in preventing the model from falling into the shortcut. The vector quantized iconic prototype is integrated into the Transformer for reconstruction, such that the abnormal data point is flipped to a normal data point. Second, we investigate an exquisite hierarchical framework to relieve the codebook collapse issue and replenish frail normal patterns. Third, a prototype-oriented optimal transport method is proposed to better regulate the prototypes and hierarchically evaluate the abnormal score. By evaluating on the MVTec-AD and VisA datasets, our model surpasses the state-of-the-art alternatives and possesses good interpretability. The code is available at https://github.com/RuiyingLu/HVQ-Trans.
Ruiying Lu, YuJie Wu, Long Tian, Dongsheng Wang, Bo Chen, Xiyang Liu, Ruimin Hu
2023-10-22T08:20:33Z
http://arxiv.org/abs/2310.14228v1
# Hierarchical Vector Quantized Transformer for Multi-class Unsupervised Anomaly Detection ###### Abstract Unsupervised image Anomaly Detection (UAD) aims to learn robust and discriminative representations of normal samples. While separate solutions per class entail expensive computation and limited generalizability, this paper focuses on building a unified framework for multiple classes. Under such a challenging setting, popular reconstruction-based networks with a continuous latent representation assumption always suffer from the "identical shortcut" issue, where both normal and abnormal samples can be well recovered and are difficult to distinguish. To address this pivotal issue, we propose a hierarchical vector quantized prototype-oriented Transformer under a probabilistic framework. First, instead of learning continuous representations, we preserve the typical normal patterns as discrete iconic prototypes, and confirm the importance of Vector Quantization in preventing the model from falling into the shortcut. The vector quantized iconic prototype is integrated into the Transformer for reconstruction, such that the abnormal data point is flipped to a normal data point. Second, we investigate an exquisite hierarchical framework to relieve the codebook collapse issue and replenish frail normal patterns. Third, a prototype-oriented optimal transport method is proposed to better regulate the prototypes and hierarchically evaluate the abnormal score. By evaluating on the MVTec-AD and VisA datasets, our model surpasses the state-of-the-art alternatives and possesses good interpretability. The code is available at [https://github.com/RuiyingLu/HVQ-Trans](https://github.com/RuiyingLu/HVQ-Trans). ## 1 Introduction Anomaly detection is an essential task with increasingly wide applications in various areas, such as video surveillance [1], industrial inspection [2], and medical image analysis [3]. Due to the scarcity of anomalous samples, unsupervised anomaly detection methods [4; 5; 6; 7] have gained wide attention by modeling the distribution of normal data only, and then identifying the samples that deviate from the normal profile as anomalies. Common approaches follow the one-for-one scheme [8; 9] by training separate models for different classes of objects, which is time- and memory-consuming in real applications and ill-suited to object classes with large intra-class diversity. Recently, a newly emerging one-for-all scheme [10] tries to use a unified model to detect anomalies from all the different object classes without any fine-tuning. Modeling high-dimensional data is notoriously challenging, and it becomes even more difficult to precisely capture the multi-class distribution in a unified model. Under the unsupervised setting, a powerful approach to modeling the data distribution follows the deep autoencoding frameworks, reckoning that a model well trained with normal data will always reconstruct normal patterns regardless of the defects present in the input data. Thus, it is generally assumed that the reconstruction error will be larger for anomalous inputs, making them distinguishable from the normal samples. However, this assumption may not always hold: sometimes the abnormal inputs can also be well reconstructed, which is known as the "identical shortcut" issue [9, 10, 11]. Intuitively, compared to working extremely hard to learn the joint distribution, returning a direct copy of the input disregarding its content appears as a far easier solution.
This phenomenon has been observed in existing research [4, 9]. Furthermore, in the unified case, the "identical shortcut" issue becomes even more severe as the distribution of multi-class data is more complex [10]. This motivates us to enhance the discriminability of the model when encountering normal and anomalous samples. Learning representations with continuous features has been the focus of many previous works [12, 13, 14]. However, these methods lack a reliable mechanism to encourage the model to induce large reconstruction errors on anomalies, so their performance is restricted by the under-designed representation of the latent space. In recent research, a branch of approaches [8, 9, 15] investigates memory-augmented networks for mitigating the "identical shortcut" issue of AEs. Those approaches augment the deep autoencoder with a memory module to record the normal patterns in the normal training data, manifesting in different forms such as a memory set in the latent space [9], a fixed Transformer value matrix in the attention layer [10], or a neighbourhood-aware patch-level memory bank [8]. These methods aim to obtain low reconstruction error for normal samples and high reconstruction error if the input is not similar to normal data, that is, an anomaly. The most relevant items in the memory are retrieved, and a weighted average of the related memory content is aggregated into the decoder for reconstruction. However, by recombining and weighted-averaging the discrete memory items, these methods fall back into an unknown continuous latent space which might be distorted. Intuitively, some anomalous regions cannot be reconstructed by the discrete latent memory but could be decoded from the unknown latent space. To intrinsically mitigate these problems, we aim to learn a representative and discriminative discrete latent space for anomaly detection. To preserve the typical normal patterns in the discrete latent space, we hope to successfully model critical features that usually span many dimensions in the normal data space, as opposed to focusing or spending capacity on noise and imperceptible details. Incorporating ideas from vector quantization (VQ), we model the discrete latent space as codebooks for each category, consisting of iconic prototypes learned from normal training data. During reconstruction, we replace the original encoding features with the nearest iconic prototypes, and then decode them with a VQ-based Transformer decoder to intensify the use of iconic prototypes. As a result, the abnormal data point is flipped to a normal data point, highlighted by large reconstruction errors, as shown in Fig. 1. However, the model may suffer from the codebook collapse issue [16, 17]: at some point during training, part of the latent codes in the codebook may no longer be used, and the modeling capacity is limited by the discrete representations, resulting in collapsed reconstructions [18]. Thus, we further investigate the hierarchical nature of images and propose a hierarchical VQ framework that merges fine-grained and abstract features to prevent codebook collapse, which also reduces the decoding search time and retains high inference speeds. In addition, most anomaly scoring methods are constrained to the observation space and can be fallible for complex data distributions.
Therefore, we introduce a hierarchical prototype-oriented optimal transport (OT) based optimization and anomaly scoring method to enhance the robustness and discriminability of our model on normal and anomalous samples. In summary, we carefully tailor a variational autoencoding framework for unsupervised anomaly detection, called the hierarchical vector quantized Transformer (HVQ-Trans). Our work contributes in the following ways: (1) We realize the learning of discrete normal representations by extracting prototypes and propose a VQ-based Transformer to address the "identical shortcut" issue by inducing large feature discrepancies for anomalies. (2) We develop a hierarchical VQ-based approach with a switching mechanism to overcome the "prototype collapse" problem and effectively use multi-level feature representations to maximize the nominal information available. (3) A hierarchical prototype-oriented learning and anomaly scoring method is developed to guide prototype learning and measure the feature-level anomaly score to robustly and accurately identify anomalies. (4) Extensive experiments demonstrate that our method achieves state-of-the-art performance for anomaly detection and localization, and possesses enhanced interpretability through prototype visualization. Figure 1: By replacing the continuous latent features with the normal iconic prototypes of the corresponding category, the normal regions are reconstructed as normal patterns (shown in yellow boxes), while the anomalies are also reconstructed as normal (shown in red boxes). ## 2 Methodology ### Overview Our proposed model is a Transformer-based reconstruction method that assumes normal and anomalous samples cannot be reconstructed with comparable quality. In contrast to other Transformer-based autoencoders [19] with continuous embeddings, we focus on compressing input images into discrete representations and achieving discriminative reconstruction for anomaly detection and localization. We denote the set of normal images available at training time as \(\mathcal{X}_{N}\) (\(\forall\mathbf{x}\in\mathcal{X}_{N}:\mathbf{y}_{x}=0\)), with \(\mathbf{y}_{x}\in\{0,1\}\) denoting whether an image \(\mathbf{x}\) is normal (0) or anomalous (1). Accordingly, we define the test set as \(\forall\mathbf{x}\in\mathcal{X}_{T}:\mathbf{y}_{x}\in\{0,1\}\), including both normal and abnormal test images. The model pipeline, shown in Figure 2, can be summarized as follows: i) The input image is fed into the pre-trained EfficientNet [20] to extract visual tokens by splitting 3-D feature maps; ii) The extracted tokens are passed through the cascaded _vanilla transformer encoder_ for non-local multi-level feature aggregation; iii) Aggregated features at certain layers are hierarchically fed into the corresponding _VQ-based layer_ to select the most relevant iconic prototypes; iv) The visual tokens are then successively fused with vector quantized embeddings via the cascaded _VQ-based transformer decoder_ for reconstruction; v) Decoded tokens are passed through the _switching experts_ to reconstruct features, which provides flexibility in high-diversity multi-class image scenarios; vi) Finally, anomaly detection and localization are achieved through a calibrated anomaly score map refined via a prototype-oriented module by measuring the OT-based hierarchical feature discrepancy. ### Improving Feature Reconstruction with Hierarchical Vector Quantization **Motivation:** Normal memory augmentation was initially introduced by Gong et al.
[9] and has attracted wide interest in unsupervised anomaly localization and detection. To record the "normal" appearance, image features are augmented by weighted averaging the similar patterns in the memory matrix. This augmentation, however, rebuilds a continuous latent space which might again be distorted to contain abnormal patterns. Relying on vector quantization, the discrete variational autoencoder [21] learns a discrete latent space, but suffers from the issue of codebook collapse. Therefore, to simultaneously learn the discrete representation and avoid codebook collapse, we propose the hierarchical VQ-based framework. Figure 2: (a) The overall framework of our HVQ-Trans. (b) Each VQ-based layer replaces continuous features with iconic prototypes, equipped with the POT module to promote better learning and scoring. (c) The codebook and expert network are switched for each individual image. (d) The detailed structure of each VQ-based Transformer decoder, where the prototypes are integrated via cross-attention. **Hierarchical Vector Quantized (HVQ) Transformer:** Our proposed HVQ-Trans can be viewed as a communication system serving as an information bottleneck to better capture the normal patterns during training, which can be further generalized to test unknown images with arbitrary anomalies. We denote the input image as \(\mathbf{x}\in\mathbb{R}^{H\times W\times 3}\); the \(N\) visual tokens \(\mathbf{h}^{0}=f_{\mathbf{\phi}^{0}}(\mathbf{x})\in\mathbb{R}^{N\times C}\) are extracted by the pre-trained EfficientNet \(\mathbf{\phi}^{0}\) [20] and fed into the Transformer encoder. As shown in the graphical model of Fig. 3, HVQ-Trans comprises a cascaded vanilla Transformer encoder (_vanTrans-enc_) parameterized by \(\mathbf{\phi}^{l}\) that encodes multi-layer patch embeddings as \(\mathbf{h}^{l}=f_{\mathbf{\phi}^{l}}(\mathbf{h}^{l-1})\in\mathbb{R}^{N\times C}\). To further enlarge the normality and suppress the anomaly, we subsequently develop hierarchical VQ-based layers (_VQLayer_) to layer-wisely quantize the refined visual tokens \(\{\mathbf{h}^{l}\}_{l=1}^{L}\) to the prototypes \(\mathbf{e}_{k}^{l}\) in the learnable codebooks \(\mathbf{E}^{l}\in\mathbb{R}^{K\times C}\), as: \[\mathbf{\theta}=\mathbf{e}_{i}^{L},\quad i=\operatorname*{argmin}_{j}\|\mathbf{h}^{L}-\mathbf{e}_{j}^{L}\|_{2}^{2};\qquad\mathbf{z}^{l}=\Upsilon^{l}([\mathbf{h}^{l},\mathbf{\theta}])=\mathbf{e}_{k}^{l},\quad k=\operatorname*{argmin}_{j}\|\Upsilon^{l}([\mathbf{h}^{l},\mathbf{\theta}])-\mathbf{e}_{j}^{l}\|_{2}^{2}, \tag{1}\] where \(\Upsilon^{l}(\cdot)\) refers to the layer-wise embedding function, and \([\cdot]\) denotes the concatenation operation. Intuitively, we hierarchically replace the visual tokens \(\mathbf{h}^{l}\) with their most similar prototypes in the codebook \(\mathbf{E}^{l}\) as the quantized vector \(\mathbf{z}^{l}\) (note that this process can be lossy). Moreover, we find that merging fine-grained concrete information with abstraction-level semantics is critical for robust anomaly detection. Hence, we fuse the multi-level visual tokens with the global quantized vector \(\mathbf{\theta}\) to learn hierarchical prototypes, maximizing the preserved nominal information. To make full use of the quantized multi-level visual tokens \(\mathbf{z}^{l}\) derived via Eq. 1, we further develop a cascaded VQ-based Transformer decoder (_VQTrans-dec_). As shown in Fig.
2 (d), given the quantized visual tokens \(\mathbf{z}^{l}\) of the \(l^{th}\) layer and the corresponding output \(\mathbf{d}^{l+1}\) from the previous _VQTrans-dec_ layer, the operation in each _VQTrans-dec_ layer can be expressed as: \[\mathbf{q}^{l}=\texttt{MSA}(query=\mathbf{W}_{q}\mathbf{d}^{l+1},key=\mathbf{W}_{k}\mathbf{d}^{l+1},value=\mathbf{W}_{v}\mathbf{d}^{l+1})+\mathbf{d}^{l+1}, \tag{2}\] \[\tilde{\mathbf{d}}^{l}=\texttt{MCA}(query=\mathbf{W}_{q}^{\prime}\mathbf{q}^{l},key=\mathbf{W}_{k}^{\prime}\mathbf{z}^{l},value=\mathbf{W}_{v}^{\prime}\mathbf{z}^{l})+\mathbf{q}^{l},\quad\mathbf{d}^{l}=\texttt{FFN}(\tilde{\mathbf{d}}^{l})+\tilde{\mathbf{d}}^{l},\] where \(\texttt{MSA}(\cdot)\) and \(\texttt{MCA}(\cdot)\) share the same architecture as the standard multi-head self-attention and multi-head cross-attention in the vanilla Transformer [22]. An essential aspect of our method is that, in the \(\texttt{MCA}(\cdot)\) operation, the refined visual tokens from the previous layer \(\mathbf{q}^{l}\) cross-attend to the prototypes of normal images \(\mathbf{z}^{l}\). Hence, the values at abnormal regions of \(\mathbf{q}^{l}\) will be suppressed, and abnormal signals can rarely be transmitted to the output terminal for reconstruction. Namely, the fewer anomalies are reconstructed, the larger the reconstruction difference will be, which in turn leads to better performance in localizing and detecting anomalies. During training, the typical normal patterns are recorded in the discrete variables, _i.e._, the iconic prototypes. When encountering anomalies during testing, the abnormal patterns will also be quantized to the normal prototypes, leading to larger feature migration and information loss, highlighted by higher reconstruction error. It is worth noting that while the information loss triggered by VQ exists for normal images, it is significantly more pronounced for anomalous images. This discrepancy in information loss serves as a key factor in effective anomaly detection. By exploiting this difference, we can enhance the accuracy of our model in distinguishing abnormal regions. **Switching Mechanism:** We adopt a switching mechanism to make the proposed model more suitable for multi-class anomaly detection. On the one hand, we develop the _switching codebooks_ by assembling independent iconic prototypes for each class. On the other hand, we develop the _switching experts_ for flexible reconstruction of multiple classes, inspired by the Mixture of Experts (MoE) models [23; 24] and their sparsely-activated version [25]. Here, we choose to reconstruct at the feature level rather than the pixel level, because features are invariant to the subtle noise, rotations, and translations that affect the pixel level. Specifically, the switching mechanism contains a multi-category classifier, \(M\) codebooks, and \(M\) reconstruction experts. The multi-category classifier takes the image feature as input and outputs the classification probability over the \(M\) categories. Figure 3: Graphical model of our HVQ-Trans. In order to fit the data diversity of the one-for-all setting, we switch to the specific codebook (comprising a group of prototypes) among the \(M\) codebooks according to the classification probability. Furthermore, we switch the individual reconstruction network (dubbed an expert) for feature reconstruction.
The visual tokens from the last _VQTrans-dec_ layer are denoted as \(\mathbf{d}^{0}\in\mathbb{R}^{N\times C}\), which is expected to reconstruct the input patch features \(\mathbf{h}^{0}\in\mathbb{R}^{N\times C}\) as: \[\tilde{\mathbf{h}}^{0}=\Psi_{m}(\mathbf{d}^{0}),\quad m=\operatorname*{argmax}_{j}p_{j}(\mathbf{x}),\quad p_{j}(\mathbf{x})=\frac{\exp(\Omega(\mathbf{x})_{j})}{\sum_{j=1}^{M}\exp(\Omega(\mathbf{x})_{j})}, \tag{3}\] where \(\Omega(\cdot)\) is a classifier producing logits, which are then normalized via a Softmax function over the total \(M\) experts. \(p_{j}(\cdot)\) is the probability of selecting the \(j\)-th codebook and reconstruction expert, as shown in Fig. 2 (c). The codebook and expert \(\Psi_{m}\) with the highest probability are employed for reconstruction. Under the one-for-all setting, the switching mechanism classifies each input image into a single category and chooses the corresponding codebook and expert for reconstruction. A normal image is highly likely to be classified into the correct category and thus to switch to the proper reconstruction expert and codebook. For an abnormal image, there remains large uncertainty as to which reconstruction expert and codebook will be selected, because anomalies are unseen during training. Thus, the reconstruction uncertainty of the abnormal image is increased. Note that the difference between normal and abnormal is the key factor deciding the anomaly detection performance. To this end, the uncertainty during anomalous sample switching can facilitate multi-class anomaly detection. ### Prototype-oriented Learning and Scoring for Anomaly Detection **Motivation:** During training with normal data, HVQ-Trans enhances the point-wise correlations between the iconic prototypes selected from the codebooks and the continuous visual features of normal images. However, it may neglect the global relations between these two sets. Thus, we propose a hierarchical prototype-oriented optimal transport (POT), a transport solver defined through a basic distance between two sampling sets, to improve the tightness between the codebooks and the normal features. Meanwhile, at the testing stage, it is worth noting that the anomaly score adopted in existing methods [10; 17] mainly concerns the most significant difference of the score map between the input features and the reconstructed ones, as measured by the Euclidean distance. However, the importance of hierarchical differences at multiple feature levels is neglected. To this end, we also propose a hierarchical POT-based anomaly scoring method to reinforce the identification of the score map and further boost the anomaly detection performance. **Learning Iconic Prototypes with POT:** Each POT module, included within each _VQLayer_ as shown in Fig. 2 (b), is responsible for enhancing the consistency between the codebooks and the normal image features at each layer. This enables the prototypes in the codebooks to be more representative of normal patterns and less so of anomalous patterns. At the \(l\)-th layer, the codebook of each category contains a group of prototypes \(\mathbf{e}^{l}=[\mathbf{e}^{l}_{1},...,\mathbf{e}^{l}_{K}]\in\mathbb{R}^{K\times C}\). We omit the index \(l\) in the following for simplicity without causing ambiguity.
To assemble the normal patterns conveyed by images, we represent the \(N\) patches per image as an empirical distribution \(\mathbb{P}_{h}=\sum_{i=1}^{N}\frac{1}{N}\delta_{\mathbf{h}_{i}}\), where \(\mathbf{h}\in\mathbb{R}^{N\times C}\) are the features sampled from the latent variables of HVQ-Trans. The prototypes serve to represent normal patterns across different classes. As a result, when attempting to identify suitable prototypes to reconstruct a specific normal image, each prototype is given equal importance. Thus, the distribution over normal prototypes can also be expressed as an empirical distribution \(\mathbb{P}_{e}=\sum_{j=1}^{K}\frac{1}{K}\delta_{\mathbf{e}_{j}}\). In this way, the transport matrix \(\mathbf{M}^{*}\in\mathbb{R}^{N\times K}\) from \(\mathbb{P}_{h}\) to \(\mathbb{P}_{e}\) can be estimated by \(\mathbf{M}^{*}=\operatorname*{argmin}_{\mathbf{M}}\sum_{i=1}^{N}\sum_{j=1}^{K}\mathbf{M}_{i,j}\mathbf{C}_{i,j}\), where the transport matrix \(\mathbf{M}\) should satisfy \(\mathbf{M}\in\Pi([\frac{1}{N}],[\frac{1}{K}])=\{\mathbf{M}\,|\,\mathbf{M}\mathbf{1}_{K}=[\frac{1}{N}],\ \mathbf{M}^{T}\mathbf{1}_{N}=[\frac{1}{K}]\}\). Here \([\frac{1}{N}]\) and \([\frac{1}{K}]\) are the two uniformly distributed priors defined on \(\mathbb{P}_{h}\) and \(\mathbb{P}_{e}\), respectively. The cost matrix \(\mathbf{C}\in\mathbb{R}^{N\times K}\) is defined as \(\mathbf{C}_{i,j}=\|\mathbf{h}_{i}-\mathbf{e}_{j}\|_{2}\). In order to learn the prototypes of the normal codebook at a certain layer, we define the average POT loss, inspired by the Sinkhorn algorithm [26], as: \[\mathbb{L}_{POT}=\min_{\mathbf{E}}\mathbb{E}_{\mathbf{h}\sim f_{\mathbf{\phi}}(\mathbf{x})}\sum_{i=1}^{N}\sum_{j=1}^{K}\mathbf{M}^{*}_{i,j}\mathbf{C}_{i,j}+\sum_{i=1}^{N}\sum_{j=1}^{K}\mathbf{M}^{*}_{i,j}\ln\mathbf{M}^{*}_{i,j}. \tag{4}\] **Calibrating Anomaly Score with POT:** The anomaly score computed via Transformer-based methods often suffers from sub-optimal distance measurement; it is usually calculated as the point-wise L2 norm of the reconstruction differences, \(\mathbf{s}_{org}=\|\mathbf{f}_{org}-\mathbf{f}_{rec}\|_{2}^{2}\). In this paper, on the one hand, we alleviate the mismatch by restricting the distance between prototypes and visual features during training. On the other hand, we further calibrate the anomaly score with multi-level POT at test time. Accordingly, we note that the anomaly degree can also be reflected by the dissimilarity between the visual features and the normal iconic prototypes in the codebooks. For the \(l\)-th layer, the dissimilarity of the \(i\)-th patch is evaluated by \(\mathbf{s}_{POT}^{l}(i)=\sum_{j=1}^{K}\mathbf{M}^{*}_{i,j}\mathbf{C}_{i,j}\). The transport matrix \(\mathbf{M}^{*}\) acts as a probability that re-weights the costs \(\mathbf{C}\) of the different prototypes, measuring the importance of the different distances between image features and the normal prototypes. Therefore, we calibrate the multi-level anomaly score as \(\mathbf{s}_{cab}=\mathbf{s}_{org}+\lambda\sum_{l=1}^{L}\mathbf{s}_{POT}^{l}\) for better anomaly detection. ### Overall Optimization Note that there is no real gradient defined for Eq. 1; following [17; 21; 27; 28], we approximate the gradient by copying gradients from the quantized visual tokens \(\mathbf{z}^{l}\) to the visual tokens \(\mathbf{h}^{l}\) for \(l=0,...,L\) (a minimal sketch of this step, together with the POT loss above, is given below).
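Both computational kernels of this subsection, the gradient-copy trick for Eq. 1 and the Sinkhorn-style entropic transport behind Eq. 4, are compact enough to sketch. Below is a minimal PyTorch sketch under our own simplifying assumptions (a single layer, uniform marginals, and an illustrative `eps` and iteration count); it is not the authors' implementation.

```python
import math
import torch

def vector_quantize(h, codebook):
    """Nearest-prototype lookup (Eq. 1) with the straight-through trick:
    the forward pass uses the hard prototype assignment, while gradients
    are copied from the quantized tokens z back to the tokens h.
    The codebook itself is trained via the prototype loss of Eq. 5."""
    d = torch.cdist(h, codebook) ** 2        # (N, K) squared distances
    idx = d.argmin(dim=1)                    # nearest prototype per token
    z = codebook[idx]                        # hard (lossy) assignment
    z = h + (z - h).detach()                 # gradient copy: dL/dz -> dL/dh
    return z, idx

def pot_loss(h, codebook, eps=0.05, n_iters=50):
    """Entropic OT between patch features and prototypes, in the spirit of
    Eq. 4, solved with log-domain Sinkhorn iterations [26]. Marginals are
    uniform: 1/N over patches and 1/K over prototypes."""
    n, k = h.size(0), codebook.size(0)
    cost = torch.cdist(h, codebook)          # C_ij = ||h_i - e_j||_2
    log_mu = torch.full((n,), -math.log(n))
    log_nu = torch.full((k,), -math.log(k))
    f = torch.zeros(n)
    g = torch.zeros(k)
    for _ in range(n_iters):                 # alternate dual updates
        f = eps * (log_mu - torch.logsumexp((g[None, :] - cost) / eps, dim=1))
        g = eps * (log_nu - torch.logsumexp((f[:, None] - cost) / eps, dim=0))
    plan = torch.exp((f[:, None] + g[None, :] - cost) / eps)  # M* of shape (N, K)
    plan = plan.detach()                     # treat the plan as a constant
    transport = (plan * cost).sum()          # sum_ij M*_ij C_ij
    entropy = (plan * torch.log(plan + 1e-12)).sum()
    return transport + entropy, plan
```

The row sums of `plan * cost` also give the per-patch dissimilarity \(\mathbf{s}_{POT}^{l}\) used in the calibrated score \(\mathbf{s}_{cab}\) above.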
Thus, our proposed HVQ-Trans incorporates five terms into its objective, specified as: \[\mathbb{L}_{HVQ-Trans}=||\mathbf{h}^{0}-\tilde{\mathbf{h}}^{0}||_{2}^{2}+\sum_{l=1}^{L}\Big{[}||\texttt{sg}(\mathbf{h}^{l})-\mathbf{e}^{l}||_{2}^{2}+\beta^{l}||\mathbf{h}^{l}-\texttt{sg}(\mathbf{e}^{l})||_{2}^{2}+\alpha^{l}\mathbb{L}_{POT}^{l}\Big{]}-\sum_{j=1}^{M}\mathcal{P}_{x}(j)\log p_{j}(\mathbf{x}), \tag{5}\] where \(\texttt{sg}(\cdot)\) refers to the stop-gradient operation and \(\mathcal{P}_{x}\) to the (one-hot) category label. \(\beta^{l}\) and \(\alpha^{l}\) are hyperparameters. The first term is the reconstruction loss. The second one, within the summation over the \(L\) layers, is the hierarchical prototypical loss, pushing the selected prototype \(\mathbf{e}^{l}\) closer to the visual token \(\mathbf{h}^{l}\). The third term denotes the hierarchical commitment loss, optimizing the encoder by encouraging its output \(\mathbf{h}^{l}\) to stay close to the chosen prototype and preventing it from fluctuating too frequently from one prototype to another. The fourth term is the POT loss defined in Eq. 4. The last term is the cross-entropy loss for training the classifier to adaptively switch to the proper reconstruction expert and codebook. Following [21], we use exponential moving average updates for the codebooks. More details can be found in the Appendix. ## 3 Connection with previous works In unsupervised anomaly detection, only normal samples are available at the training stage. Unsupervised anomaly detection methods can be roughly categorized into density-based and reconstruction-based methods. Density-based methods estimate the distribution of normal data points to identify anomalous data points [4; 5; 6]. Our HVQ-Trans method adheres to the probabilistic variational framework but refrains from assuming a specific distribution of normal data; the image prior is therefore learned dynamically rather than relying on a static distribution. On the other hand, reconstruction-based methods assume that a model trained only on normal data can reconstruct normal regions well, but fails in anomalous regions [29; 30; 31]. Typical approaches include the Auto-Encoder (AE) [32; 33; 34], the Variational Auto-Encoder (VAE) [35; 36], and the Generative Adversarial Net (GAN) [37; 38; 39]. However, most of these methods do not incorporate a reliable mechanism for encouraging the model to induce large reconstruction errors on anomalous regions. Adopting a memory matrix for unsupervised anomaly detection has proven to be an effective solution. The idea was first proposed in MemAE [9] by injecting an extra memory matrix to assemble normal patterns during training. Based on this paradigm, memory-based models have attracted attention in recent years [15; 31; 40]. These methods record the normal patterns into the memory, then recombine and re-weight the relevant patterns for reconstruction. However, if anomalous representations can be recovered through the re-weighting of normal patterns, the discriminating process may collapse. In contrast, our method compresses images into a discrete latent space, inspired by vector quantization technology [21; 28] and referring to ideas from lossy compression, to relieve the model from modeling negligible information.
Additionally, our hierarchical framework allows us to increase the size of the codebooks without incurring the codebook collapse problem, achieving meticulous anomaly detection at multiple levels with our prototype-oriented scoring method. ## 4 Experiment ### Datasets and Metrics **MVTec-AD** [2] is a widely used industrial anomaly detection dataset with 15 classes; it covers more than 5k high-resolution images including objects and textures. For each class, the training samples are normal, while the test samples can be either normal or anomalous. For each anomalous sample, the ground truths of the image label and segmentation are available for evaluation. In this paper, we investigate the unified case following [10], where only one model is used to handle all categories. **VisA** [41] is a recently published large dataset, which consists of 9,621 normal and 1,200 anomalous high-resolution images. The dataset includes images with complex structures, objects placed in sporadic locations, and various types of objects. Anomalies in the dataset encompass scratches, dents, color spots, cracks, and structural defects. All images are resized to \(224\times 224\) to facilitate training. **CIFAR-10** [41] is a classical image classification dataset of 10 categories. We adopt the challenging _many-versus-many_ setting as in [10], where 5 classes are viewed as normal while the remaining 5 classes are viewed as anomalies that remain unseen during training. **Evaluation metrics:** We report the Area Under the Receiver Operating Characteristic curve (AUROC) for image-level anomaly detection and pixel-wise anomaly localization, following previous works [2; 10; 42]. ### Anomaly detection and localization performance on MVTec-AD **Implementation details:** The input image size for MVTec-AD is \(224\times 224\times 3\); after being fed into the pre-trained EfficientNet [20], the feature maps become \(14\times 14\times 272\), i.e., the patch size is 16. We then reduce the channel dimension of each patch to 256, before feeding the tokens into a 4-layer _vanTrans-enc_ and a corresponding 4-layer _VQTrans-dec_. We use AdamW [49] with weight decay 0.0001 for optimization. Our model is trained for 1000 epochs on 2 GPUs (NVIDIA GeForce RTX 3080 10GB) with batch size 16. The learning rate is initialized as \(1\times 10^{-4}\) and dropped by a factor of 0.1 after 800 epochs. Our model is trained from scratch apart from the pre-trained EfficientNet. For more details, please refer to the Appendix. **Quantitative results of anomaly detection on MVTec-AD:** As shown in Table 1, the proposed HVQ-Trans generally outperforms all the competitive baselines. Specifically, our model surpasses PatchCore and UniAD by \(1.6\%\) and \(1.5\%\) on average. The former is a SOTA method under the one-for-one setting; the latter is a SOTA method under the one-for-all setting. In particular, in the one-for-all case, our model far exceeds UniAD by \(8.5\%\) and \(8.1\%\) on Capsule and Screw, respectively. We attribute this to our model being more robust against identical shortcuts, thanks to our well-designed architecture and algorithm. Notably, the performances of the various categories differ significantly (e.g., Toothbrush), which might be due to the data distribution of each category being different, corresponding to different requirements on representation ability. The switching mechanism learns individual codebooks and experts for each class, decreasing the distortion between multiple classes (a minimal sketch of this mechanism follows below).
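To make the switching step concrete, the following is a minimal PyTorch sketch of Eq. 3: a classifier produces \(p_{j}(\mathbf{x})\), and its argmax selects one codebook and one reconstruction expert per image. The module layout, layer sizes, and names (`SwitchingReconstruction`, `n_classes`, etc.) are our own illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SwitchingReconstruction(nn.Module):
    """Sketch of the switching mechanism (Eq. 3): a multi-category
    classifier routes each image to one codebook and one expert.
    Module names and layer sizes are hypothetical."""

    def __init__(self, n_classes: int, n_prototypes: int, dim: int):
        super().__init__()
        self.classifier = nn.Sequential(nn.Flatten(1), nn.LazyLinear(n_classes))
        # One codebook (K prototypes) and one expert network per category.
        self.codebooks = nn.Parameter(torch.randn(n_classes, n_prototypes, dim))
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))
             for _ in range(n_classes)])

    def forward(self, feat, d0):
        logits = self.classifier(feat)              # Omega(x): (B, M) logits
        m = logits.softmax(dim=-1).argmax(dim=-1)   # argmax_j p_j(x)
        codebook = self.codebooks[m]                # switched codebooks (B, K, C)
        recon = torch.stack([self.experts[int(m[b])](d0[b])
                             for b in range(d0.size(0))])
        return recon, codebook, logits              # logits feed the CE term
```

During training, `logits` would be supervised with the category label via the cross-entropy term of Eq. 5, so that a normal test image reliably routes to its own codebook and expert, while an unseen anomaly faces uncertain routing.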
To sum up, our model proves to be effective and efficient for one-for-all anomaly detection applications. Table 1: Anomaly detection and localization results (AUROC) of HVQ-Trans and the competitive baselines on MVTec-AD. (The table body is not recoverable from the source.) **Quantitative results of anomaly localization on MVTec-AD:** Anomaly localization aims to detect the anomalous regions of a given anomalous sample. The localization results under the one-for-all setting are shown in Table 1. As we can see, our model outperforms all the competitive baselines on average. Specifically, even the strong SOTA baseline UniAD is outperformed by our model, by \(0.5\%\) on average across both settings. We attribute this to the hierarchical VQ also playing an important role in precise localization, beyond the learnable query-embeddings verified by UniAD, since our HVQ-Trans only employs the traditional query-embeddings. Moreover, localization requires more precise position information than its detection counterpart, so a proper measurement alignment that enhances the real anomalous regions may be important, which is exactly what our POT is good at. Another interesting finding, reported in the Appendix, is that although there exists information loss in reconstructing normal images, such information loss is even more significant in reconstructing anomalous images. Hence, the actual anomaly localization performance is improved. **Qualitative results of anomaly localization on MVTec-AD:** As shown in Fig. 4, our method can successfully recover the anomalous regions with their corresponding normal patterns, for both object anomalies (left) and texture damages (right). It can be seen that the model with hierarchical VQ layers better generates normal patterns at abnormal regions, resulting in more accurate anomaly localization. More qualitative results are given in the Appendix. ### Anomaly detection on VisA **Quantitative results of anomaly detection on VisA:** Compared to MVTec-AD, VisA poses greater difficulty due to its more complex structures and scenes with multiple misaligned instances. Table 2 demonstrates the superior performance of our model in comparison to the other three reconstruction-based methods under the unified setting. Our proposed model surpasses the best of the comparison methods, _i.e._, UniAD, by 1.3% on image-AUROC, and achieves significantly superior performance to the recent unified model OmniAL [50] (by 5.4% and 2.1% on detection and localization, respectively). **Qualitative results of anomaly detection on VisA:** Figure 5 illustrates the impressive performance of our reconstruction and localization in various categories. Even in multi-instance scenes, our model effectively restores the anomaly region to its normal state.
The reconstructed images exhibit a high level of fidelity, closely matching the appearance of normal regions and meeting expectations in recovering anomalies. More qualitative results are given in the Appendix. \begin{table} \begin{tabular}{c|c|c|c|c|c|c} \hline \hline \multicolumn{2}{c|}{Category} & DRAEM[46] & JNLD[51] & OmniAL[50] & UniAD[10] & Ours \\ \hline \multirow{4}{*}{Complex structure} & PCB1 & 83.9 / 94.0 & 82.9 / 98.0 & 77.7 / 97.6 & 95.4 / 99.3 & **96.7** / **99.4** \\ & PCB2 & 81.7 / 94.1 & 79.1 / 95.0 & 81.0 / 93.9 & **93.6** / 97.8 & 93.4 / **98.0** \\ & PCB3 & 87.7 / 94.1 & 90.1 / 98.5 & 88.1 / 94.7 & 88.6 / **98.3** & **92.0** / **98.3** \\ & PCB4 & 87.1 / 72.3 & 96.2 / 97.5 & 95.3 / 97.1 & 99.4 / **97.9** & **99.5** / 97.7 \\ \hline \multirow{4}{*}{Multiple instances} & Macaroni 1 & 68.6 / 89.8 & 90.5 / 93.3 & 92.6 / 98.6 & 92.2 / 99.3 & **93.1** / **99.4** \\ & Macaroni 2 & 60.3 / 83.2 & 71.3 / 92.1 & 72.5 / 97.9 & 85.9 / 98.0 & **86.2** / **98.5** \\ & Capsules & **98.6** / 96.6 & 91.4 / 99.6 & 90.6 / 99.4 & 72.0 / 98.3 & 76.1 / **99.0** \\ & Candles & 70.2 / 82.6 & 85.4 / 94.5 & 86.8 / 95.8 & 96.9 / 99.2 & 96.8 / **99.2** \\ \hline \multirow{4}{*}{Single instance} & Cashew & 67.3 / 68.5 & 82.5 / 94.1 & 88.6 / 95.0 & 92.4 / 98.7 & **94.9** / **99.2** \\ & Chewing gum & 90.0 / 92.7 & 96.0 / 98.9 & 96.4 / 99.0 & 99.4 / **99.2** & **99.4** / 98.8 \\ & Fryum & 86.2 / 83.2 & 91.9 / 90.0 & 94.6 / 92.1 & 89.8 / 97.7 & **90.4** / **97.7** \\ & Pipe fryum & 87.1 / 72.3 & 87.5 / 92.5 & 86.1 / 98.2 & 97.4 / 99.2 & **98.5** / **99.4** \\ \hline \multicolumn{2}{c|}{Mean} & 80.5 / 87.0 & 87.1 / 95.2 & 87.8 / 96.6 & 91.9 / 98.6 & **93.2** / **98.7** \\ \hline \hline \end{tabular} \end{table} Table 2: Anomaly detection/localization results (image AUROC, pixel AUROC) on VisA. Our model is applied to all categories without specific parameter-tuning on each category. Figure 5: Qualitative results for anomaly localization on VisA. ### Anomaly detection on CIFAR-10 **Implementation details:** In order to implement _many-versus-many_ anomaly detection, we select 5 normal classes, while the remaining classes are viewed as anomalies. As shown in Table 3, {01234} means that the normal samples include images from classes 0, 1, 2, 3, and 4, while the images from classes 5, 6, 7, 8, and 9 are anomalies. For statistical robustness, we repeat the splitting and obtain four combinations. **Quantitative results of anomaly detection on CIFAR-10:** As shown in Table 3, the performance of our model surpasses all the other competitors under the _many-versus-many_ setting for each dataset splitting. Besides, the CIFAR-10 dataset itself is more complex than MVTec-AD because of its poor shooting conditions. Hence, the one-for-all setting on CIFAR-10 imposes stricter requirements for the model to exactly distinguish normal patterns from anomalous interference. Therefore, the substantial improvement further verifies the superiority of our method. ### Ablation studies **Component study:** To verify the effectiveness of the proposed modules, including the single _VQLayer_, hierarchical _VQLayers_, the switching mechanism for codebooks and reconstruction experts, and POT scoring, we implement extensive ablation studies on MVTec-AD. As shown in Table 4, we have the following observations: (i) The performance of the model without VQ drops by nearly 26% (96.4 to 70.5), which demonstrates that VQ plays the key role in anomaly detection, and that the vanilla Transformer is powerful enough to reconstruct both normal and anomalous samples well.
Our VQ module acts as an information bottleneck through which only the normal information is allowed to pass, thus degrading the reconstruction of anomalies; (ii) The hierarchical structure also brings a performance gain, since it provides local access to multi-level codebooks, thus reducing the search complexity per layer and alleviating the codebook collapse issue; (iii) The switching brings slight improvements on MVTec-AD, while it achieves significant gains of up to 3.2% on CIFAR-10, as shown in the Appendix. We attribute this to the different degrees of difficulty posed by the two dataset distributions. Our switching mechanism plays a more critical role for complex datasets, _i.e._, CIFAR-10 in this case; (iv) The POT module is effective in detecting and localizing anomalies due to its cascade measurement alignment property. \begin{table} \begin{tabular}{c|c|c} \hline \hline \(K\) & Detection & Localization \\ \hline 1024 & 97.1 & 97.2 \\ 512 & **98.0** & **97.3** \\ 256 & 97.0 & 97.1 \\ \hline \hline \end{tabular} \end{table} Table 6: Different prototype numbers \(K\). \begin{table} \begin{tabular}{c|c|c|c|c|c|c|c} \hline \hline \multirow{2}{*}{w/o VQ} & \multicolumn{2}{c|}{with VQ} & \multicolumn{2}{c|}{Switching} & \multirow{2}{*}{POT} & \multirow{2}{*}{Detection} & \multirow{2}{*}{Localization} \\ \cline{2-3} \cline{4-5} & Single & Hierarchical & Codebook-Switching & Expert-Switching & & & \\ \hline ✓ & - & - & - & - & - & 70.5 & 81.4 \\ - & ✓ & - & - & - & - & 96.2 & 96.8 \\ - & ✓ & - & ✓ & - & - & 96.4 & 96.8 \\ - & - & ✓ & - & - & - & 97.1 & 96.9 \\ - & - & ✓ & ✓ & - & - & 97.2 & 97.0 \\ - & - & ✓ & - & - & ✓ & 97.4 & 97.2 \\ - & - & ✓ & ✓ & ✓ & - & 97.6 & 97.2 \\ - & - & ✓ & ✓ & ✓ & ✓ & **98.0** & **97.3** \\ \hline \hline \end{tabular} \end{table} Table 4: Ablation studies with the AUROC metric on MVTec-AD. w/o VQ means without VQ. \begin{table} \begin{tabular}{c|c|c} \hline \hline Structure & Detection & Localization \\ \hline \(h^{l}\to z^{l}\) & 94.9 & 96.5 \\ \(h^{l-1}\oplus h^{l}\to z^{l}\) & 97.2 & 97.0 \\ \(h^{l-1}\oplus\theta\to z^{l}\) & **98.0** & **97.3** \\ \hline \hline \end{tabular} \end{table} Table 5: Different hierarchical structures. **Different hierarchies:** We conduct experiments to investigate the impact of various hierarchies, as shown in Table 5, where different information is encoded in different layers. When the multi-level features \(h^{l}\) are concatenated with the global prototypes \(\theta\), the joint performance over anomaly detection and localization increases. One possible explanation is that the global prototypes \(\theta\), at much higher abstraction levels, may result in an efficient latent representation for anomaly detection. **Robustness to prototype number:** We conduct experiments using different prototype numbers \(K\) and show the AUROC values in Table 6. Given different numbers of prototypes, our HVQ-Trans can still surpass most of the competitors in Table 1, proving the robustness of our method. **Visualizing how the prototypes work:** In order to investigate what exactly the prototypes have learned, we further train a mapping function from the reconstructed features to the observation space and present the visualization results in Fig. 6. We deliberately instructed the model to reconstruct the patches within the red box using prototypes from the codebook of a different, irrelevant category. As demonstrated in the figure, the reconstructed region is highly correlated with the prototypes, _e.g._,
the centering region in the 'Cable' image is reconstructed as 'Grid'. These observations confirm that our iconic prototypes accurately represent the typical normal patterns of each category and only reconstruct the corresponding normal appearances, as intended. ## 5 Conclusion We introduce a unified model, HVQ-Trans, for multi-class unsupervised anomaly detection under the one-for-all setting. The latent space is modeled as hierarchical discrete prototypes learned from normal training data. We vector quantize visual features to reconstruct normal patterns and employ a switching mechanism for codebook selection and exquisite reconstruction. Our hierarchical design incorporates multi-level normative information and encourages the model to reconstruct anomalous images as normal. Furthermore, we propose the hierarchical prototype-oriented optimal transport module to regulate the prototypes and calibrate the anomaly score. Under the one-for-all setting, our model significantly surpasses its competitors on the MVTec-AD and VisA datasets, and provides visualization and interpretability for both anomaly localization and detection. **Discussion:** In this work, the category labels are assumed to be available during the training stage. How to combine the model with clustering methods rather than category labels should be further studied. In practice, our model can assemble the normal iconic prototypes, which may facilitate domain adaptation for real scenes, and it can potentially be applied to time series, text, and video data. However, anomaly detection for video surveillance or social multimedia may raise privacy concerns. ## Acknowledgements This work was supported in part by the National Natural Science Foundation of China under Grant U21B2006; in part by the Shaanxi Youth Innovation Team Project; in part by the Fundamental Research Funds for the Central Universities QTZX23037 and QTZX22160; and in part by the 111 Project under Grant B18039.
2309.00869
Nondestructive discrimination of Bell states between distant parties
Identifying a Bell state without destroying it is a task frequently encountered in today's quantum technologies, such as quantum communication and quantum computing. In practice, quantum entangled states are often distributed among distant parties, and it might be required to determine them separately at each location, without inline communication between the parties. We present a scheme for discriminating an arbitrary Bell state distributed to two distant parties without destroying it. The scheme requires two entangled states that are pre-shared between the parties, and we show that without these ancillary resources, the probability of non-destructively discriminating the Bell state is bounded by 1/4, which is the same as random guessing. Furthermore, we demonstrate through a proof-of-principle experiment on an IonQ quantum computer that our scheme can surpass the classical bound when applied to a practical quantum processor.
Bohdan Bilash, Youngrong Lim, Hyukjoon Kwon, Yosep Kim, Hyang-Tag Lim, Wooyeong Song, Yong-Su Kim
2023-09-02T08:54:34Z
http://arxiv.org/abs/2309.00869v2
# Nondestructive discrimination of Bell states between distant parties ###### Abstract Identifying a Bell state without destroying it is a task frequently encountered in today's quantum technologies, such as quantum communication and quantum computing. In practice, quantum entangled states are often distributed among distant parties, and it might be required to determine them separately at each location, without inline communication between the parties. We present a scheme for discriminating an arbitrary Bell state distributed to two distant parties without destroying it. The scheme requires two entangled states that are pre-shared between the parties, and we show that without these ancillary resources, the probability of non-destructively discriminating the Bell state is bounded by \(1/4\), which is the same as random guessing. Furthermore, we demonstrate through a proof-of-principle experiment on an IonQ quantum computer that our scheme can surpass the classical bound when applied to a practical quantum processor. ## I Introduction Discriminating different quantum states is at the heart of many quantum information applications. For example, the security of quantum key distribution is based on the difficulty of discriminating non-orthogonal quantum states [1; 2; 3]. It is noteworthy that the B92 protocol is essentially equivalent to the unambiguous quantum state discrimination between two non-orthogonal qubits [4]. If we consider the case where a two-qubit system is prepared in one of the four Bell states, we can distinguish which Bell state it is in by using the circuit shown in Fig. 1(a). This is known as Bell state measurement and is quintessential for measurement-device-independent quantum communications [5; 6; 7; 8], quantum teleportation [9], and entanglement swapping [10]. However, this is a destructive measurement of the state. Therefore, if we have an arbitrary state and need to perform further quantum information processing after confirming which state it is, or if we desire to verify and validate it during the process, we need a way to maintain the state without changing it even after distinguishing it. This can be achieved by utilizing ancillary qubits that interact with the system qubits. For instance, as shown in Fig. 1(b), Bell states of the system qubits \(s_{A}\) and \(s_{B}\) can be nondestructively discriminated from the measurement results of two ancillary qubits \(a_{1}\) and \(a_{2}\) [11; 12; 13; 14]. Bell states are often distributed between two parties who cannot perform direct non-local quantum operations between them. Therefore, it is necessary to verify and guarantee the state in such processes. However, the scheme of Fig. 1(b) is not applicable to distant parties, because it requires implementing \(CNOT\) operations between an ancillary qubit and both Bell state qubits, which are far apart. Specifically, many quantum network applications begin with entangled particles \(s_{A}\) and \(s_{B}\) shared among distant parties. Since entanglement cannot be generated via local operations and classical communication (LOCC), entangled particles are usually generated by a _trusted_ party and distributed via quantum channels [15]. While a trusted third party is widely presumed in quantum network scenarios, a malicious third party or insecure quantum channels can pose significant threats to quantum communications [16; 17]. Therefore, once entangled particles are shared by distant parties, it is desirable to check them in a nondestructive manner.
Recently, this problem has been investigated as _quantum delocalized interaction_, and it has been shown that entangled ancillary qubits between the parties are essential for nondestructive quantum state discrimination [18; 19]. Figure 1: Schemes for discriminating Bell states. (a) Direct measurement on the Bell state qubits. The quantum state is destroyed after knowing the state, thus it is unsuitable for further quantum information processing. (b) Nondestructive Bell state discrimination using ancillary qubits. The measurement result is registered by the outcome of the ancillary qubits and the Bell state qubits remain unchanged. In this paper, we further investigate nondestructive quantum state discrimination of all four Bell states between distant parties. We theoretically investigate the upper bound of the success probability with LOCC. Then, we present a scheme to discriminate all four Bell states without destroying them, using pre-shared ancillary entangled qubits. We also demonstrate a proof-of-principle experiment using an IonQ quantum computer [20]. The experimental results verify the effectiveness of our protocol by showing that the experimental success probability surpasses the classical upper bound. ## II Theory ### Nondestructive Bell state discrimination with local operations and classical communications Consider two distant parties, Alice and Bob, each receiving one qubit from a pair of qubits \(|\Psi\rangle_{AB}\). Here, \(|\Psi\rangle_{AB}\) is prepared in one of the four Bell states, \(|\phi^{\pm}\rangle=\frac{1}{\sqrt{2}}\left(|00\rangle\pm|11\rangle\right)\) and \(|\psi^{\pm}\rangle=\frac{1}{\sqrt{2}}\left(|01\rangle\pm|10\rangle\right)\). Alice and Bob want to determine the Bell state \(|\Psi\rangle_{AB}\) without altering the state. We refer to the probability of successfully discriminating a given Bell state as \(P_{D}\), and to the probability that the given Bell state remains unchanged after probing it as \(P_{F}\). The overall success probability then satisfies \(P_{F}P_{D}\leq P_{\text{succ}}\leq\min\{P_{F},P_{D}\}\). The lower bound is reached when \(P_{F}\) and \(P_{D}\) are independent, and the upper bound is reached when \(P_{F}\) and \(P_{D}\) are highly correlated. A trivial strategy that Alice and Bob can consider is random guessing. Alice and Bob do not touch the target Bell state at all; they just guess without any information. In this case, the Bell state is unchanged, so the nondestructive probability is \(P_{F}=1\). However, the probability of successful discrimination is \(P_{D}=1/4\), since they randomly guess the state out of the four possible Bell states. Therefore, the overall success probability of nondestructive quantum state discrimination becomes \(P_{\text{succ}}=1/4\). Another strategy is to perform simple projective measurements and reconstruct a Bell state from the measurement results. Although Alice and Bob cannot perform joint measurements, they can still perform projective measurements on their own qubits and obtain some information. Consider the case where both Alice and Bob measure the Bell state in the \(Z\) basis and each obtain the measurement result \(|0\rangle\). Then, they can be sure that the state was either \(|\phi^{+}\rangle\) or \(|\phi^{-}\rangle\). Therefore, the success probability of state discrimination becomes \(P_{D}=1/2\). But even after Alice and Bob know their own qubit outcomes, they cannot recover an entangled state, since they are separated.
If they simply prepare the same quantum state from the outcome they obtained, i.e., \(|00\rangle\), the nondestructive probability, or state overlap, with the original quantum state is \(P_{F}=|\langle\phi^{\pm}|00\rangle|^{2}=1/2\). Therefore, the overall success probability of nondestructive quantum state discrimination becomes \(P_{\text{succ}}=1/4\). In fact, the success probability \(P_{\text{succ}}=1/4\) is the upper bound that Alice and Bob can achieve classically without a genuine quantum resource, i.e., ancillary entanglement. **Theorem 1**.: _The success probability of nondestructively discriminating all four Bell states distributed between two distant parties with local operations and classical communications (LOCC) is at most \(P_{\text{cl}}=1/4\)._ The proof of Theorem 1 is presented in detail in Appendix A. ### Nondestructive Bell state discrimination with ancillary entanglement Having established that it is impossible to nondestructively discriminate the four Bell states between distant parties without quantum resources, we now propose a pre-shared entanglement assisted scheme for the nondestructive discrimination of Bell states. Note that the pre-shared entanglement can be considered a quantum resource, and no further global operations between the distant parties are necessary. Therefore, once the distant parties share ancillary entanglement, they can nondestructively discriminate Bell states via LOCC later on. Figure 2 shows our proposed scheme. It begins with two ancillary entangled states between Alice and Bob, along with the Bell state \(|\Psi\rangle_{s_{A}s_{B}}\), as follows: \[|\Psi\rangle_{\text{tot}}=|\phi^{+}\rangle_{a_{1}b_{1}}\otimes|\phi^{+}\rangle_{a_{2}b_{2}}\otimes|\Psi\rangle_{s_{A}s_{B}}, \tag{1}\] where the two ancillary entangled pairs are prepared in \(|\phi^{+}\rangle\) and the target Bell state is one of the four Bell states, \(|\Psi\rangle_{s_{A}s_{B}}\in\{|\phi^{\pm}\rangle,|\psi^{\pm}\rangle\}\). Let us assume that the target Bell state is prepared in \(|\psi^{-}\rangle_{s_{A}s_{B}}\). Then the state evolves as follows: \[|\Psi\rangle_{\text{tot},|\psi^{-}\rangle}= |\phi^{+}\rangle_{a_{1}b_{1}}\otimes|\phi^{+}\rangle_{a_{2}b_{2}}\otimes|\psi^{-}\rangle_{s_{A}s_{B}}\] \[\xrightarrow{X_{s_{A}a_{2}}\otimes X_{s_{B}b_{2}}} |\phi^{+}\rangle_{a_{1}b_{1}}\otimes|\psi^{+}\rangle_{a_{2}b_{2}}\otimes|\psi^{-}\rangle_{s_{A}s_{B}}\] \[\xrightarrow{X_{a_{1}s_{A}}\otimes X_{b_{1}s_{B}}} |\phi^{-}\rangle_{a_{1}b_{1}}\otimes|\psi^{+}\rangle_{a_{2}b_{2}}\otimes|\psi^{-}\rangle_{s_{A}s_{B}}\] \[\xrightarrow{H_{a_{1}}\otimes H_{b_{1}}} |\psi^{+}\rangle_{a_{1}b_{1}}\otimes|\psi^{+}\rangle_{a_{2}b_{2}}\otimes|\psi^{-}\rangle_{s_{A}s_{B}},\] where \(X_{kl}\) and \(H_{m}\) denote a CNOT gate on qubits \(k\) (control) and \(l\) (target) and a Hadamard gate on qubit \(m\), respectively. Note that all the CNOT gates are performed locally within Alice's and Bob's sites. Since both resultant ancillary pairs \(a_{1}b_{1}\) and \(a_{2}b_{2}\) are in \(|\psi^{+}\rangle\), measuring them yields one of the outcomes \(\{0101,\,0110,\,1001,\,1010\}\). Note that the Bell state \(|\psi^{-}\rangle_{s_{A}s_{B}}\) has not been changed during the procedure.
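As a quick consistency check of this evolution, the following is a minimal statevector simulation of the circuit in Fig. 2, written in plain Python/numpy. The qubit ordering \((a_{1},b_{1},a_{2},b_{2},s_{A},s_{B})\) and the helper functions are our own illustrative choices, not the authors' code; for each input Bell state it prints the ancilla outcomes \(a_{1}b_{1}a_{2}b_{2}\) that occur with nonzero probability, which should match Table 1 (e.g., the set \(\{0101, 0110, 1001, 1010\}\) for \(|\psi^{-}\rangle\) derived above).

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
bells = {
    "phi+": np.array([1, 0, 0, 1]) / np.sqrt(2),
    "phi-": np.array([1, 0, 0, -1]) / np.sqrt(2),
    "psi+": np.array([0, 1, 1, 0]) / np.sqrt(2),
    "psi-": np.array([0, 1, -1, 0]) / np.sqrt(2),
}

def apply_1q(psi, U, q, n=6):
    """Apply the single-qubit gate U to qubit q of an n-qubit statevector."""
    psi = psi.reshape([2] * n)
    psi = np.moveaxis(np.tensordot(U, psi, axes=([1], [q])), 0, q)
    return psi.reshape(-1)

def apply_cnot(psi, c, t, n=6):
    """Apply a CNOT with control c and target t: flip t on the c=|1> branch."""
    psi = psi.reshape([2] * n).copy()
    sl = [slice(None)] * n
    sl[c] = 1                       # select the control = |1> branch
    ax = t if t < c else t - 1      # target axis inside the sliced view
    psi[tuple(sl)] = np.flip(psi[tuple(sl)], axis=ax).copy()
    return psi.reshape(-1)

# Qubit order: a1, b1, a2, b2, sA, sB.
for name, target in bells.items():
    psi = np.kron(np.kron(bells["phi+"], bells["phi+"]), target)
    psi = apply_cnot(psi, 4, 2)     # X_{sA a2}
    psi = apply_cnot(psi, 5, 3)     # X_{sB b2}
    psi = apply_cnot(psi, 0, 4)     # X_{a1 sA}
    psi = apply_cnot(psi, 1, 5)     # X_{b1 sB}
    psi = apply_1q(psi, H, 0)       # H_{a1}
    psi = apply_1q(psi, H, 1)       # H_{b1}
    amp = psi.reshape(16, 4)        # rows: a1 b1 a2 b2; cols: sA sB
    probs = (np.abs(amp) ** 2).sum(axis=1)
    outs = [format(k, "04b") for k in range(16) if probs[k] > 1e-9]
    print(name, "->", outs)
```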
Similarly, one can find the state evolution for the other Bell states as follows: \[|\Psi\rangle_{\text{tot},|\psi^{+}\rangle}\longrightarrow |\phi^{+}\rangle_{a_{1}b_{1}}\otimes|\psi^{+}\rangle_{a_{2}b_{2}}\otimes|\psi^{+}\rangle_{s_{A}s_{B}},\] \[|\Psi\rangle_{\text{tot},|\phi^{-}\rangle}\longrightarrow |\psi^{+}\rangle_{a_{1}b_{1}}\otimes|\phi^{+}\rangle_{a_{2}b_{2}}\otimes|\phi^{-}\rangle_{s_{A}s_{B}},\] \[|\Psi\rangle_{\text{tot},|\phi^{+}\rangle}\longrightarrow |\phi^{+}\rangle_{a_{1}b_{1}}\otimes|\phi^{+}\rangle_{a_{2}b_{2}}\otimes|\phi^{+}\rangle_{s_{A}s_{B}}.\] It is straightforward to see that the different Bell states \(|\Psi\rangle_{s_{A}s_{B}}\) yield distinct ancillary qubit states while the Bell states remain unaltered. Therefore, the Bell state can be _completely_ and nondestructively discriminated by measuring the ancillary qubits. Table 1 summarizes the measurement results of the ancillary qubits according to the Bell states. ### Nondestructive Bell state discrimination with non-maximally entangled ancillae In the previous section, we considered the case where both ancillary pairs were in \(|\phi^{+}\rangle\), which is maximally entangled. In this section, we explore the case where the ancillary qubits are not in maximally entangled states. Consider analyzing the target Bell state by utilizing the Werner state, an incoherent combination of a pure maximally entangled state and a completely mixed state. The Werner state is written as \[W_{2}=(1-\lambda)|\phi^{+}\rangle\langle\phi^{+}|+\lambda\frac{\mathbb{I}}{4}, \tag{2}\] where \(\lambda\) determines the depolarizing noise added to the maximally entangled state \(|\phi^{+}\rangle\). When \(\lambda=0\), \(W_{2}\) becomes the density matrix of the maximally entangled state \(|\phi^{+}\rangle\), and \(\lambda=1\) makes it the maximally mixed state. Note that \(\mathbb{I}\) can be rewritten as a mixture of all four Bell states, and thus the Werner state can be rewritten as \[W_{2}=(1-\frac{3\lambda}{4})|\phi^{+}\rangle\langle\phi^{+}|+\frac{\lambda}{4}(|\phi^{-}\rangle\langle\phi^{-}|+|\psi^{+}\rangle\langle\psi^{+}|+|\psi^{-}\rangle\langle\psi^{-}|). \tag{3}\] The only term we need to focus on is \(|\phi^{+}\rangle\langle\phi^{+}|\), which is the Bell-basis term that can be applied to the nondestructive Bell state discrimination we are considering. It is worth noting that arbitrary maximally entangled states can be used to achieve nondestructive discrimination, but with different output combinations that must be known in advance. Here we only consider \(|\phi^{+}\rangle\langle\phi^{+}|\) as an example. In the case of two pairs of ancillae, Eq. (2), the state is represented by a product of two Werner states, \(W_{2}\otimes W_{2}\). The total ancillary state can then be written as \[W_{2}\otimes W_{2}=(1-\frac{3\lambda}{4})^{2}|\phi^{+}\phi^{+}\rangle\langle\phi^{+}\phi^{+}|+(1-\frac{3\lambda}{4})(\frac{\lambda}{4})\{\text{terms with one }|\phi^{+}\rangle\}+(\frac{\lambda}{4})^{2}\{\text{terms with no }|\phi^{+}\rangle\},\] and the full expression is given in Appendix B. Here we assume that both ancillae have the same value of \(\lambda\). It is straightforward to see that only the \(|\phi^{+}\phi^{+}\rangle\langle\phi^{+}\phi^{+}|\) case can successfully discriminate a Bell state without destroying it, and it appears with probability \(P_{\text{succ}}=(1-\frac{3}{4}\lambda)^{2}\).
Otherwise, the estimation of the state would be incorrect, and/or the determination would contaminate the target Bell state. Now, we can plot the success probability \(P_{\text{succ}}\) of our protocol with respect to \(\lambda\), as shown in Fig. 3. \begin{table} \begin{tabular}{c|c} \hline \(|\Psi\rangle_{s_{A}s_{B}}\) & Bit values (\(a_{1}b_{1}a_{2}b_{2}\)) \\ \hline \(|\phi^{+}\rangle_{s_{A}s_{B}}\) & 0000, 0011, 1100, 1111 \\ \hline \(|\phi^{-}\rangle_{s_{A}s_{B}}\) & 0111, 0100, 1000, 1011 \\ \hline \(|\psi^{+}\rangle_{s_{A}s_{B}}\) & 0001, 0010, 1110, 1101 \\ \hline \(|\psi^{-}\rangle_{s_{A}s_{B}}\) & 0101, 0110, 1001, 1010 \\ \hline \end{tabular} \end{table} Table 1: Measurement results of the ancillary qubits according to the Bell state \(|\Psi\rangle_{s_{A}s_{B}}\). Both ancillary entanglement pairs are prepared in \(|\phi^{+}\rangle_{ab}\). Figure 2: Scheme for nondestructive Bell state discrimination between distant parties. The qubits \(s_{A}\) and \(s_{B}\) are prepared in one of four Bell states. This state can be nondestructively discriminated with the help of the two ancillary entanglement pairs \(|\phi^{+}\rangle_{a_{1}b_{1}}\) and \(|\phi^{+}\rangle_{a_{2}b_{2}}\). For the case \(\lambda=\frac{2}{3}\), i.e., a Werner state with no entanglement, the success probability loses its quantum advantage over the classical one, \(P_{\rm cl}=\frac{1}{4}\). And when the ancilla is the maximally mixed state, with \(\lambda=1\), the success probability becomes \(P_{\rm succ}=\frac{1}{16}\); in this case the target Bell state is contaminated away from the original one, and the discrimination becomes random. ## III Experiment using IonQ ### Results for nondestructive Bell state discrimination with maximally entangled ancillae We demonstrate our protocol using an IonQ quantum computer [20]. Figure 4 shows the quantum circuit run on the IonQ machine. The entire quantum circuit is composed of four sections, as follows: I. Ancillary entanglement preparation. By applying a Hadamard gate and a CNOT gate to each of the two pairs of ancillary qubits, we generate the two entangled pairs \(|\phi^{+}\rangle_{a_{1}b_{1}}\) and \(|\phi^{+}\rangle_{a_{2}b_{2}}\). II. Preparation of the target Bell state. The Bell state to probe is prepared in one of the four Bell states. Arbitrary Bell states \(|\Psi\rangle\) are prepared with different single qubit gates \(W\in\{I,X,Y,Z\}\), where \(I\) is the identity operator and \(X,Y,\) and \(Z\) are the Pauli operators. III. Interaction with the ancillary qubits. The interactions between the ancillary states and the target Bell state are performed to implement nondestructive entanglement discrimination. The quantum circuit is identical to that of Fig. 2. Note that no direct non-local operations between Alice and Bob are allowed, as they are assumed to be separated. The Bell state discrimination result is registered as the bit values \(a_{1}b_{1}a_{2}b_{2}\); see Table 1. IV. Verifying the nondestructive character. We perform a Bell state measurement to check whether the Bell state prepared in step II has been altered or not. This is identical to the measurement of Fig. 1(a). Note that this step is unnecessary for other applications. In our experiment, we repeat the experimental run 10,000 times for each Bell state. The state discrimination results for the different Bell states are shown in Figure 5(a). They clearly show that the Bell states are well discriminated with high probability, with an average successful state discrimination probability of \(P_{D}=0.796\pm 0.005\) over all four Bell states.
Figure 5(b) shows the results for the post-discrimination states. One can see that the initial state is maintained with high probability; the average probability that the Bell states remain unchanged is \(P_{F}=0.800\pm 0.010\). The complete experimental results can be categorized according to the following truth table: i) successful state discrimination with unchanged Bell state (TT), ii) successful state discrimination but altered Bell state (TF), iii) failed state discrimination but unchanged Bell state (FT), and iv) failed state discrimination and altered Bell state (FF). The successful nondestructive Bell state discrimination result corresponds to the simultaneous success of state discrimination and an unaltered Bell state, i.e., the TT case. Figure 6 presents the truth table for the prepared Bell states. The nondestructive discrimination probability averaged over all four Bell states is \(P_{\rm succ}=0.736\pm 0.012\). The success probability \(P_{\rm succ}\) achieved by the practical quantum computer is well above the upper bound of Bell state discrimination without shared entanglement, \(P_{\text{cl}}=1/4\). Figure 4: The complete quantum circuit used to perform the proof-of-principle experiment on the IonQ quantum computer. It consists of the 4 steps described in the main text. Here, \(W\) is a single qubit Pauli gate, \(W\in\{I,X,Y,Z\}\), used to prepare an arbitrary Bell state. Figure 3: Success probability depending on the value of \(\lambda\). The analytical solution is based on the expansion for the two-ancilla Werner state. When \(\lambda=0\), the Werner state becomes two pairs of \(|\phi^{+}\rangle\), and the success probability is 1. However, as \(\lambda\) increases, the result goes to \(\frac{1}{16}\), since the ancillary Werner state becomes maximally mixed. Note that \(P_{\text{succ}}>P_{D}P_{F}\) indicates that there is a correlation between \(P_{D}\) and \(P_{F}\). We can also find such correlations in Fig. 6: the probability of the FF case is much higher than that of the individual failures of state discrimination (FT) and non-destructiveness (TF). ### Results for nondestructive Bell state discrimination with non-maximal entanglement In this section we show IonQ experimental results with Werner-state ancillae, as analyzed in Sec. II.3. We treat the statistics of the outcomes of the events that can occur with Werner-state ancillae, similarly to a twirling technique, to explore the effect of Werner-state ancillae with arbitrary \(\lambda\) values. Specifically, since it is impractical to realize Werner states with arbitrary \(\lambda\) as inputs to a practical quantum computer, we run the quantum computer with all the states composing the Werner state and take statistical combinations of these outcomes according to the weights derived from \(\lambda\). The results are implemented only for the case of discriminating the Bell state \(|\phi^{+}\rangle\); however, they can be extended to the other Bell states without loss of generality. The analytic results of Sec. II.3 and their implementation on the IonQ computer are shown in Fig. 7. For each Werner state, Eq. (2), with a different \(\lambda\), represented as a black circle in Fig. 7, we run the circuit with 9600 shots and repeat the process 100 times. The standard deviations of the points are less than 0.5% and smaller than the size of the markers. We can see that the entangled ancillary states with \(\lambda\) smaller than about 0.6 overcome the classical bound, a crossing point close to the theoretical bound \(\lambda=\frac{2}{3}\).
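The scaling \(P_{\text{succ}}=(1-\frac{3}{4}\lambda)^{2}\) can be checked with a few lines of Python, mimicking the statistical-mixture procedure above: each ancilla pair is drawn as \(|\phi^{+}\rangle\) with weight \(1-\frac{3\lambda}{4}\) or as one of the other three Bell states with weight \(\frac{\lambda}{4}\) each, and the run is counted as a nondestructive success only when both pairs are \(|\phi^{+}\rangle\). This is a simplified sketch of the analysis in Sec. II.3, not the authors' code; the shot count and random seed are arbitrary choices.

```python
import numpy as np

def p_success(lmbda, shots=100_000, rng=np.random.default_rng(7)):
    """Monte-Carlo estimate of P_succ for Werner-state ancillae.

    Index 0 stands for |phi+>; indices 1-3 stand for the other Bell
    states. Success requires both ancilla pairs to be |phi+>."""
    w = [1 - 3 * lmbda / 4, lmbda / 4, lmbda / 4, lmbda / 4]
    pair1 = rng.choice(4, size=shots, p=w)
    pair2 = rng.choice(4, size=shots, p=w)
    return np.mean((pair1 == 0) & (pair2 == 0))

for lam in [0.0, 0.3, 2 / 3, 1.0]:
    print(f"lambda={lam:.3f}: MC={p_success(lam):.4f}  "
          f"analytic={(1 - 3 * lam / 4) ** 2:.4f}")
```

At \(\lambda=2/3\) this gives \(0.25\), the classical bound, and at \(\lambda=1\) it gives \(1/16\), consistent with Fig. 3.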
Figure 5: Experimental results using the IonQ quantum computer. (a) Bell state discrimination probability given by the measurement results of the ancillary qubits. (b) The probability that the Bell states are measured in the Bell basis after the discrimination. Figure 6: Experimental truth table for nondestructive entanglement discrimination: i) successful state discrimination with unchanged Bell state (TT), ii) successful state discrimination but altered Bell state (TF), iii) failed state discrimination but unchanged Bell state (FT), and iv) failed state discrimination and altered Bell state (FF). Figure 7: Success probability depending on the value of \(\lambda\) for two-ancilla Werner states. The blue curve shows the theoretical prediction of the success probability \(P_{\text{succ}}\) as a function of \(\lambda\), analyzed in Sec. II.3, drawn for comparison. Each point of the IonQ experimental result is produced by taking 9600 shots and repeating the process 100 times. The deviations are small enough to reside inside the dots. Even when \(\lambda=0\), the IonQ result does not reach \(P_{\text{succ}}=1\), due to the noise of the IonQ processor. As \(\lambda\) increases, the result approaches \(\frac{1}{16}\), as predicted by theory for the maximally mixed ancilla. ## IV Conclusions While many quantum communication and network protocols begin with pre-shared entanglement between distant parties, this entanglement could have been contaminated during transmission or by a malicious third party. Therefore, it is important to verify the entanglement before quantum communication. Since the entanglement will be utilized for further quantum information processing, the verification should be done in a nondestructive manner. In this paper, we have shown that the success probability of nondestructive quantum state discrimination of all four Bell states can be at most \(P_{\rm cl}=1/4\) with local operations and classical communications. This limited success probability can be surpassed by utilizing pre-shared entanglement between the parties. In particular, we have proposed a scheme for complete nondestructive Bell state discrimination using two ancillary entangled qubit pairs and verified its experimental feasibility by demonstrating a proof-of-principle experiment on an IonQ quantum computer. It would be interesting to extend our scheme to discriminate the entanglement of high-dimensional and/or multipartite quantum states. ## V Acknowledgements This research was funded by the Korea Institute of Science and Technology (2E32241, 2E32801), the National Research Foundation of Korea (2023M3K5A1094805, 2023M3K5A1094805) and the Institute for Information & communications Technology Planning & Evaluation (IITP) (RS-2023-00222863, 2022-0-00463). H.K. is supported by the KIAS Individual Grant No. CG085301 at the Korea Institute for Advanced Study.
2305.17018
Formal Modelling for Multi-Robot Systems Under Uncertainty
Purpose of Review: To effectively synthesise and analyse multi-robot behaviour, we require formal task-level models which accurately capture multi-robot execution. In this paper, we review modelling formalisms for multi-robot systems under uncertainty, and discuss how they can be used for planning, reinforcement learning, model checking, and simulation. Recent Findings: Recent work has investigated models which more accurately capture multi-robot execution by considering different forms of uncertainty, such as temporal uncertainty and partial observability, and modelling the effects of robot interactions on action execution. Other strands of work have presented approaches for reducing the size of multi-robot models to admit more efficient solution methods. This can be achieved by decoupling the robots under independence assumptions, or reasoning over higher level macro actions. Summary: Existing multi-robot models demonstrate a trade off between accurately capturing robot dependencies and uncertainty, and being small enough to tractably solve real world problems. Therefore, future research should exploit realistic assumptions over multi-robot behaviour to develop smaller models which retain accurate representations of uncertainty and robot interactions; and exploit the structure of multi-robot problems, such as factored state spaces, to develop scalable solution methods.
Charlie Street, Masoumeh Mansouri, Bruno Lacerda
2023-05-26T15:23:35Z
http://arxiv.org/abs/2305.17018v2
# Formal Modelling for Multi-Robot Systems Under Uncertainty ###### Abstract **Purpose of Review:** To effectively synthesise and analyse multi-robot behaviour, we require formal task-level models which accurately capture multi-robot execution. In this paper, we review modelling formalisms for multi-robot systems under uncertainty, and discuss how they can be used for planning, reinforcement learning, model checking, and simulation. **Recent Findings:** Recent work has investigated models which more accurately capture multi-robot execution by considering different forms of uncertainty, such as temporal uncertainty and partial observability, and modelling the effects of robot interactions on action execution. Other strands of work have presented approaches for reducing the size of multi-robot models to admit more efficient solution methods. This can be achieved by decoupling the robots under independence assumptions, or reasoning over higher level macro actions. **Summary:** Existing multi-robot models demonstrate a trade off between accurately capturing robot dependencies and uncertainty, and being small enough to tractably solve real world problems. Therefore, future research should exploit realistic assumptions over multi-robot behaviour to develop smaller models which retain accurate representations of uncertainty and robot interactions; and exploit the structure of multi-robot problems, such as factored state spaces, to develop scalable solution methods. **Keywords:** Multi-Robot Systems, Markov Models, Uncertainty ## 1 Introduction The demand for multi-robot systems (MRSs) is increasing, due to their performance, flexibility, and fault tolerance [1; 2]. Successful multi-robot deployments have been completed in a range of domains, such as fulfilment centres [3], fruit fields [4], and roads [5]. For safe and robust multi-robot coordination in the real world, it is often desirable to consider _formal models_ of the MRS, which enable policy synthesis for well-defined objectives, as well as a formal analysis of such policies. In this review paper, we consider formal models that capture the task-level behaviour of the MRS. These model high-level capabilities such as navigation or manipulation, while abstracting the lower-level control required to implement these capabilities. Formal models are used alongside multi-robot planning [6] and reinforcement learning (RL) [7] techniques to synthesise robot behaviour, and alongside model checking [8] and simulation [9] techniques to evaluate task-level metrics of multi-robot performance. However, the success of these techniques is limited by the model's accuracy, in particular its capacity to capture and predict execution-time multi-robot behaviour [10]. For example, if we plan on an inaccurate model, our expectations of robot behaviour during planning diverge from what is observed during execution, which can lead to inefficient execution-time behaviour, or robot failure in the worst case. In this paper, we focus on modelling the _stochasticity_ of MRSs as, in any environment, robot behaviour is affected by the stochastic dynamics of the environment and the other robots. For example, a mobile robot operating in an office may fail to navigate upon a door being closed unexpectedly; or it may be unable to dock at a charging station if another robot is charging for longer than expected. 
We begin by introducing the types of uncertainty encountered by MRSs, including uncertainty over action outcomes [11], a robot's current state [12], and the duration and start time of robot actions [13; 14]. Next, we review modelling formalisms which capture these sources of uncertainty. We then describe how formal multi-robot models have been used to support advances in the application of planning, RL, model checking, and simulation techniques to MRSs. ## 2 Uncertainty in Multi-Robot Systems In this section, we outline the common forms and sources of uncertainty experienced by MRSs. **Outcome Uncertainty.** Robot uncertainty is most commonly captured over discrete action outcomes [11], such as whether a grasp action is executed successfully. Stochastic outcomes can occur due to robot navigation failure [15], battery depletion [16], or stochastic features of the environment such as hazards [17], resources [18], and doors [19]. **Partial Observability.** In some MRSs, robots only partially observe the environment, which prevents them from knowing each other's states. This is often caused by limited communication and sensing capabilities, such as imperfect localisation [20], limited network range [21], or object occlusion [22]. Under partial observability, robots form a _belief_ over the true state of the environment and other robots using possibly noisy observations obtained from sensors. **Temporal Uncertainty.** Sources of temporal uncertainty affect the duration and start time of robot actions during execution [13; 23; 24]. Temporal uncertainty occurs in almost any robot environment, where action durations are affected by environmental disturbances, such as unknown obstacles or adverse weather conditions. For example, a mobile robot's tire may slip on a carpet while navigating through an office, slowing it down. Further, robots may have to wait for stochastic temporal processes in the environment, such as order arrival in a fulfilment centre, before beginning task execution [25]. **The Effect of Robot Interactions.** A particularly relevant driver of uncertainty in MRSs is the fact robots typically _share resources_, such as space or access to a charging station, and _must interact_ with each other [14]. For example, when multiple mobile robots navigate in the same physical space simultaneously, they may experience _congestion_, which increases uncertainty over action duration [23]. Alternatively, a robot manipulator may be more likely to fail a grasp if another robot is nearby, restricting its movement. ## 3 Formal Multi-Robot Models In this section, we review modelling formalisms for MRSs, which we summarise in Table 1. At their foundation, each of these models consists of _states_, which describe a snapshot of the MRS and environment, and _transitions_ between states, which define the system dynamics. ### Classical Multi-Robot Models _Joint transition systems (JTSs)_ model MRSs with deterministic dynamics [26; 27; 10; 28; 29]. JTS states are often factored into local states for each robot, e.g. their location and battery level, and a shared set of global state features, such as whether doors in the environment are open. JTSs are fully deterministic, and so fail to capture the stochastic dynamics of real robot environments. _Multi-agent Markov decision processes (MMDPs)_ are a natural extension of JTSs to stochastic domains [6]. Similar to JTSs, MMDPs capture robots in a joint state and action space, but MMDP actions have probabilistic outcomes. 
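To make the exponential blow-up of these joint models concrete, a minimal sketch (the local state and action sets below are toy values of ours, not from any cited system) enumerates the product spaces:

```python
import itertools

# Toy local spaces for one robot; an MMDP/JTS works over their n-fold products.
LOCAL_STATES = ["room_a", "room_b", "corridor"]
LOCAL_ACTIONS = ["move", "wait"]

def joint_space(n_robots):
    states = list(itertools.product(LOCAL_STATES, repeat=n_robots))
    actions = list(itertools.product(LOCAL_ACTIONS, repeat=n_robots))
    return states, actions

for n in (1, 2, 4, 8):
    states, actions = joint_space(n)
    print(f"{n} robots: {len(states)} joint states, {len(actions)} joint actions")
# 8 robots already give 3**8 = 6561 joint states and 2**8 = 256 joint actions,
# before adding any shared environment features.
```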
MMDPs are a common formalism for MRSs, and have been used to model drone fleets [30], warehouse robots [25], and human-robot teams [31]. MMDPs and JTSs assume synchronous execution, i.e. robots execute their actions in lockstep, and all actions have the same duration. Further, the joint state and action spaces yield an exponential blow-up in the number of robots being modelled. In practice, robot action durations are inherently continuous and uncertain, where robot interactions contribute towards this uncertainty [23; 24; 32; 14; 33]. Thus, to accurately capture multi-robot behaviour, we require formalisms which model asynchronous multi-robot execution and uncertainty over action duration. One approach for explicitly doing this is to use continuous-time Markov models, which we discuss later in this section. ### Avoiding the Exponential Scalability of Joint Models The number of MMDP or JTS states and actions increase exponentially in the number of robots [6], which makes optimal solutions for planning [34], RL [35], and model checking [10] intractable. This can be improved by making different assumptions which simplify the model. In fact, there has been a significant research effort to identify realistic assumptions for specific multi-robot problems. _Transition-independent MMDPs (TI-MMDPs)_[36] and _constrained MMDPs (CMMDPs)_[37] assume the transition dynamics of each robot are independent, but couple the MRS through rewards and shared resources, respectively. _Team MMDPs_[38] also treat the transition dynamics independently, modelling robots sequentially in the context of simultaneous task allocation and planning problems. Transition independence assumptions allow for weakly-coupled models that operate outside of the joint state and action space and reduce the model size, thus facilitating the use of more efficient solution methods. However, in cases where execution-time robot interactions affect the outcome and duration of robot actions, the transition-independent models above are unable to accurately reflect the MRS. For many multi-robot problems, robots can act independently for the majority of execution, as interactions are _sparse_. For example, two robots conducting a handover can ignore each other until they are close. _Interaction-driven Markov games (IDMGs)_[39] and _decentralised sparse interaction MDPs (Dec-SIMDPs)_[40; 41] exploit this to reduce the space complexity whilst still accounting for execution-time interactions. IDMGs and Dec-SIMDPs are equivalent, and capture an MRS using an independent MDP per robot, and a set of interaction MMDPs, which define joint MRS behaviour in interaction areas, such as near a doorway. Though interaction MMDPs are joint models, they are significantly smaller than the full MMDP, as they are defined over only a small fraction of the full MMDP state space. However, these models are only useful when interactions are localised to a small, fixed part of the environment. If this does not hold, they become equivalent to the full MMDP. Finally, a commonly used approach to avoid the use of joint models while still considering robot dependencies and execution-time interactions is to model the MRS as a set of single-robot models that are extended to include some knowledge of the other robots. In [25; 42], _spatial task allocation problems (SPATAPs)_ are modelled using single-robot models which aggregate the response of the other robots. 
The aggregate response is represented as a distribution which predicts whether any robot is present at a given location. This is computed by combining individual distributions over each robot's location, and allows robots to predict which tasks will be handled by other robots during planning. A similar approach is taken in [23], where an MRS is modelled using single-robot _time-varying Markov automata (TVMA)_ which capture the probabilistic effects of congestion caused by the other robots. In this context, congestion is represented as a distribution over the number of robots present in each area of the environment, and distributions of navigation duration under the presence of a specific number of robots are obtained from real-world multi-robot navigation data. To solve multi-robot planning problems, [24] augment single-robot models with a cost function which captures the effects of robot interactions. This cost function is then adjusted iteratively during planning to encourage robot collaboration. ### Partially Observable Multi-Robot Models Partially observable MDPs (POMDPs) are widely used to model partially observable problems, where robots make observations which update their belief over their current state [12]. _Decentralised POMDPs (Dec-POMDPs)_ extend POMDPs to multi-robot settings [43], where each robot has its own set of local observations. Dec-POMDPs have been used for warehouse robotics [44], cooperative package delivery [45], and teams of unmanned aerial vehicles [46]. If the combined local observations of each robot uniquely identify the joint state, Dec-POMDPs are reduced to Dec-MDPs, which are easier to solve [43]. However, these are still joint models, and optimal solvers for both Dec-POMDPs and Dec-MDPs have even higher time complexity than MMDP solvers [43]. To reduce the space complexity related to the joint modelling in Dec-POMDPs, [47; 48] consider decoupling them into local POMDPs for each robot. For each of these local POMDPs, they compute a distribution which captures how external state factors influence its local state. These external state factors include the states of the other robots. This is then used to marginalise out the external state factors to construct single-robot POMDPs. This _influence-based abstraction_ produces smaller models. However, computing influence distributions is intractable in general [48]. Another class of relevant POMDP-based models are _macro action Dec-POMDPs (MacDec-POMDPs)_[49] and _decentralised partially observable semi-MDPs (Dec-POSMDPs)_[45], which consider _macro actions_ which execute a series of primitive low-level actions, such as moving one grid cell forward. This hierarchical paradigm is based on the options framework [50] for MDPs and has two main benefits. First, it reduces model size by leveraging existing behaviour such as navigation, and modelling behaviour at the macro action level, rather than each time step. Second, the use of temporally extended actions seamlessly enables asynchronous action execution. Each MacDec-POMDP and Dec-POSMDP has an underlying Dec-POMDP which captures the low-level actions that form the macro actions. For MacDec-POMDPs, the underlying Dec-POMDP and the policies for each macro action are assumed to be known [51]. MacDec-POMDP policies can then be evaluated by unrolling the macro actions on the low-level Dec-POMDP. Unlike MacDec-POMDPs, Dec-POSMDPs capture macro actions using distributions over their completion time, where Dec-POSMDP policies can be evaluated through simulation. 
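The belief maintenance underlying all of these partially observable models is the standard discrete POMDP update \(b'(s')\propto O(o\mid s',a)\sum_{s}T(s'\mid s,a)\,b(s)\); a self-contained sketch follows (the two-state door example is our own toy, not from any paper reviewed here):

```python
import numpy as np

# Standard discrete POMDP belief update:
# b'(s') is proportional to O(o | s', a) * sum_s T(s' | s, a) * b(s).
def belief_update(b, a, o, T, O):
    # T[a][s, s'] is the transition matrix, O[a][s', o] the observation matrix.
    predicted = b @ T[a]                 # sum_s T(s'|s,a) b(s)
    unnorm = predicted * O[a][:, o]      # weight by observation likelihood
    return unnorm / unnorm.sum()

# Tiny two-state example: a door that is open (0) or closed (1).
T = {"listen": np.eye(2)}                 # listening does not change the door
O = {"listen": np.array([[0.85, 0.15],    # hear "open" with prob 0.85 if open
                         [0.15, 0.85]])}  # hear "closed" with prob 0.85 if closed
b = np.array([0.5, 0.5])
b = belief_update(b, "listen", 0, T, O)   # observed "open"
print(b)   # belief shifts toward the open state: [0.85, 0.15]
```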
### Continuous-Time Multi-Robot Models Several models have been proposed to take into account uncertainty over action duration in the context of MRSs which are evolving asynchronously. These make use of continuous-time distributions which capture the stochasticity in robot action durations. _Continuous-time MDPs (CTMDPs)_ extend MDPs to include durative transitions represented as exponential delays [52], and have been used to model multi-robot data collection problems [53]. To model asynchronous multi-robot execution, CTMDPs can be defined over a joint state and action space, similar to MMDPs. Thus, as with MMDPs, they scale exponentially in the number of robots. To mitigate this, [53] constructs single-robot CTMDPs assuming transition independence, similar to [36; 37]. The duration of each action in a CTMDP is modelled with a single exponential distribution. This is a convenience which allows for simpler solution approaches which exploit the memoryless property of the exponential distribution, but limits the accuracy with which we can capture robot action durations. Many multi-robot models can capture _heterogeneous_ MRSs (see Table 1), where robots have different capabilities and resource usage etc. This is often achieved using local action spaces or reward functions for each robot. _Generalised stochastic Petri nets (GSPNs)_[54] are a modelling formalism for _homogeneous_ MRSs, i.e. the robots are identical, where robots are represented _anonymously_ as tokens. Further, as in CTMDPs, durations are restricted to exponentials. GSPNs remain exponential in the team size, but robot anonymity provides a practical reduction in the number of states. GSPNs have been used to model teams of football robots [55], autonomous haulers [56], and monitoring robots [57]. _Generalised semi-MDPs (GSMDPs)_ can capture concurrent execution and stochastic durations, and have been applied to MRSs in [32; 58], but are complex to define and hard to solve, as GSMDPs allow for arbitrary duration distributions. _Multi-robot Markov automata (MRMA)_[14] also allow for arbitrary duration distributions to capture asynchronous multi-robot execution in continuous time. Markov automata (MA) extend MDPs and CTMDPs by explicitly separating instantaneous robot action choice and the duration of robot actions [59]. MRMA are joint models, where robot action durations are represented as phase-type distributions (PTDs), which are sequences of exponentials capable of capturing any nonnegative distribution to an arbitrary level of precision [60]. In an MRMA, there is a different duration distribution for each spatiotemporal situation an action may be executed under, referred to as the _context_, which captures the effects of robot interactions on action execution. By separating robot decision making from action duration, robot interactions can be detected at the instant an action is triggered by analysing the joint MRMA state. MRMA are connected to other continuous-time multi-robot models. First, GSPN semantics can be described with an MA [61]. Second, a standard solution for GSMDPs involves converting all duration distributions into PTDs [60], which produces a model similar to an MRMA [58]. However, MRMA are simpler to define, and can be solved directly [62], as all durations are exponentials/PTDs by definition. ## 4 Model Applications In this section, we discuss how the multi-robot models in Table 1 have been solved and analysed for multi-robot planning, RL, model checking, and simulation. We summarise this discussion in Table 2. 
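Before turning to the applications, one modelling point above is easy to make concrete: a phase-type duration is just a sequence of exponential phases, and even the simplest case (an Erlang distribution, sketched below with toy parameters of ours) captures action durations far better than a single exponential with the same mean:

```python
import numpy as np

rng = np.random.default_rng(0)

# An Erlang distribution is the simplest phase-type distribution: the sum of
# k independent exponential phases with a common rate.
def sample_erlang(k, rate, n):
    return rng.exponential(1.0 / rate, size=(n, k)).sum(axis=1)

n = 100_000
single = rng.exponential(2.0, n)          # one exponential phase, mean 2.0
erlang = sample_erlang(4, 2.0, n)         # four phases of rate 2.0, mean 2.0
print(f"means:  {single.mean():.2f} vs {erlang.mean():.2f}")
print(f"stddev: {single.std():.2f} vs {erlang.std():.2f}")
# Same mean, but the Erlang samples are far more concentrated around it --
# like a navigation action that essentially never finishes instantly.
```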
Note that in Table 2 we do not list foundational works which apply to more general models, such as heuristic search approaches for MDPs which can be applied to MMDPs [34], or MA model checking techniques which can be applied to MRMA [62].

\begin{table}
\begin{tabular}{|l|c|c|c|c|}
\hline **Model** & **Planning** & **Reinforcement Learning** & **Model Checking** & **Simulation** \\
\hline JTS [10] & [26; 27; 28; 29] & - & [26; 27; 28; 29] & - \\
\hline MMDP [6] & [30; 63; 64] & [65; 66] & [63; 64] & - \\
\hline TI-MMDP [36] & [36] & - & - & - \\
\hline CMMDP [37] & [18; 67; 68; 69; 70; 71; 72] & [73; 74] & - & - \\
\hline Team MMDP [38] & [38] & - & [38] & - \\
\hline Dec-SIMDP/IDMG [39; 40] & [39; 40; 41] & [75; 76] & - & - \\
\hline SPATAP Model [42] & [25; 42] & - & - & - \\
\hline TVMA per Robot [23] & [23] & - & - & - \\
\hline Dec-POMDP [43] & [77; 78; 79; 80; 81; 82; 83; 84; 85; 86; 87; 88; 89; 90; 91; 92; 93; 94; 95] & - & - & - \\
\hline MacDec-POMDP [49] & [44; 49; 89; 90; 91; 92; 94; 95] & - & - & - \\
\hline Dec-POSMDP [45] & [77; 96; 45] & - & - & - \\
\hline CTMDP [52] & [53; 97] & - & - & - \\
\hline GSPN [54] & [56; 57; 98; 99] & - & - & [55; 99] \\
\hline GSMDP [58] & [32] & - & - & - \\
\hline MRMA [14] & - & - & - & [14] \\
\hline
\end{tabular}
\end{table}
Table 2: Applications of the models in Table 1 for multi-robot/multi-agent problems.

### Planning

Multi-robot planning techniques synthesise robot behaviour given a formal model of the system. Many multi-robot models can be solved with standard techniques. MMDPs can be solved exactly using MDP solvers such as value or policy iteration [100; 101]. However, these methods solve for all states, making them intractable for joint multi-robot models. Heuristic and sampling-based methods such as labelled real-time dynamic programming [102] or Monte-Carlo tree search [103] improve upon the limited scalability of exact solvers by restricting search to promising areas of the state space. Despite reducing the explored states, heuristic algorithms are slow to converge on large models, but often provide anytime behaviour such that valid solutions are synthesised quickly and improved with time. The poor scalability of MMDP planning motivates planning on simplified models. For TI-MMDPs [36], transition independence allows for compact representations of reward dependencies in conditional return graphs, which admits efficient solutions. For Dec-SIMDPs and IDMGs, the single-robot MDPs and interaction MMDPs can be solved separately using standard solvers such as value iteration [39]. Similarly, the SPATAP models in [42] are single-robot MDPs which capture the effects of the other robots, and can be solved separately. CMMDP approaches typically exploit the fact that only the resource constraint couples the agents to scale to larger problems. Planning for CMMDPs has considered a range of constraints over resource consumption, such as bounding its worst case [67], considering a chance constraint [68; 71], and bounding its conditional value at risk [72]. MMDPs can be solved tractably if they are sufficiently small. Therefore, in [63] robots are grouped into clusters based on robot dependencies, and each cluster is solved as a separate MMDP. Similarly, in [64] robots are incrementally added to an MMDP to control scalability.
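For reference, the exact solvers mentioned above are compact; the following is textbook value iteration (the transition and reward data are toy values of ours), directly applicable to an MMDP once the joint spaces are enumerated -- the enumeration itself being the bottleneck:

```python
import numpy as np

# Textbook value iteration. T[a, s, s'] = P(s' | s, a); R[s, a] = reward.
def value_iteration(T, R, gamma=0.95, eps=1e-8):
    V = np.zeros(T.shape[1])
    while True:
        Q = R + gamma * np.einsum("asp,p->sa", T, V)   # Bellman backup
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < eps:
            return V_new, Q.argmax(axis=1)             # values, greedy policy
        V = V_new

# Toy 2-state, 2-action model.
T = np.array([[[0.9, 0.1], [0.2, 0.8]],     # action 0
              [[0.5, 0.5], [0.0, 1.0]]])    # action 1
R = np.array([[0.0, 1.0], [2.0, 0.0]])
V, pi = value_iteration(T, R)
print(V, pi)
```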
Recent work [30] has begun to address the poor scalability of MMDP planning. There, an anytime planner for MMDPs based on Monte Carlo tree search is presented, where robot dependencies are exploited to decompose the value function into a set of factors from which the optimal joint action can be computed. This approach scales to previously intractable problems. Solution methods for continuous-time multi-robot models differ depending on the objective. To solve CTMDPs for time-abstract objectives, such as expected untimed reward, MDP solvers are applied to an embedded time-abstract MDP. For timed objectives, MDP solvers are instead applied to a uniformised MDP, where each state has the same expected sojourn time [104; 58; 105]. Similarly, GSPNs can be converted to an MDP [56] or an MA [57] depending on the objective and solved with standard techniques. For MRMA, we can plan using MA solution methods [62]. Dec-POMDPs can be solved centrally to synthesise local policies for decentralised execution, which map from local action-observation histories to actions [77; 78; 79; 46]. With this, local Dec-POMDP policies are robust to communication limitations and unreliable sensors. Dec-POMDP solutions can be adapted to MacDec-POMDPs and Dec-POSMDPs to synthesise policies over macro actions. In [89], the space of macro-action policies is searched exhaustively, where efficient simulators improve the scalability of policy evaluation [44]. This approach scales poorly, which is addressed in [90], where a heuristic search method optimises finite state controllers for each robot. However, MacDec-POMDP and Dec-POSMDP solutions have not been shown to scale beyond teams of around four robots [45; 49].

### Reinforcement Learning (RL)

An alternative approach to policy synthesis is RL [35]. Planners synthesise behaviour using a model of the system, whereas RL approaches learn behaviour using data sampled from the environment [34; 35]. Multi-robot RL problems are formulated assuming an underlying multi-robot model which is unknown prior to training. Fully observable, centralised problems can be formulated as an MMDP [65; 66] and solved using standard RL techniques such as deep Q-learning [106]. However, these techniques do not scale to multi-robot problems due to the exponential increase in the state and action space [66; 80]. In many settings, decentralised policies are required due to limited communication or partial observability [81; 80]. Here, multi-robot RL can be formulated as a Dec-POMDP and solved under the paradigm of centralised training with decentralised execution [107], which allows additional state information not available during execution, such as the joint state, to be used during training. One example of this paradigm is QMix [80], which uses a mixing network to estimate the joint Q value from single-robot Q values. However, RL techniques for Dec-POMDPs are still slow to converge, and so MacDec-POMDPs can be used to exploit existing behaviours and improve the efficiency of learning [92; 93; 94; 95].
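As a minimal counterpart to the planning sketch above, textbook tabular Q-learning on a toy joint model (the environment and parameters below are ours) shows why the joint Q-table is the scalability bottleneck that motivates factored approaches such as QMix:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tabular Q-learning on a tiny *joint* model: one row per joint state and one
# column per joint action -- exactly the object that grows exponentially.
n_states, n_actions = 9, 4          # e.g. 2 robots x 3 cells -> 9 joint states
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1

def step(s, a):
    # Toy deterministic dynamics with a reward for reaching the last state.
    s2 = (s + a) % n_states
    r = 1.0 if s2 == n_states - 1 else 0.0
    return s2, r

s = 0
for _ in range(50_000):
    a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
    s2, r = step(s, a)
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])   # TD update
    s = s2
print(Q.argmax(axis=1))   # greedy joint action per joint state
```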
### Model Checking

Model checking techniques evaluate the behaviour induced by robot policies by systematically checking if a property is satisfied in a formal robot model [10]. Properties are often specified with temporal logics such as linear temporal logic (LTL) or continuous stochastic logic (CSL). Similar to planning, many of the multi-robot models in Table 1 can be verified using techniques for more general models. For example, LTL formulae can be verified on JTSs and MMDPs using techniques for transition systems and MDPs [10]. However, exact LTL model checking approaches compute a product of the model and an automaton that captures the LTL formula, which significantly increases the state space, making them unsuitable for multi-robot problems. MRMA can be model checked against CSL formulae using model checking techniques for MA [62]. This also applies to GSPNs, which can be represented as an MA with identical semantics [61]. Similar CSL model checking techniques are available for CTMDPs [108]. Model checking and planning are often combined to synthesise guaranteed multi-robot behaviour. For LTL specifications, we can plan over a joint product automaton; however, this quickly becomes intractable. To overcome this, [38] concatenate single-robot product automata through switch transitions in a team MMDP to reduce the state space. For MMDPs, in [64] robots are added incrementally to a product automaton until the full problem is solved, or a fixed computational budget is exceeded. Alternatively, in [29] the product automaton is explored incrementally through sampling for MRSs modelled as a JTS. Combined planning and model checking techniques have been used for multi-robot data gathering [26; 27], monitoring [28], and mobility-on-demand [64]. Statistical model checking (SMC) techniques evaluate properties by sampling through a model given a set of robot policies, which avoids enumerating the state space [109], and bridges the gap between model checking and simulation techniques, which we discuss later in this section. In [8], SMC is used to evaluate quantitative properties of an MRS. SMC techniques can be applied to many of the models in Table 1. For example, we can use SMC techniques for MA [110] to evaluate bounded or unbounded properties on an MRMA. A drawback of SMC is a possible failure to explore states reached with low probability, which can render SMC unsuitable for safety-critical systems [110].

### Simulation

Simulators evaluate multi-robot behaviour by executing a set of robot policies in an abstracted environment model. Using formal multi-robot models, we can create a discrete-event simulator (DES) by sampling stochastic outcomes and durations, and resolving non-determinism using robot policies. DESs mitigate the complexity of physics-based simulators such as Gazebo [111] by abstracting away low-level robot dynamics [112], allowing simulations to run orders of magnitude faster than real time. GSPNs, or variants thereof, have been used to simulate teams of football robots [55] and human-robot manufacturing teams [99]. In [14], a DES called CAMAS (context-aware multi-agent simulator) samples through an MRMA to evaluate task-level metrics of multi-robot performance under the effects of robot interactions, such as the time to complete a set of tasks.
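The sampling at the heart of both SMC and discrete-event simulation can be sketched in a few lines; here we estimate a bounded reachability probability under a fixed policy (the policy-induced transition matrix is a toy example of ours):

```python
import numpy as np

rng = np.random.default_rng(1)

# Monte Carlo estimate of P(reach goal within horizon) under a fixed policy,
# avoiding state-space enumeration. P[s, s'] is the policy-induced chain.
def estimate_reach(P, start, goal, horizon, n_samples=20_000):
    hits = 0
    for _ in range(n_samples):
        s = start
        for _ in range(horizon):
            s = rng.choice(len(P), p=P[s])   # sample one transition
            if s == goal:
                hits += 1
                break
    return hits / n_samples

P = np.array([[0.8, 0.15, 0.05],
              [0.3, 0.6, 0.1],
              [0.0, 0.0, 1.0]])      # state 2 = goal (absorbing)
print(estimate_reach(P, start=0, goal=2, horizon=10))
```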
## 5 Conclusions

In this paper, we reviewed modelling approaches for capturing the task-level behaviour of MRSs. We focused on stochastic models of multi-robot execution, and introduced the different types of uncertainty encountered by MRSs. Further, we discussed how these models have been used for multi-robot planning, RL, model checking, and simulation. Recent research has focused on constructing models which accurately capture the effects of uncertainty and robot interactions, or constructing models small enough to be solved efficiently. These two objectives are opposing, as to accurately capture multi-robot execution, we often require joint models which are frequently intractable to solve or analyse. Therefore, future research should focus on developing smaller multi-robot models which still accurately capture uncertainty and robot interactions. This may be achieved by identifying realistic assumptions over the sources of uncertainty and robot interactions, such as interactions only occurring in small portions of the state space. Exploiting these assumptions allows for smaller models which can be solved efficiently without sacrificing model accuracy. An alternative avenue for research is to exploit the structure of multi-robot problems, such as factored state spaces and dependencies between robots, to develop scalable solution methods for multi-robot models.

Acknowledgments. Charlie Street and Masoumeh Mansouri are UK participants in Horizon Europe Project CONVINCE, and supported by UKRI grant number 10042096. Bruno Lacerda is supported by the EPSRC Programme Grant 'From Sensing to Collaboration' (EP/V000748/1).

## Declarations

Conflict of Interest. The authors declare no competing interests.

Human and Animal Rights and Informed Consent. This article does not contain any studies with human or animal subjects performed by any of the authors.
2304.11998
Branching exponents of synthetic vascular trees under different optimality principles
The branching behavior of vascular trees is often characterized using Murray's law. We investigate its validity using synthetic vascular trees generated under global optimization criteria. Our synthetic tree model does not incorporate Murray's law explicitly. Instead, we assume it holds implicitly and investigate the effects of different physical constraints and optimization goals on the branching exponent that is now allowed to vary locally. In particular, we include variable blood viscosity due to the Fåhræus–Lindqvist effect and enforce an equal pressure drop between inflow and the micro-circulation. Using our global optimization framework, we generate vascular trees with over one million terminal vessels and compare them against a detailed corrosion cast of the portal venous tree of a human liver. Murray's law is implicitly fulfilled when no additional constraints are enforced, indicating its validity in this setting. Variable blood viscosity or equal pressure drop leads to deviations from this optimum, but with the branching exponent inside the experimentally predicted range between 2.0 and 3.0. The validation against the corrosion cast shows good agreement from the portal vein down to the venules. Not enforcing Murray's law explicitly reduces the computational cost and increases the predictive capabilities of synthetic vascular trees. The ability to study optimal branching exponents across different scales can improve the functional assessment of organs.
Etienne Jessen, Marc C. Steinbach, Charlotte Debbaut, Dominik Schillinger
2023-04-24T11:02:37Z
http://arxiv.org/abs/2304.11998v1
# Branching Exponents of Synthetic Vascular Trees under Different Optimality Principles ###### Abstract _Objective:_ The branching behavior of vascular trees is often characterized using Murray's law. We investigate its validity using synthetic vascular trees generated under global optimization criteria. _Methods:_ Our synthetic tree model does not incorporate Murray's law explicitly. Instead, we assume it holds implicitly and investigate the effects of different physical constraints and optimization goals on the branching exponent that is now allowed to vary locally. In particular, we include variable blood viscosity due to the Fahraeus-Lindqvist effect and enforce an equal pressure drop between inflow and the micro-circulation. Using our global optimization framework, we generate vascular trees with over one million terminal vessels and compare them against a detailed corrosion cast of the portal venous tree of a human liver. _Results:_ Murray's law is implicitly fulfilled when no additional constraints are enforced, indicating its validity in this setting. Variable blood viscosity or equal pressure drop leads to deviations from this optimum, but with the branching exponent inside the experimentally predicted range between 2.0 and 3.0. The validation against the corrosion cast shows good agreement from the portal vein down to the venules. _Conclusion:_ Not enforcing Murray's law explicitly reduces the computational cost and increases the predictive capabilities of synthetic vascular trees. _Significance:_ The ability to study optimal branching exponents across different scales can improve the functional assessment of organs.

_Keywords:_ branching exponents, Fahraeus-Lindqvist effect, human liver, Murray's law, synthetic vascular trees, vascular corrosion cast

## 1 Introduction

The cardiovascular system is responsible for transporting blood to and from all cells in the human body, leading to hierarchical networks of vessels, called vascular trees, inside each organ. According to Murray [1], this hierarchy obeys scaling relations based on the minimization of the total energy expenditure of the system. Many factors influence and constrain this minimization process, such as the type and shape of the organ supplied, the demand of the organ's cells, and the existence of vascular diseases. The goals and constraints guiding the structural development of vascular trees, and their influence on the vascular system, have yet to be entirely understood, even though extensive work has been carried out for over a century. Thus, the analysis of vascular diseases based on the anatomy and physiology of the vascular structure remains a challenge. Murray first described a minimization problem for vascular segments in 1926 [1, 2]. Here, a tree is approximated as a bifurcating network consisting of rigid tubes, and the physical principles for fluid flow follow Poiseuille's law. The goal is to minimize the total power of the tree network, with its minimum characterized by _Murray's law_. It describes the relationship of the radius of a parent vessel \(r_{0}\) against the radii of its child vessels \((r_{1},r_{2})\) as a power law with \[r_{0}^{\gamma}=r_{1}^{\gamma}+r_{2}^{\gamma}. \tag{1}\] The branching exponent \(\gamma\) became an essential parameter for characterizing the branching behavior of vascular trees. In Murray's original formulation, \(\gamma=3.0\) is constant across the entire network. An extensive number of studies have been conducted to investigate Murray's law experimentally [3, 4, 5].
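In such studies, the exponent at each bifurcation is recovered from the measured radii by numerically solving Eq. (1); a minimal Newton-Raphson sketch (the radii below are toy values of ours, not measurements):

```python
import numpy as np

# Solve r1**g + r2**g - r0**g = 0 for the branching exponent g by Newton-Raphson.
def branching_exponent(r0, r1, r2, g=2.0, tol=1e-12, max_iter=100):
    for _ in range(max_iter):
        f = r1**g + r2**g - r0**g
        df = np.log(r1) * r1**g + np.log(r2) * r2**g - np.log(r0) * r0**g
        step = f / df
        g -= step
        if abs(step) < tol:
            return g
    raise RuntimeError("Newton iteration did not converge")

# A symmetric bifurcation satisfying Murray's law with gamma = 3 has
# r1 = r2 = r0 / 2**(1/3).
r0 = 1.2
r1 = r2 = r0 / 2 ** (1 / 3)
print(branching_exponent(r0, r1, r2))   # ~3.0
```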
In general, exponents between 2.0 and 3.0 were measured. In [6], exponents were even observed to exceed the theoretical limit of 3.0, reaching \(\gamma=3.2\). Multiple theoretical studies have analyzed the possible factors contributing to these branching behaviors. An extension to Murray's law was proposed by Uylings [7], which incorporated the effects of turbulent flow into the minimization problem. Results show branching exponents as low as 2.33 for turbulent flow. In [8], a vascular model was investigated which considered the role of elastic tubes. Compared to rigid tubes, the effect of pulsatile flow lowered the optimal value to 2.3. Zhou, Kassab, and Molloi [9, 10] generalized Murray's law hypothesis to an entire coronary arterial tree by defining a vessel segment as a stem and the tree distal to the stem as a crown. They showed that \(\gamma\) deviates from 3.0 even for steady-state flow and depends on the ratio between metabolic demand and viscous power dissipation. An alternative approach to investigate branching exponents is to construct vascular trees synthetically. The most well-known generation method here is constrained constructive optimization (CCO) [11]. This local optimization approach is directly based on Murray's minimization principles and allows investigation of different, albeit constant, values for \(\gamma\), such as 2.55 [12] or 3.0 [13]. Another approach, based on Simulated Annealing (SA), included the branching exponent as an optimization parameter [14]. Results show that the vascular topology and the metabolic demand significantly influence the value of the branching exponent. Recently, the authors extended the CCO approach to finding a synthetic tree optimal both in (global) geometry and topology [15]. Finding the optimal geometry is cast into a nonlinear optimization problem (NLP), which allows the investigation of various possible goal functions and constraints. In this paper, we utilize this flexibility and go beyond previous studies by allowing the branching exponent \(\gamma\) to vary locally. Furthermore, we include a blood viscosity law based on the Fahraeus-Lindqvist effect [16] and enforce an equal pressure drop to terminal vessels. The goal is to investigate the change in branching exponents under these influences. We start by introducing the relevant definitions and assumptions to generate synthetic trees. We then cast our goals and constraints into NLPs and introduce our optimization framework in more detail. Finally, we generate full portal venous trees of the human liver with up to one million terminal vessels and compare them against a vascular corrosion cast of a human liver [17, 18].

## 2 Methods

### Definitions and assumptions

We represent a vascular tree as a directed branching network \(\mathbb{T}=(\mathbb{V},\mathbb{A})\) with nodes \(u\in\mathbb{V}\) and segments \(a\in\mathbb{A}\). Each segment \(a=uv\) connects a proximal node \(x_{u}\) with a distal node \(x_{v}\). It approximates a vessel as a rigid and straight cylindrical tube and is defined by its radius \(r_{a}\), length \(\ell_{a}=\|x_{u}-x_{v}\|\), volumetric flow \(Q_{a}\) and apparent blood viscosity \(\eta_{a}\). The distal nodes of _terminal segments_ are terminal nodes (_leaves_) \(v\in\mathbb{L}\), and the proximal node of the (single) _root segment_ is the root node \(x_{0}\).
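A minimal data structure matching these definitions (the coordinates and values below are toy choices of ours) might look as follows:

```python
import numpy as np
from dataclasses import dataclass

# Directed branching network T = (V, A): each segment a = uv stores its
# radius and flow; its length is the distance between its two nodes.
@dataclass
class Segment:
    u: int          # proximal node index
    v: int          # distal node index
    radius: float   # r_a in mm
    flow: float     # Q_a in mm^3/s

nodes = np.array([[0.0, 0.0, 0.0],    # root node x_0
                  [0.0, 0.0, 10.0],   # branching node
                  [-4.0, 0.0, 18.0],  # leaf
                  [4.0, 0.0, 18.0]])  # leaf

segments = [Segment(0, 1, 1.0, 10.0),
            Segment(1, 2, 0.8, 5.0),
            Segment(1, 3, 0.8, 5.0)]

def length(seg):
    return float(np.linalg.norm(nodes[seg.u] - nodes[seg.v]))

for s in segments:
    print(f"segment {s.u}->{s.v}: length {length(s):.2f} mm")
```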
A synthetic vascular tree perfuses blood at a steady state from the root segment down to the terminal segments inside a given (non-convex) perfusion volume \(\Omega\subset\mathbb{R}^{3}\), schematically shown in Fig. 1.

Figure 1: Schematic of a vascular tree and its relation to nodes and segments. Red circles denote a node, and white rectangles a segment. This tree has a given inflow \(Q_{1}=Q_{\text{perf}}\) and equal terminal outflow \(Q_{2}=Q_{3}=Q_{\text{term}}\) through each of the outlets (leaves).

As in Murray's original paper [1], we assume laminar flow and approximate blood as an incompressible homogeneous Newtonian fluid. We express the hydrodynamic resistance \(R_{a}\) of segment \(a\) by Poiseuille's law with \[R_{a}=\frac{8\eta_{a}}{\pi}\frac{\ell_{a}}{r_{a}^{4}}\quad\forall a\in\mathbb{A}. \tag{2}\] The pressure drop over a segment can now be computed as \[\Delta p_{a}=R_{a}Q_{a}\quad\forall a\in\mathbb{A}, \tag{3}\] and the pressure at a node \(v\) follows with \[p_{v}=p_{u}-\Delta p_{a}\quad\forall uv\in\mathbb{A}. \tag{4}\] We further assume that the (known) perfusion flow \(Q_{\text{perf}}\) is homogeneously distributed among all \(N\) terminal segments, leading to a terminal flow value \(Q_{\text{term}}=Q_{\text{perf}}/N\). All remaining flow values can then be computed using Kirchhoff's law with \(Q_{uv}=\sum_{vw\in\mathbb{A}}Q_{vw}\ \forall v\in\mathbb{V}\setminus(\{0\}\cup\mathbb{L})\). We aim at generating vessels down to the smallest arterioles/venules with typical radii in the range of \(0.015\,\mathrm{mm}\) to \(0.1\,\mathrm{mm}\). The Fahraeus-Lindqvist effect [19] should be accounted for at this scale. It describes how the blood viscosity decreases as the vessel diameter decreases. The tendency of red blood cells to migrate toward the vessel center is largely responsible for this effect. In turn, this forces plasma toward the walls and decreases peripheral friction. At the smallest vessels, with radii approaching the radii of red blood cells, the viscosity sharply rises again. Pries et al. [16] derived an empirical relationship for this behavior with \[\eta(r_{a})=\eta_{p}\big{(}\kappa+\kappa^{2}\big{(}\eta_{45}-1\big{)}\big{)}, \tag{5}\] \[\eta_{45}=6\exp(-170\,r_{a}/\mathrm{mm})-2.44\exp\big{(}-8.08\,(r_{a}/\mathrm{mm})^{0.645}\big{)}+3.2, \tag{6}\] \[\kappa=\frac{r_{a}^{2}}{(r_{a}-0.00055\,\mathrm{mm})^{2}}, \tag{7}\] where \(\eta_{p}\) is the viscosity of the plasma, which we set to \(\eta_{p}=1.125\,\mathrm{cP}\), and \(\eta_{45}\) is the relative apparent blood viscosity for a discharge hematocrit of 0.45. This relationship is depicted in Fig. 2 for the relevant radii between \(0.015\,\mathrm{mm}\) and \(10\,\mathrm{mm}\).

### Design goals and constraints

#### 2.2.1 Murray's minimization problem

The original minimization formulation by Murray states that the total power of a vascular tree consists of the metabolic power required to sustain blood \(P_{\text{vol}}\) and the viscous power \(P_{\text{vis}}\) required to pump blood from the root down to the micro-circulation. The cost function of a tree is then defined as \[f_{\text{T}}=P_{\text{vol}}+P_{\text{vis}}=\sum_{a\in\mathbb{A}}m_{b}\pi\ell_{a}r_{a}^{2}+\frac{8\eta_{a}}{\pi}\frac{\ell_{a}}{r_{a}^{4}}Q_{a}^{2}, \tag{9}\] where \(m_{b}\) is the metabolic demand of blood in \(\mu\mathrm{W}\,\mathrm{mm}^{-3}\).
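The empirical law in Eqs. (5)-(7) can be evaluated directly; the sketch below implements it as printed (radii in mm, viscosities in cP) and reproduces the behavior shown in Fig. 2:

```python
import numpy as np

ETA_P = 1.125   # plasma viscosity in cP

def eta45(r):
    # Relative apparent viscosity at discharge hematocrit 0.45, r in mm (Eq. 6).
    return 6.0 * np.exp(-170.0 * r) - 2.44 * np.exp(-8.08 * r**0.645) + 3.2

def kappa(r):
    # Radius-dependent factor from Eq. (7), r in mm.
    return r**2 / (r - 0.00055) ** 2

def viscosity(r):
    # Apparent blood viscosity in cP (Eq. 5).
    k = kappa(r)
    return ETA_P * (k + k**2 * (eta45(r) - 1.0))

for r in (0.015, 0.1, 1.0, 10.0):
    print(f"r = {r:6.3f} mm  ->  eta = {viscosity(r):.2f} cP")
```

At \(r=1\,\mathrm{mm}\) this evaluates to roughly \(3.6\,\mathrm{cP}\), matching the constant viscosity \(\eta_{\text{const}}\) used in the simplified variants below.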
As described in [15], we can now include the nodal positions \(x\), the lengths \(\ell\) and radii \(r\), as well as the blood viscosity \(\eta\), in the vector of optimization variables \(y\), leading to \(y_{1}=(x,\ell,r,\eta)\). We add physical lower bounds \(\ell^{-}\), \(r^{-}\) and \(\eta^{-}\) and, for numerical efficiency, upper bounds \(\ell^{+}\), \(r^{+}\) and \(\eta^{+}\). The best geometry is then found in the rectangle defined as \[Y_{1}=\mathbb{R}^{3|\mathbb{V}|}\times[\ell^{-},\ell^{+}]^{\mathbb{A}}\times[r^{-},r^{+}]^{\mathbb{A}}\times[\eta^{-},\eta^{+}]^{\mathbb{A}}. \tag{10}\]

Figure 2: Change in apparent blood viscosity due to the Fahraeus-Lindqvist effect as approximated by Pries et al. [16].

Our NLP "_Power minimization_" finally reads: \[\min_{y_{1}\in Y_{1}}\sum_{a\in\mathbb{A}}m_{b}\pi\ell_{a}r_{a}^{2}+(8\eta_{a}/\pi)\,Q_{a}^{2}\ell_{a}/r_{a}^{4}\tag{11}\] s.t. \[0=x_{u}-\bar{x}_{u},\quad u\in\{0\}\cup\mathbb{L} \tag{12}\] \[0=\ell_{uv}^{2}-\|x_{u}-x_{v}\|^{2},\quad uv\in\mathbb{A} \tag{13}\] \[0=\eta_{a}-\eta(r_{a}),\quad a\in\mathbb{A} \tag{14}\] Eq. (12) fixes the positions of the root and terminal nodes, and Eq. (13) ensures consistency between nodal positions and segment lengths. The third constraint, Eq. (14), enforces the Fahraeus-Lindqvist effect as defined in Eq. (5).

#### 2.2.2 Enforcing equal pressure drop

In Murray's original formulation, no consideration was given to the resulting pressure values at terminal segments. This terminal pressure is a crucial parameter for the regulation of blood flow and blood velocity in the microcirculatory domains. Since we assume these domains are roughly homogeneous across the organ and have equal demand, the pressure should not differ significantly. Therefore, we enforce equal pressure at each terminal segment by adding the pressure \(p_{v}\) at each node as a new unknown in our (second) NLP with variables \(y_{2}=(x,\ell,r,\eta,p)\), leading to \[Y_{2}=\mathbb{R}^{4|\mathbb{V}|}\times[\ell^{-},\ell^{+}]^{\mathbb{A}}\times[r^{-},r^{+}]^{\mathbb{A}}\times[\eta^{-},\eta^{+}]^{\mathbb{A}}. \tag{15}\] Secondly, we constrain the pressure drop between the root and the terminal nodes to a prescribed constant value \(\Delta_{p}\). Since the viscous power at each segment \(a\) is directly proportional to the pressure drop by a factor of \(Q_{a}\), the total viscous power \(P_{\text{vis}}\) becomes a constant. Thus, we can remove it from the cost function. Finally, we can drop the constant factor \(m_{b}\), leading to a minimization goal proportional to the tree volume \(V_{\mathbb{T}}\). This formulation is used in most synthetic tree studies, e.g., [11, 12, 14, 15, 20]. Our NLP "_Volume minimization_" then reads: \[\min_{y_{2}\in Y_{2}}\sum_{a\in\mathbb{A}}\pi\ell_{a}r_{a}^{2}\tag{16}\] s.t. \[0=x_{u}-\bar{x}_{u},\quad u\in\{0\}\cup\mathbb{L} \tag{17}\] \[0=\ell_{uv}^{2}-\|x_{u}-x_{v}\|^{2},\quad uv\in\mathbb{A} \tag{18}\] \[0=p_{u}-p_{v}-(8\eta_{uv}/\pi)\,Q_{uv}\ell_{uv}/r_{uv}^{4},\quad uv\in\mathbb{A} \tag{19}\] \[0=p_{u},\quad u\in\mathbb{L} \tag{20}\] \[0=p_{0}-\Delta_{p}, \tag{21}\] \[0=\eta_{a}-\eta(r_{a}),\quad a\in\mathbb{A} \tag{22}\]

**Remark 1:** Murray's law, as stated in Eq. (1), is not incorporated explicitly. Instead, we assume it holds implicitly and compute the associated branching exponent \(\gamma\) using the Newton-Raphson method after the optimization is finished.

#### 2.2.3 Additional optimization variants

To better isolate the individual influence of different factors, we define additional variants of our two minimization problems. Firstly, we simplify both problems to a constant apparent viscosity \(\eta_{\text{const}}=3.6\,\mathrm{cP}\), removing \(\eta\) from the vector of optimization variables and dropping the corresponding constraints Eq. (14) and Eq. (22); a closed-form sketch of this constant-viscosity power-minimization optimum follows below.
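As forward-referenced above, the constant-viscosity power-minimization optimum can be written in closed form: setting the derivative of each segment's cost \(m_{b}\pi\ell_{a}r_{a}^{2}+(8\eta/\pi)\ell_{a}Q_{a}^{2}/r_{a}^{4}\) with respect to \(r_{a}\) to zero gives \(r_{a}^{6}=16\eta Q_{a}^{2}/(m_{b}\pi^{2})\), i.e. \(r_{a}\propto Q_{a}^{1/3}\), and Kirchhoff's law then implies Murray's law with \(\gamma=3\) exactly. A numerical sketch (the derivation, toy flows, and unit conversions are ours):

```python
import numpy as np

ETA = 3.6e-3    # constant apparent viscosity in Pa*s (3.6 cP)
M_B = 100.0     # metabolic demand: 0.1 uW/mm^3 = 100 W/m^3 (our conversion)

def optimal_radius(Q):
    # Stationary point of m_b*pi*l*r**2 + (8*eta/pi)*l*Q**2/r**4 in r.
    return (16.0 * ETA * Q**2 / (M_B * np.pi**2)) ** (1.0 / 6.0)

Q1, Q2 = 1.0e-8, 0.5e-8          # child flows in m^3/s (toy values)
Q0 = Q1 + Q2                     # Kirchhoff's law at the branch
r0, r1, r2 = (optimal_radius(Q) for Q in (Q0, Q1, Q2))
print(r1**3 + r2**3, r0**3)      # equal: Murray's law holds with gamma = 3
```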
Secondly, we investigate the influence of the metabolic demand \(m_{b}\) and the total pressure drop \(\Delta_{p}\). We consider values between \(0.1\,\mu\mathrm{W}\,\mathrm{mm}^{-3}\) and \(1.0\,\mu\mathrm{W}\,\mathrm{mm}^{-3}\) for the metabolic demand, but note that estimates of this parameter vary significantly [21]. For the total pressure drop, we set the terminal pressure to \(p_{\mathrm{term}}=6\,\mathrm{mmHg}\) and vary the root pressure between \(10\,\mathrm{mmHg}\) and \(14\,\mathrm{mmHg}\). Finally, for each variant, a separate tree is generated, where Murray's law (see Eq. (1)) is enforced directly with a single exponent \(\gamma_{\mathrm{opt}}\), included in the optimization variables. All variants include the same root flow \(Q_{\mathrm{perf}}=1.1\,\mathrm{l/min}\).

### Generation framework

We generate our synthetic trees using the framework introduced in [15], which we summarize in the following. First, we generate \(N_{\mathrm{topo}}\) terminal nodes on a regular cubic grid inside our organ's volume. The root position is manually set and connected to the geometric center of the volume, which in turn is connected to all terminal nodes. We swap segments to explore new topologies from this initial (fan-shaped) tree. A _swap_ detaches a segment from its parent and connects it with another segment. After each swap, the geometry is optimized by solving the corresponding NLP. The newly created topology is accepted on the basis of an SA approach. After topology optimization, we grow the tree using a modified CCO approach. Here, we optimize the global geometry each time after adding \(N_{\mathrm{geo}}\) new terminal nodes and then increase \(N_{\mathrm{geo}}\) heuristically based on the current density of the tree. Notably, we drop the local optimization of branching positions and set them to their flow-weighted mean, similar to [22]. Due to our repeated global geometry optimization, this simplification had no significant impact on the final tree structure. In the last step of the optimization, we delete all segments that reached the lower bound \(\ell^{-}\) (_degenerate segments_), possibly creating \(n\)-furcations (\(n\geq 3\)). We then classify the hierarchy throughout the finished tree by assigning each segment an order number corresponding to the Strahler ordering method [23]. Continuous segments of the same Strahler order correspond to one _vessel_. Additionally, we employ an ordering scheme based on [4] to allow direct comparison (in reverse order) to the _generation_ notation used for the vascular corrosion cast in [18].

**Remark 2:** The complete optimization framework is only applied once to obtain a common topology. We solve each NLP variant with this topology to get the corresponding global geometry. We choose this method to focus on the geometry changes and to allow a direct comparison of branching exponents and radii at the same branch types.
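The Strahler ordering used for this classification is a short recursion; a minimal sketch on a toy child-list tree (the data is ours):

```python
# Strahler order: leaves get order 1; a parent whose maximal child order k
# is shared by at least two children gets k + 1, otherwise it inherits k.
def strahler(children, node=0):
    kids = children.get(node, [])
    if not kids:
        return 1
    orders = sorted((strahler(children, c) for c in kids), reverse=True)
    return orders[0] + 1 if orders.count(orders[0]) > 1 else orders[0]

# Small bifurcating tree: node 0 -> (1, 2), node 1 -> (3, 4).
children = {0: [1, 2], 1: [3, 4]}
print(strahler(children))   # leaves 2, 3, 4 have order 1; node 1 and root get 2
```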
## 3 Results

### Overall structure of the synthetic portal vein

The full synthetic portal vein tree for the case of volume minimization with variable viscosity is depicted in Fig. 3. The left side shows the complete tree inside the non-convex liver domain with two zoom levels. The tree splits into four major branches, which further split into 8 main branches. These results align with previous results of a sparser tree in [15]. For a detailed comparison between the different optimization variants discussed in Section 2.2.3, we summarized the results in Table 1. Here, a column represents the results of a single variant, with the first four columns corresponding to the NLP "Power minimization" and the last four columns corresponding to the NLP "Volume minimization". For "Power minimization", we included the results for metabolic demands \(m_{b}\) of \(0.1\,\mu\mathrm{W}\,\mathrm{mm}^{-3}\) and \(1.0\,\mu\mathrm{W}\,\mathrm{mm}^{-3}\). Similarly, for "Volume minimization", we included the results for root pressures \(p_{\mathrm{root}}\) of \(10\,\mathrm{mmHg}\) and \(14\,\mathrm{mmHg}\). The last row indicates the results of each variant after enforcing a single (optimal) branching exponent \(\gamma_{\mathrm{opt}}\). For "Power minimization", an increase in metabolic demand \(m_{b}\) from \(0.1\,\mu\mathrm{W}\,\mathrm{mm}^{-3}\) to \(1.0\,\mu\mathrm{W}\,\mathrm{mm}^{-3}\) leads to an increase in viscous power \(P_{\mathrm{vis}}\), shown in the first row of Table 1, by around 364% and a reduction in volume \(V_{\mathrm{T}}\), shown in the second row of Table 1, by around 53%. Similarly, for "Volume minimization", an increase in the root pressure from \(10\,\mathrm{mmHg}\) to \(14\,\mathrm{mmHg}\) leads to a 320% increase in viscous power and a 51% reduction in volume. The Fahraeus-Lindqvist effect had a minor influence. It decreased the viscous power by around 1% and the total volume by around 4% in all cases.

### Vessel radii

The root radius \(r_{\mathrm{root}}\) is shown in the third row of Table 1. It decreased by 31% after the metabolic demand \(m_{b}\) was increased for "Power minimization". Similarly, in the case of "Volume minimization", it decreased by 30% after the root pressure \(p_{\mathrm{root}}\) was increased to \(14\,\mathrm{mmHg}\). In all cases, the Fahraeus-Lindqvist effect had a limited influence on the root radius, with changes of less than 0.1%. However, a significant decrease in radius can be observed for vessels between Strahler orders 1 and 6, shown in Fig. 4. In both NLP cases, this decrease was highest at the terminal vessels, with 2% for "Power minimization" (Fig. 4a) and 1.5% for "Volume minimization" (Fig. 4b). A notable difference between both cases is the variance of radii at each Strahler order. In the case of "Power minimization", the highest variance is observed at Strahler order 6, whereas the terminal radii are constant. In the case of "Volume minimization", the highest variance is at the terminal segments and decreases as the Strahler order increases.

### Pressure drop

After "Power minimization", the terminal pressures are not constant across the tree but exhibit a wide range of values, see Table 1, row 4, columns 1-4. This range widens further for higher metabolic demands and also increases the mean total pressure drop from the root to the terminal segments, shown in Fig. 5.

Figure 3: Complete synthetic vascular tree of the portal vein of a human liver with \(1,000,000\) terminal vessels (volume minimization with variable viscosity and \(p_{\mathrm{root}}=10\,\mathrm{mmHg}\)). Two zoom levels highlight the hierarchical structure at different scales. The radii are between \(5.1\,\mathrm{mm}\) (root vessel) and \(0.017\,\mathrm{mm}\) (smallest terminal vessel).

In Fig. 6a,
the pressure values at different Strahler orders are shown for \(m_{b}=0.1\,\mu\mathrm{W}\,\mathrm{mm}^{-3}\). Pressure values drop and variances increase with decreasing Strahler order. Including the Fahraeus-Lindqvist effect leads to slightly higher pressure values for the Strahler orders 1 to 6. The terminal pressures after "Volume minimization" are fixed to \(p_{\mathrm{term}}=6\,\mathrm{mmHg}\) as enforced by Eq. (19) - Eq. (21). The effect of these constraints is highlighted in Fig. 6(b) for root pressure \(p_{\mathrm{root}}=10\,\mathrm{mmHg}\). In contrast to "Power minimization", variances are significantly higher at the intermediate Strahler orders 3 to 11. Furthermore, the influence of the Fahraeus-Lindqvist effect is more pronounced, decreasing the pressure values between Strahler orders 2 and 10.

Figure 5: Density plot of the total pressure drop from root vessel to terminal vessels for the power minimization under different metabolic demands.

Figure 6: Influence of Fahraeus–Lindqvist effect on pressure values of the distal nodes of vessels for different Strahler orders.

### Branching behavior

The resulting branching exponents of all variants are summarized in row 5 of Table 1. For "Power minimization" with constant viscosity \(\eta_{\mathrm{const}}\), the exponents are constant with \(\gamma=3.0\) across all branches regardless of metabolic demand \(m_{b}\). In contrast, the inclusion of the Fahraeus-Lindqvist effect leads to deviations from 3.0, with branching exponents reaching a minimum of 2.9. For "Volume minimization", exponents are not constant even for constant blood viscosity. Instead, most values fall between 2.0 and 3.0, with the smallest outliers having values of 1.43.

\begin{table}
\begin{tabular}{l r r r r r r r r}
\hline \hline
 & \multicolumn{4}{c}{Power minimization} & \multicolumn{4}{c}{Volume minimization} \\
\cline{2-9}
 & \multicolumn{2}{c}{\(m_{b}=0.1\,\mu\mathrm{W}\,\mathrm{mm}^{-3}\)} & \multicolumn{2}{c}{\(m_{b}=1.0\,\mu\mathrm{W}\,\mathrm{mm}^{-3}\)} & \multicolumn{2}{c}{\(p_{\mathrm{root}}=10.0\,\mathrm{mmHg}\)} & \multicolumn{2}{c}{\(p_{\mathrm{root}}=14.0\,\mathrm{mmHg}\)} \\
\cline{2-9}
Parameter & \(\eta_{\mathrm{const}}\) & \(\eta(r)\) & \(\eta_{\mathrm{const}}\) & \(\eta(r)\) & \(\eta_{\mathrm{const}}\) & \(\eta(r)\) & \(\eta_{\mathrm{const}}\) & \(\eta(r)\) \\
\hline
\(P_{\mathrm{vis}}\) in mW & 2.67 & 2.65 & 12.34 & 12.23 & 3.33 & 3.33 & 13.99 & 13.99 \\
\(V_{\mathrm{T}}\) in mm\({}^{3}\) & 53,400.60 & 52,283.94 & 24,786.36 & 24,042.08 & 50,413.84 & 49,158.58 & 24,587.02 & 23,154.13 \\
\(r_{\mathrm{root}}\) in mm & 5.35 & 5.35 & 3.64 & 3.63 & 5.10 & 5.09 & 3.56 & 3.54 \\
\(p_{\mathrm{term}}\) in mmHg & [10.34, 11.68] & [10.41, 11.73] & [1.15, 10.67] & [1.74, 10.98] & 6.00 & 6.00 & 6.00 & 6.00 \\
\(\gamma\) & 3.00 & [2.90, 3.00] & 3.00 & [2.90, 3.00] & [1.75, 3.01] & [1.76, 3.17] & [1.43, 3.00] & [1.46, 3.08] \\
\(\gamma_{\mathrm{opt}}\) (constant) & 3.00* & 2.91* & 3.00* & 2.92* & 2.84* & 2.76* & 2.82* & 2.74* \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Comparison of the results between the different variants of our two minimization problems as introduced in Section 2.2.3. The branching exponents \(\gamma_{\mathrm{opt}}\) in the last row (marked with *) are the results of separate runs for each variant, where a single constant branching exponent was enforced.

Figure 4: Influence of Fahraeus–Lindqvist effect on vessel radii for different Strahler orders.
For a more detailed comparison, the probability density function of the branching exponents for both optimization cases is compared in Fig. 7. The influence of the Fahraeus-Lindqvist effect shifts most branching exponents from a constant 3.0 to 2.9 during "Power minimization" (Fig. 7(a)). During "Volume minimization" with constant blood viscosity, most exponents are at 3.0 (Fig. 7(b) in red) and are shifted to 2.9 when including the Fahraeus-Lindqvist effect (Fig. 7(b) in green). Fig. 8 highlights the distribution of mean branching exponents across different branch types. Each cell \((i,j)\) corresponds to a branch with child segments of Strahler order \(i\) and \(j\). In Fig. 8(a), the effect of variable blood viscosity on "Power minimization" is depicted. The branching exponents decrease if the Strahler order of either child decreases, leading to the smallest branching exponent of 2.9 at branches with two terminal segments. If both child segments have a Strahler order over 8, the mean branching exponent remains at the optimal value of 3.0. The effect of enforcing equal terminal pressure is shown in Fig. 8(b). Here, the higher the difference between the Strahler orders of both children, the smaller the branching exponent. Again, the smallest mean branching exponents are observed at branches connecting two terminal segments, with a value of 2.76. Fig. 8(c) shows the accumulated effect of both constraints with a minimum mean exponent of 2.7, again at terminal branches.

Figure 7: Effect of enforcing variable viscosity and equal pressure onto the branching exponents.

Figure 8: Mean values of branching exponent \(\gamma\) for different types of branches, e.g., the cell \((1,2)\) corresponds to branches where child vessels of Strahler order 1 and 2 meet. The results are symmetric. White cells correspond to branch types which do not occur in the tree topology. **(a)** Power minimization with variable viscosity, **(b)** Volume minimization with constant viscosity, **(c)** Volume minimization with variable viscosity.

### Comparison to vascular corrosion cast

We now directly compare our synthetic trees against a corrosion cast of the portal vein of the human liver [18]. In Fig. 9(a), the radii per generation (radius-adjusted Strahler order in reverse) are compared between the "Power minimization" with \(m_{b}=0.1\,\mu\mathrm{W}\,\mathrm{mm}^{-3}\), the volume minimization with \(p_{\mathrm{root}}=10\,\mathrm{mmHg}\), and the corrosion cast data, including measurements and a best-fit trend line based on the least sum of square errors. The synthetic trees' radii fit the data and trend line well for generations 5 to 15. Notably, they deviate significantly for the first four generations, especially against the measurements, with errors of around 25%. In contrast, the number of vessels in Fig. 9(b) of both synthetic trees fits the data of the corrosion cast well for all generations.
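A trend line of the kind used above can be obtained by ordinary least squares on log-radii, since radii shrink roughly geometrically with generation; a sketch on synthetic toy data (not the corrosion-cast measurements):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy radii decaying geometrically with generation, with multiplicative noise.
gen = np.arange(1, 16)
radii = 5.0 * 0.72 ** (gen - 1) * np.exp(0.05 * rng.normal(size=gen.size))

# Fit log(r) = a + b * gen by ordinary least squares; exp(b) is the
# per-generation radius ratio of the trend line.
b, a = np.polyfit(gen, np.log(radii), 1)
print(f"fitted radius ratio per generation: {np.exp(b):.3f}")   # ~0.72
```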
Power minimization with a metabolic factor of \(m_{b}=0.1\,\mathrm{\mu W\,mm^{-3}}\) leads to more realistic radii than a value of \(1.0\,\mathrm{\mu W\,mm^{-3}}\), which is in line with estimated values of around \(0.3\) to \(0.4\,\mathrm{\mu W\,mm^{-3}}\) for venous trees [21, 24]. Similarly, a root pressure of \(p_{\mathrm{root}}=14\,\mathrm{mmHg}\) results in a pressure drop of \(\Delta p=8\,\mathrm{mmHg}\), a value associated with portal hypertension. Such a high pressure drop leads to unrealistically small radii in comparison to the more realistic root pressure of \(p_{\mathrm{root}}=10\,\mathrm{mmHg}\).

Under power minimization, the radii are at their individual local optima, i.e., the radius of each segment can be solved for independently as the minimizer of the sum of metabolic demand and viscous power dissipation. This observation is in line with the findings of Murray and explains the constant branching exponent of 3 in Table 1. The variations in radii for Strahler orders 2 to 12 in Fig. 4(a) are based entirely on the branching type, which is completely defined by the flow values of the child segments.

For volume minimization, no such simplification can be made, as the constraint of equal pressure creates dependencies between segments on the same path to the root. This constraint also forces radii to deviate from their local optima, which means a deviation from the branching exponent \(\gamma=3.0\). The highest deviations are found at branches between two terminal segments, because these can be adjusted to a given pressure drop without significantly increasing the tree's volume. Given the same length, they also produce a higher pressure drop than segments with larger radii.

The inclusion of the Fahraeus-Lindqvist effect reduces the viscosity of the smaller vessels and, in turn, increases their pressure drop. The corresponding branching exponents also deviate from \(\gamma=3.0\), as can be observed in Fig. 8. In contrast, the effect on bigger vessels is negligible and results in constant exponents \(\gamma=3.0\), as seen in Fig. 8(a), for branches where both children have Strahler orders over 8.

Figure 9: Comparison of synthetic trees (with variable viscosity) against corrosion cast data [18]

While the generated trees generally fit the corrosion cast data well, the underestimation of the largest radii (generations 1 to 4) is significant. Since the vessels of the portal venous tree are more elliptical than circular, the radii of the vascular corrosion cast were estimated based on their cross-sectional area. The synthetic trees' radii might correspond better with estimates based on other criteria, such as a maximal inscribed circle. Another explanation is that the total energy dissipation is likely higher than our models predict. This underestimation could be due to ignoring the effect of turbulent flow [8] and the simplified geometric modelling of vessel branching. The effect of pulsatile flow, however, can be neglected, because the blood does not come directly from the heart but first flows through the digestive tract, leaving only a limited amount of pulsatility inside the portal venous tree.

## 5 Conclusion

Our optimization framework can handle complex constraints and goal functions while generating synthetic trees up to, but not including, the capillary level of the microcirculation. We used our framework to investigate the local branching behavior for different constraints and goal functions.
Branching exponents automatically lie in the experimentally predicted range between 2.0 and 3.0. Even small changes to Murray's original optimization problem, like the inclusion of variable blood viscosity, significantly affect the optimal branching exponents of vessels. The topology and geometry of our synthetic trees closely follow the vascular corrosion cast of a human portal venous tree, with significant deviations only in the largest vessels. Without enforcing any pressure constraint, terminal pressures vary by up to 1.0 mmHg, leading to highly heterogeneous boundary conditions for the microcirculation. In contrast, enforcing equal terminal pressure in its current form leads to pressure variations up to 2.0 mmHg in the intermediate vessels of the mesocirculation. Both these results need to be critically evaluated and compared against measurements of real vascular trees. One approach to improve the pressure constraint could be to prescribe a certain physical range using inequality constraints rather than a fixed value. In the future, we plan to include pulsatile and turbulent flow effects in a closed form, improving our framework's computational ability. Furthermore, shear stress, a critical parameter for vascular growth, must also be incorporated into the model. A more mature version of this model, specifically the ability to predict optimal branching exponents under different constraints, could have many potential applications in the medical field. An example would be to predict and relate the branching behavior across the scales to vascular diseases. These predictions could improve the interpretation of medical images by giving valuable input to the functional assessment of organs. ## Acknowledgment The results presented in this paper were obtained as part of the ERC Starting Grant project "ImageToSim" that has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant agreement No. 759001). The authors gratefully acknowledge this support.
2307.08999
Oracle Efficient Online Multicalibration and Omniprediction
A recent line of work has shown a surprising connection between multicalibration, a multi-group fairness notion, and omniprediction, a learning paradigm that provides simultaneous loss minimization guarantees for a large family of loss functions. Prior work studies omniprediction in the batch setting. We initiate the study of omniprediction in the online adversarial setting. Although there exist algorithms for obtaining notions of multicalibration in the online adversarial setting, unlike batch algorithms, they work only for small finite classes of benchmark functions $F$, because they require enumerating every function $f \in F$ at every round. In contrast, omniprediction is most interesting for learning theoretic hypothesis classes $F$, which are generally continuously large. We develop a new online multicalibration algorithm that is well defined for infinite benchmark classes $F$, and is oracle efficient (i.e. for any class $F$, the algorithm has the form of an efficient reduction to a no-regret learning algorithm for $F$). The result is the first efficient online omnipredictor -- an oracle efficient prediction algorithm that can be used to simultaneously obtain no regret guarantees to all Lipschitz convex loss functions. For the class $F$ of linear functions, we show how to make our algorithm efficient in the worst case. Also, we show upper and lower bounds on the extent to which our rates can be improved: our oracle efficient algorithm actually promises a stronger guarantee called swap-omniprediction, and we prove a lower bound showing that obtaining $O(\sqrt{T})$ bounds for swap-omniprediction is impossible in the online setting. On the other hand, we give a (non-oracle efficient) algorithm which can obtain the optimal $O(\sqrt{T})$ omniprediction bounds without going through multicalibration, giving an information theoretic separation between these two solution concepts.
Sumegha Garg, Christopher Jung, Omer Reingold, Aaron Roth
2023-07-18T06:34:32Z
http://arxiv.org/abs/2307.08999v1
# Oracle Efficient Online Multicalibration and Omniprediction ###### Abstract A recent line of work has shown a surprising connection between multicalibration, a multi-group fairness notion, and omniprediction, a learning paradigm that provides simultaneous loss minimization guarantees for a large family of loss functions [GKR\({}^{+}\)22, GHK\({}^{+}\)23, GKR23, GHHK\({}^{+}\)23]. Prior work studies omniprediction in the batch setting. We initiate the study of omniprediction in the online adversarial setting. Although there exist algorithms for obtaining notions of multicalibration in the online adversarial setting [GJN\({}^{+}\)22], unlike batch algorithms, they work only for small finite classes of benchmark functions \(\mathcal{F}\), because they require enumerating every function \(f\in\mathcal{F}\) at every round. In contrast, omniprediction is most interesting for learning theoretic _hypothesis classes_\(\mathcal{F}\), which are generally continuously (or at least exponentially) large. We develop a new online multicalibration algorithm that is well defined for infinite benchmark classes \(\mathcal{F}\) (e.g. the set of all linear functions), and is oracle efficient -- i.e. for any class \(\mathcal{F}\), the algorithm has the form of an efficient reduction to a no-regret learning algorithm for \(\mathcal{F}\). The result is the first efficient online omnipredictor -- an oracle efficient prediction algorithm that can be used to simultaneously obtain no regret guarantees to all Lipschitz convex loss functions. For the class \(\mathcal{F}\) of linear functions, we show how to make our algorithm efficient in the worst case (i.e. the "oracle" that we need is itself efficient even in the worst case). We show how our results extend beyond mean multicalibration to quantile multicalibration, with applications to oracle efficient multivalid conformal prediction. Finally, we show upper and lower bounds on the extent to which our rates can be improved: our oracle efficient algorithm actually promises a stronger guarantee called "swap-omniprediction", and we prove a lower bound showing that obtaining \(O(\sqrt{T})\) bounds for swap-omniprediction is impossible in the online setting. On the other hand, we give a (non-oracle efficient) algorithm which can obtain the optimal \(O(\sqrt{T})\) omniprediction bounds without going through multicalibration, giving an information theoretic separation between these two solution concepts. We leave the problem of obtaining \(O(\sqrt{T})\) omniprediction bounds in an oracle efficient manner as our main open problem. 
###### Contents

* 1 Introduction
  * 1.1 Our Results and Techniques
  * 1.2 Additional Related Work
* 2 Preliminaries
  * 2.1 Notation
  * 2.2 Setting
  * 2.3 Omniprediction
  * 2.4 Multicalibration
* 3 Oracle-efficient \(L_{2}\)-Swap-Multicalibration
  * 3.1 Online Regression Oracles and Modifications
  * 3.2 Contextual Swap Regret
  * 3.3 From Contextual Swap Regret to \(L_{2}\)-Swap-Multicalibration
* 4 Online (Swap-)Multicalibration to Online (Swap-)Omniprediction
* 5 Tightness of Our Results: A Separation Between Swap Omniprediction and Omniprediction
  * 5.1 An \(O(\sqrt{T})\) Upper Bound for Omniprediction
  * 5.2 A Lower Bound for Swap Omniprediction
* 6 An Extension: Online Oracle-efficient Quantile Multicalibration and Multivalid Conformal Prediction
  * 6.1 Sketch: Minimizing \(L_{2}\)-Swap-Quantile-Multivalidity Error
* 7 Discussion and Conclusion
* 8 Acknowledgements
* A Missing Details from Section 2
* B Missing Details from Section 3
* C Missing Details from Section 4
* D Concentration Lemmas
  * D.1 Mean Case
  * D.2 Quantile Case
* E An Extension: Online Oracle-efficient Quantile Multicalibration and Multivalid Conformal Prediction
  * E.1 Connection between Multivalidity and Quantile Errors
  * E.2 Final Bounds
  * E.3 Conformal Learner's Multivalidity Error Against Linear Functions
* F Better Online \(L_{2}\)-multicalibration for Finite \(\mathcal{F}\)

## 1 Introduction

**Omniprediction.** An omnipredictor, informally, is a prediction algorithm that predicts a sufficient statistic for optimizing a wide range of loss functions in a way that is competitive with some benchmark class of models \(\mathcal{F}\). Gopalan et al. introduced the concept of omnipredictors in [10] and showed that a regression model that is appropriately _multicalibrated_ with respect to some benchmark class of models \(\mathcal{F}\) is an omnipredictor with respect to all Lipschitz convex loss functions and the benchmark class \(\mathcal{F}\). In other words, for any Lipschitz convex loss function \(\ell\), the regression function can be efficiently and locally post-processed (in a way specific to \(\ell\)) to obtain loss that is competitive with the best model \(f\in\mathcal{F}\) for that loss function. Here "locally" means that the post-processing depends only on the prediction made for a particular point \(x\), independently of the rest of the model, and so can be done much more efficiently than training a new model specifically for the loss function \(\ell\). They also gave an oracle efficient algorithm for training such a regression function to satisfy the requisite notion of multicalibration in the batch setting that efficiently reduced to agnostic learning over the class of models \(\mathcal{F}\). Again in the batch setting, Globus-Harris et al. [11] gave a simplified algorithm that efficiently reduced to squared error regression over \(\mathcal{F}\).

**Multicalibration.** Informally, multicalibration, as introduced by [13], is a requirement on a regression function that it be statistically unbiased conditional both on its own prediction and on membership in any one of a large collection of intersecting subsets of the data space. The notion of a "subset" was generalized to a class of real-valued functions \(\mathcal{F}\) by [10, 10], which yields the following requirement.
A model \(m:\mathcal{X}\to\mathbb{R}\) is (exactly) multicalibrated on a distribution \(\mathcal{D}\) over labelled examples \(\mathcal{X}\times\{0,1\}\) with respect to a class of real-valued functions \(\mathcal{F}\) if for every \(v\) in the range of \(m\), and for every \(f\in\mathcal{F}\):

\[\mathop{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[f(x)(y-v)|m(x)=v]=0\]

There are a variety of ways to define approximate multicalibration that we discuss in Section 2. In the batch setting, obtaining approximate multicalibration with respect to \(\mathcal{F}\) is reducible to solving learning problems over \(\mathcal{F}\) -- either classification problems [13, 10] or squared error regression problems [11]. Gupta et al. [10] and subsequent work [14, 15] have given algorithms to obtain multicalibration in the online adversarial setting, in which there is no data distribution, and the goal is to make predictions \(m_{t}(x_{t})\) at each round \(t\) such that the resulting predictions are multicalibrated in hindsight on the empirical distribution over the realized examples \((x_{t},y_{t})\), which can be chosen adaptively by an adversary. But compared to their batch analogues, these online algorithms have a major deficiency that is especially relevant to the omniprediction application: they are defined only for finite classes \(\mathcal{F}\) and have per-round running time that is linear in \(|\mathcal{F}|\). That is, unlike batch algorithms for multicalibration, they are not reductions to learning problems over \(\mathcal{F}\), and hence are only efficient for small finite classes \(\mathcal{F}\). In the context of omniprediction, \(\mathcal{F}\) is a benchmark class of models (e.g. linear functions or more complex models), and so is typically continuously large (and any reasonable discretization would be exponentially large). These prior algorithms also promise approximate multicalibration in the \(L_{\infty}\) metric (rather than in the \(L_{1}\) metric, in which multicalibration error bounds translate to omniprediction bounds), and so prior work does not give online omniprediction at optimal rates. Thus prior work on online multicalibration has at best limited implications for online omniprediction.

### Our Results and Techniques

**Oracle Efficient Online Multicalibration and Omniprediction.** Our main contribution is to define the problem of online omniprediction, and to give an oracle efficient algorithm that satisfies it. Informally speaking, an online omnipredictor is an algorithm that ingests a sequence of adversarially chosen contexts \(x_{t}\), and makes a sequence of forecasts \(\hat{p}_{t}\) about the unknown binary label \(y_{t}\). This single stream of forecasts \(\hat{p}_{t}\) will be simultaneously used by a large collection of learners who are each concerned with optimizing a different loss function \(\ell\). For each \(\ell\), the \(\ell\)-learner will transform the prediction \(\hat{p}_{t}\) into an action \(a_{t}^{\ell}=k^{\ell}(\hat{p}_{t})\) via some one-dimensional post-processing function \(k^{\ell}:[0,1]\to\mathbb{R}\). The \(\ell\)-learner then experiences loss \(\ell(y_{t},k^{\ell}(\hat{p}_{t}))\).
The forecasting algorithm is an omnipredictor with respect to the class of loss functions \(\ell\) and a benchmark class \(\mathcal{F}\) if, simultaneously, all of the \(\ell\)-learners have diminishing regret to the best model in \(\mathcal{F}\) (which might be different for each \(\ell\)-learner):

\[\sum_{t=1}^{T}\ell(y_{t},k^{\ell}(\hat{p}_{t}))\leq\min_{f\in\mathcal{F}}\sum_{t=1}^{T}\ell(y_{t},f(x_{t}))+o(T)\]

To solve this problem, we give a new online multicalibration algorithm that is oracle efficient -- it is an efficient reduction to the problem of no-regret learning over \(\mathcal{F}\) with respect to the squared error loss. Our algorithm relies on a characterization of multicalibration recently shown in the batch setting independently by Globus-Harris et al. [1] and Gopalan et al. [1]. Informally, the characterization states that a model \(m\) is multicalibrated with respect to \(\mathcal{F}\) if and only if it satisfies the following "swap-regret" like condition with respect to \(\mathcal{F}\) for all \(v\) in the range of \(m\)1:

Footnote 1: The connection holds as stated only if \(\mathcal{F}\) is closed under affine transformations -- but this is the case for many natural classes \(\mathcal{F}\) like linear and polynomial functions, regression trees, etc.

\[\operatorname*{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[(m(x)-y)^{2}|m(x)=v]\leq\min_{f\in\mathcal{F}}\operatorname*{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[(f(x)-y)^{2}|m(x)=v]\]

We generalize this characterization to the sequential prediction setting, and use it to give an algorithm for sequential multicalibration via reduction to the standard no-regret learning problem of obtaining no (external) regret with respect to the squared error loss to the best model in \(\mathcal{F}\). That is to say, whenever we have an efficient algorithm for solving online squared error regression over \(\mathcal{F}\), we also have an efficient algorithm to obtain online multicalibration with respect to \(\mathcal{F}\). Efficient algorithms for online squared error regression exist for the class of linear functions [12, 13, 1], and online gradient descent empirically solves this problem quite well over parametric families of models even when the problem is hard in the worst case [1, 14]. Our algorithmic reduction is based on a tight reduction from external to swap regret recently given by [10], which is itself based on an earlier reduction of Blum and Mansour [1]. When instantiated with the online regression algorithm for linear functions of [1], our algorithm efficiently obtains \(L_{2}\)-multicalibration at a rate of \(O(T^{-1/4})\) and is an online omnipredictor for the class of all Lipschitz convex loss functions with a regret bound of \(O(T^{-1/8})\).

**Theorem 1.1** (Informal, See Corollary 3.1 and 4.1).: _Given an online regression oracle for \(\mathcal{F}\), we have an efficient algorithm that achieves sublinear multicalibration error with respect to \(\mathcal{F}\) and omniprediction with respect to all Lipschitz convex loss functions, against any adaptive adversary._

**Theorem 1.2** (Informal, See Corollary 3.2 and 4.2).: _For the set of linear functions \(\mathcal{F}_{lin}\) with bounded norm, we have an efficient algorithm that achieves \(O(T^{-1/4})\) multicalibration error and \(O(T^{-1/8})\)-omniprediction with respect to all Lipschitz convex loss functions, against any adaptive adversary._

**Separations and Lower Bounds.** To what extent can we hope for online omnipredictors with better rates? Is \(O(T^{-1/2})\) possible?
We give several answers. First, by combining a recent characterization of 1-dimensional proper scoring rules for binary outcomes [10] with the "Adversary Moves First" multi-objective no-regret framework introduced by [10], we give a (non-oracle-efficient) algorithm for obtaining online omniprediction at a rate of \(O(T^{-1/2})\) for finite boolean classes \(\mathcal{F}\). Second, we note that our oracle efficient online multicalibration algorithm actually obtains the stronger guarantee of _swap_-multicalibration--which implies the correspondingly stronger guarantee of swap omniprediction (see Section 2 for definitions). Using a lower bound of [13] on obtainable online calibration bounds in the \(L_{1}\) metric, we show that no algorithm (whether or not it is oracle efficient) can obtain \(O(T^{-1/2})\) rates for swap omniprediction, which results in an information-theoretic separation between the best obtainable rates for omniprediction and swap-omniprediction in the online setting. Whether or not there exist _oracle efficient_ algorithms that can obtain omniprediction at the optimal \(O(T^{-1/2})\) rate is our main open problem.

**Theorem 1.3** (Informal, See Corollary 5.3).: _There exists an adaptive adversary such that against any forecaster, the swap-omniprediction rate is \(\Omega(T^{-0.472})\) in expectation._

**Theorem 1.4** (Informal, See Corollary 5.2 and 5.3).: _For any finite family of boolean predictors \(\mathcal{F}:\mathcal{X}\rightarrow\{0,1\}\), we have an algorithm that achieves omniprediction at a rate of \(O(T^{-1/2})\) against all bounded bi-monotone loss functions._

**Extension to Quantile Multicalibration and Multivalid Conformal Prediction.** Finally, we observe that our core techniques are not specific to (mean) multicalibration and online squared error regression. In Section 6 and Appendix E, we show how to extend our oracle efficient algorithm for online mean multicalibration to an oracle efficient algorithm for online _quantile_ multicalibration [12, 13]. Just as our algorithm for mean multicalibration over \(\mathcal{F}\) reduces to an online regression oracle for the squared error loss over \(\mathcal{F}\), it is possible to derive an algorithm for online quantile multicalibration via reduction to an online regression oracle for pinball loss over \(\mathcal{F}\). An application of oracle efficient quantile multicalibration is an oracle efficient online multivalid conformal prediction algorithm. Multivalid conformal prediction, introduced and studied in [12, 13], gives a method for attaching prediction sets to arbitrary black-box prediction models that cover the true labels with some target probability (say 95%). The coverage guarantees must hold not just marginally, but also conditionally on membership in an arbitrary collection of intersecting groups specified by functions \(\mathcal{F}\). [13] gave a simple algorithm for obtaining multivalid coverage in the online setting, with per-round running time scaling linearly with \(|\mathcal{F}|\); we give the first oracle efficient algorithm for multivalid conformal prediction.

### Additional Related Work

The modern framing of multicalibration was introduced by Hebert-Johnson et al. [10]. Similar ideas (without the concern for computational efficiency) date back to Dawid [14].
Dawid proposed a notion of computable calibration as a foundation for empirical probability that required sequential predictions to be calibrated not just overall, but also on every computable subsequence; he showed that any two methods for producing an infinite sequence of computably calibrated forecasts must agree with each other on all but a finite subsequence. The existence of an algorithm capable of producing calibrated forecasts in an online adversarial setting was first established by Foster and Vohra [15], which started the large literature in economics on "expert testing"--see [12] for a survey. We highlight several results from this literature. Lehrer shows the existence of an online forecasting rule that can guarantee calibration on all computable subsequences defined independently of the forecaster's predictions (which is the analogue of _multi-accuracy_, as defined in [10] and [11]). Sandroni et al. [16] extend this result to show the existence of an online forecasting rule that can guarantee calibration on all computable subsequences, even those that can depend on the forecaster's predictions, which allows it to capture multicalibration. Sandroni gave a very general result showing that _any_ test of a sequential forecaster (not just multicalibration style tests) that could be passed on every distribution by correct probability forecasts could also be passed in a sequential adversarial setting [17]. All of these results are proven using minimax theorems, and hence are not constructive or computational. In fact, Fortnow and Vohra show that (subject to cryptographic assumptions) there do exist tests that can be passed by nature but cannot be passed in an adversarial setting by any polynomial time forecaster [15].

Foster and Kakade [14] give the first constructive multicalibration style algorithm that we are aware of: they give an algorithm that can produce "smoothly"2 multicalibrated predictions in a sequential setting against an adversary with respect to some benchmark class \(\mathcal{F}\), with both running time and calibration error scaling polynomially with \(|\mathcal{F}|\). Gupta et al. [12] give an online algorithm for obtaining multicalibration in the sequential prediction setting for a collection of benchmarks \(\mathcal{F}\) that has calibration error scaling only logarithmically with \(|\mathcal{F}|\) -- but still has running time scaling linearly with \(|\mathcal{F}|\). They also give online sequential algorithms for variants of multicalibration generalized from means to moments (c.f. [12]) and quantiles (c.f. [13, 14]). We remark that obtaining multicalibration in the online adversarial setting is a strictly harder problem than obtaining multicalibration in the batch setting: online multicalibration algorithms can be generically converted into batch multicalibration algorithms using e.g. the online-to-batch reduction presented in [13], but the reverse is not true. Kleinberg et al. [15] recently studied "U-Calibration", which can be viewed as a 0-dimensional analogue of omniprediction in which predictions must be made without any available features \(x\) -- and hence need only to compete with constant benchmark functions. They gave algorithms that obtain \(O(\sqrt{T})\) rates in the online setting; we use one of their structural characterizations of proper scoring rules to give (non-oracle-efficient) \(O(\sqrt{T})\) rates for omniprediction.

Several recent papers [16, 17] have found surprising applications of multicalibration. In particular, Gopalan et al.
[16] defined omniprediction and showed that approximately multicalibrated predictors are omnipredictors, and Kim et al. [18] showed that approximate multicalibration with respect to a class of functions \(\mathcal{F}\) encoding likelihood ratios between distributions implies a kind of out-of-distribution generalization. Gopalan et al. [16] gave an algorithm based on agnostic boosting for classification for obtaining a sufficiently strong notion of multicalibration in the batch setting via reduction to agnostic learning, and Globus-Harris et al. [19] gave a characterization of this form of multicalibration and its connection to squared error accuracy guarantees in terms of boosting for regression, and gave an algorithm (in the batch setting) via reduction to squared error regression. Gopalan, Kim, and Reingold [16] defined stronger notions of multicalibration and omniprediction called _swap multicalibration_ and _swap omniprediction_, and showed an equivalence between them. We make use of a connection proved in both [19] and [16] connecting multicalibration to a contextual notion of (squared error) swap regret. Multicalibration is concerned with mean estimations: analogous notions of multicalibration for other distributional properties have been studied as well: Jung et al. [12] define and give algorithms for moment multicalibration, and [13, 14, 15] define and give algorithms for quantile multi-calibration with applications to conformal prediction. Recently Noarov and Roth [17] gave a complete characterization of which distributional properties (multi)calibrated predictors can be learned for, and which they cannot: Multicalibration is possible for a property if and only if the property is the minimizer of some regression loss function. We use this characterization to give oracle efficient algorithms for online quantile multicalibration, by reduction to no regret algorithms for the corresponding regression loss. We point the reader to [19] for an introductory treatment of much of this work. Our focus is on oracle efficient algorithms for an online learning problem by reduction to squared-error online regression algorithms; there has been a similar recent focus in the contextual bandits literature [1, 18] based on the observation that online regression is often a practically solvable problem (even in settings in which it is hard in the worst case). A more ambitious goal would be to reduce to batch learning oracles -- but it is not yet understood when it is possible to reduce from online learning to batch learning in an oracle efficient way even to obtain standard external regret guarantees. It is known that this is possible under certain restrictive structural conditions -- e.g. when the benchmark class has a small "separator set" [17, 18] -- but it is also known that it is not possible to do this in general, even when the benchmark class is online learnable [19]. By reducing to online learning oracles (as [1] and [18] do), we circumvent this difficulty. 
There are various notions of regret in the online learning literature that optimize for several objectives simultaneously: for example, adaptive regret [1, 2] seeks to simultaneously minimize regret over all contiguous subsequences; regret in the sleeping experts setting seeks to minimize regret to each expert on the subsequence of rounds for which it is "active" [14, 15, 16]; internal and swap regret seek to minimize regret on the subsequences on which the learner played each action [15, 16]; multigroup regret seeks to minimize regret on subsequences corresponding to datapoints in particular groups [1, 2]; and all of these notions can be subsumed into general notions of subsequence regret, which seek to minimize regret over an arbitrary collection of subsequences that can be defined as a function of the predictions made by the algorithm [13, 17]. All of these notions seek to minimize the same loss function on different (sub)sequences of examples: in contrast, our goal is to simultaneously minimize _different_ loss functions on the same sequence of examples. We make use of techniques developed in the context of swap regret [1, 2].

## 2 Preliminaries

### 2.1 Notation

Let \(\mathcal{X}\) denote the feature domain and \(\mathcal{Y}\) the label domain: we focus on the binary label setting \(\mathcal{Y}=\{0,1\}\). Let \(f:\mathcal{X}\to\mathbb{R}\) denote some predictor and \(\mathcal{F}\) a family of predictors. We write

\[\mathcal{F}_{B}=\{f\in\mathcal{F}:f(x)^{2}\leq B\quad\forall x\in\mathcal{X}\}\]

to denote the set of predictors in \(\mathcal{F}\) whose squared value is at most \(B\). Let \(\ell:\mathcal{Y}\times\mathbb{R}\to\mathbb{R}^{\geq 0}\) be a loss function that takes in the true label \(y\in\mathcal{Y}\) and an action \(\hat{y}\in\mathbb{R}\), and write \(\mathcal{L}\) to denote a family of loss functions. We write \(\text{Ber}(p)\) to denote the Bernoulli distribution with parameter \(p\). Given a positive integer \(T\in\mathbb{N}^{>0}\), we write \([T]\) to denote \(\{1,\dots,T\}\). We overload the notation and write \([\frac{1}{m}]\) for some positive integer \(m\in\mathbb{N}^{>0}\) to denote a discretization of the unit interval: \(\{0,\frac{1}{m},\dots,\frac{m-1}{m},1\}\). We write \(\Delta A\) to denote the probability simplex over the elements in \(A\): for example, we write \(\Delta[m]=\{q\in[0,1]^{m}:\sum_{j=1}^{m}q_{j}=1\}\) to denote the simplex over \([m]\). Given a vector or a list \(v\), we write \(v[i]\) to denote the \(i\)th coordinate or element of \(v\). We use a bold variable to refer to a random variable and a non-bold variable to refer to its realization: for example, we sample a realization \(x\) from its random variable \(\mathbf{x}\).

### 2.2 Setting

Fix a class of loss functions \(\mathcal{L}\). For each loss function \(\ell\in\mathcal{L}\), we have a corresponding \(\ell\)-learner who is equipped with a post-processing function \(k^{\ell}(\hat{p})\) that determines which action the \(\ell\)-learner will choose given a forecast \(\hat{p}\in\mathcal{P}\), where \(\mathcal{P}=[\frac{1}{m}]\) denotes the finite set of possible forecasts.
**Definition 2.1** (Post-Processing).: _Given some loss function \(\ell\), a post-processing function \(k^{\ell}:\mathcal{P}\to[0,1]\) chooses the optimal action \(a\) according to the belief that the distribution over \(y\) is \(\text{Ber}(\hat{p})\):_

\[k^{\ell}(\hat{p})=\operatorname*{argmin}_{a\in[0,1]}\operatorname*{\mathbb{E}}_{y\sim\text{Ber}(\hat{p})}[\ell(y,a)]=\operatorname*{argmin}_{a}\left(1-\hat{p}\right)\ell(0,a)+\hat{p}\ell(1,a).\]

Online prediction proceeds in rounds, which we index by \(t\), over a finite horizon \(T\). In each round \(t\in[T]\), the interaction between the forecaster, the adversary, and the learners proceeds as follows:

1. The adversary chooses the feature vector \(x_{t}\in\mathcal{X}\) and the probability \(p_{t}\) with which the label \(y_{t}\) is chosen to be positive: i.e. \(y_{t}\sim\text{Ber}(p_{t})\).
2. Upon observing the feature vector \(x_{t}\), the forecaster makes a forecast \(\hat{p}_{t}\in\mathcal{P}\) _randomly_. We sometimes refer to \(\mathcal{P}\) as the level sets of the forecaster.
3. For each loss \(\ell\in\mathcal{L}\), the \(\ell\)-learner uses the post-processing function \(k^{\ell}:\mathcal{P}\to\mathbb{R}\) to choose its action \(a_{t}^{\ell}:=k^{\ell}(\hat{p}_{t})\) and suffers loss \(\ell(y_{t},k^{\ell}(\hat{p}_{t}))\).
4. The realized label \(y_{t}\) is revealed to the forecaster.

The forecaster \(\mathscr{F}\)'s interaction with the adversary from round \(1\) to \(T\) results in a _history_ \(\psi_{1:T}=\{(x_{t},y_{t},\hat{p}_{t})\}_{t=1}^{T}\). We write \(\psi_{1:t-1}\circ(x_{t},y_{t},\hat{p}_{t})\) to denote a concatenation of \((x_{t},y_{t},\hat{p}_{t})\) to the previous history \(\psi_{1:t-1}\). We denote the domain of histories between the forecaster and the adversary as \(\Psi^{*}\). In order to allow the forecaster to keep track of some internal state from round to round, we write \(\theta_{t}\) to denote its additional internal state at round \(t\). We write \(\Theta\) to denote the domain of the forecaster's internal state. And we write the forecaster's internal _transcript_ as \(\pi_{1:T}=\{(x_{t},y_{t},\hat{p}_{t},\theta_{t})\}_{t=1}^{T}\) and its domain as \(\Pi^{*}\). A forecaster \(\mathscr{F}:\Pi^{*}\times\mathcal{X}\to\Delta\mathcal{P}\) is a mapping from an internal transcript (which records the internal states) and a new feature vector to a distribution over possible forecasts. Similarly, an adversary \(\operatorname{Adv}:\Psi^{*}\to(\mathcal{X}\times\Delta\mathcal{Y})\) is a mapping from a history to a feature vector and a distribution over \(\mathcal{Y}\).

For any \(p\in\mathcal{P}\) and \(y\in\mathcal{Y}\), we write

\[S(\pi_{1:T},p)=\{t\in[T]:\hat{p}_{t}=p\}\]
\[S(\pi_{1:T},p,y)=\{t\in[T]:\hat{p}_{t}=p,\,y_{t}=y\}\]

to denote all the days on which the forecast was \(p\in\mathcal{P}\), and all the days on which the forecast was \(p\) and the realized label was \(y\in\mathcal{Y}\). When the transcript \(\pi_{1:T}\) is clear from the context, we write \(S(p)\) and \(S(p,y)\).
Similarly, we write

\[n(\pi_{1:T},p)=|S(\pi_{1:T},p)|\qquad\text{the number of rounds on which the forecast was }p,\]
\[\mu(\pi_{1:T},p)=\frac{1}{|S(\pi_{1:T},p)|}\sum_{t\in S(\pi_{1:T},p)}y_{t}\qquad\text{the empirical average of the }y\text{'s over }S(p),\]
\[\bar{f}(\pi_{1:T},p)=\frac{1}{|S(\pi_{1:T},p)|}\sum_{t\in S(\pi_{1:T},p)}f(x_{t})\qquad\text{the empirical average of the }f(x)\text{'s over }S(p),\]
\[\bar{f}(\pi_{1:T},p,y)=\frac{1}{|S(\pi_{1:T},p,y)|}\sum_{t\in S(\pi_{1:T},p,y)}f(x_{t})\qquad\text{the empirical average of the }f(x)\text{'s over }S(p,y).\]

As before, when the transcript is clear from the context, we write \(n(p),\mu(p),\bar{f}(p)\), and \(\bar{f}(p,y)\).

### 2.3 Omniprediction

If the forecast \(\hat{p}_{t}\) exactly matched the true underlying \(p_{t}\) in each round \(t\in[T]\), each \(\ell\)-learner would experience no regret to every comparison strategy \(f:\mathcal{X}\to[0,1]\), because it would be minimizing the loss pointwise in each round. More formally, if \(\hat{p}_{t}=p_{t}\), then for any predictor \(f:\mathcal{X}\to[0,1]\) and loss function \(\ell\),

\[\sum_{t=1}^{T}\underset{y\sim\text{Ber}(p_{t})}{\mathbb{E}}[\ell(y,k^{\ell}(\hat{p}_{t}))]\leq\sum_{t=1}^{T}\underset{y\sim\text{Ber}(p_{t})}{\mathbb{E}}[\ell(y,f(x_{t}))].\]

Ensuring that no \(\ell\)-learner has regret to _any_ comparison strategy \(f:\mathcal{X}\to[0,1]\) and loss function \(\ell\) is too much to hope for. Instead, we restrict our attention to a family of benchmark predictors \(\mathcal{F}\) and a family of loss functions \(\mathcal{L}\). Given \(\mathcal{L}\) and \(\mathcal{F}\), we want the forecaster to make forecasts such that every \(\ell\)-learner for \(\ell\in\mathcal{L}\) can use the above post-processing function \(k^{\ell}\) to achieve low regret with respect to any comparison strategy \(f\in\mathcal{F}\). We consider two flavors of regret: (1) one in which we compare our performance to that of a single comparison strategy \(f\) fixed throughout the entire horizon \(T\) (essentially external regret), and (2) one where our comparison is to a comparison function that can depend on which forecast is made at each round -- i.e. a baseline \(f_{p}\) for every forecast \(p\in\mathcal{P}\). This is a notion of swap regret. Swap omniprediction was first defined in [1].
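Before formalizing these regret notions, here is a minimal sketch (ours) of the post-processing maps \(k^{\ell}\) from Definition 2.1 that the \(\ell\)-learners apply; the grid search is an arbitrary implementation choice standing in for the exact argmin:

```python
def post_process(loss, p_hat, grid_size=1000):
    """Approximate k^ell(p_hat) = argmin_a (1 - p_hat)*loss(0, a) + p_hat*loss(1, a)
    by searching over a finite grid of actions a in [0, 1]."""
    actions = [i / grid_size for i in range(grid_size + 1)]
    return min(actions,
               key=lambda a: (1 - p_hat) * loss(0, a) + p_hat * loss(1, a))

squared = lambda y, a: (y - a) ** 2   # for squared loss, k(p) = p
absolute = lambda y, a: abs(y - a)    # for absolute loss, k(p) thresholds at 1/2
print(post_process(squared, 0.3))     # ~0.3
print(post_process(absolute, 0.3))    # 0.0
```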
**Definition 2.2** (Omnipredictor).: _For any transcript \(\pi_{1:T}\) that is generated by forecaster \(\mathscr{F}\) and adversary Adv, the forecaster's swap-omniprediction regret with respect to \(\{f_{p}\}_{p\in\mathcal{P}}\in\mathcal{F}^{m}\) and loss functions \(\{\ell_{p}\}_{p\in\mathcal{P}}\in\mathcal{L}^{m}\) is defined as_

\[s\mathcal{O}(\pi_{1:T},\{\ell_{p}\}_{p\in\mathcal{P}},\{f_{p}\}_{p\in\mathcal{P}})=\sum_{p\in\mathcal{P}}\frac{n(\pi_{1:T},p)}{T}\left(\frac{1}{n(\pi_{1:T},p)}\sum_{t\in S(p)}\ell_{p}(y_{t},k^{\ell_{p}}(\hat{p}_{t}))-\frac{1}{n(\pi_{1:T},p)}\sum_{t\in S(p)}\ell_{p}(y_{t},f_{p}(x_{t}))\right).\]

_Similarly, we define omniprediction regret as_

\[\mathcal{O}(\pi_{1:T},\ell,f)=s\mathcal{O}(\pi_{1:T},\{\ell_{p}\}_{p\in\mathcal{P}},\{f_{p}\}_{p\in\mathcal{P}})=\frac{1}{T}\left(\sum_{t=1}^{T}\ell(y_{t},k^{\ell}(\hat{p}_{t}))-\sum_{t=1}^{T}\ell(y_{t},f(x_{t}))\right)\]

_where \(f_{p}=f\) and \(\ell_{p}=\ell\) for each \(p\in\mathcal{P}\)._

_Finally, we use the following notations:_

_Swap-omniprediction regret with respect to \((\{\ell_{p}\}_{p\in\mathcal{P}},\mathcal{F})\) and \((\mathcal{L},\mathcal{F})\):_

\[s\mathcal{O}(\pi_{1:T},\{\ell_{p}\}_{p\in\mathcal{P}},\mathcal{F})=\max_{\{f_{p}\}_{p\in\mathcal{P}}\in\mathcal{F}^{m}}s\mathcal{O}(\pi_{1:T},\{\ell_{p}\}_{p\in\mathcal{P}},\{f_{p}\}_{p\in\mathcal{P}})\]
\[s\mathcal{O}(\pi_{1:T},\mathcal{L},\mathcal{F})=\max_{\{\ell_{p}\}_{p\in\mathcal{P}}\in\mathcal{L}^{m},\{f_{p}\}_{p\in\mathcal{P}}\in\mathcal{F}^{m}}s\mathcal{O}(\pi_{1:T},\{\ell_{p}\}_{p\in\mathcal{P}},\{f_{p}\}_{p\in\mathcal{P}})\]

_Omniprediction regret with respect to \((\ell,\mathcal{F})\) and \((\mathcal{L},\mathcal{F})\):_

\[\mathcal{O}(\pi_{1:T},\ell,\mathcal{F})=\max_{f\in\mathcal{F}}\mathcal{O}(\pi_{1:T},\ell,f)\quad\mathcal{O}(\pi_{1:T},\mathcal{L},\mathcal{F})=\max_{\ell\in\mathcal{L},f\in\mathcal{F}}\mathcal{O}(\pi_{1:T},\ell,f).\]

In words, for each forecast \(p\in\mathcal{P}\), we consider the subsequence of rounds \(S(p)\) in which the forecast was \(p\), and compare the \(\ell\)-learner's accumulated loss over these rounds to what the \(\ell\)-learner could have obtained instead by choosing actions using the benchmark predictor \(f_{p}\). We allow this comparison to be made against a different predictor \(f_{p}\) and loss function \(\ell_{p}\) for each possible forecast \(p\in\mathcal{P}\). Swap omniprediction regret corresponds to the worst-case regret, as measured over the choice of comparison predictors \(\{f_{p}\}_{p\in\mathcal{P}}\) and loss functions \(\{\ell_{p}\}_{p\in\mathcal{P}}\), of the regret averaged over forecasts \(p\in\mathcal{P}\). The non-swap version corresponds to requiring that the comparison benchmark is constrained to use the same \(f\) and the same \(\ell\) for every forecast \(p\in\mathcal{P}\). Since swap omniprediction regret compares to a richer class of baseline predictors and loss functions than the non-swap version, we have \(\mathcal{O}(\pi_{1:T},\mathcal{L},\mathcal{F})\leq s\mathcal{O}(\pi_{1:T},\mathcal{L},\mathcal{F})\).

### 2.4 Multicalibration

[1, 1] have shown a quite close connection between omniprediction and multicalibration, which was originally introduced as a multi-group fairness concept by [1]. Since its introduction, various definitions of approximate multicalibration have been studied, but unfortunately, all of these variants have been referred to only as "multicalibration", leading to some confusion in the literature.
To bring clarity to the landscape of approximate multicalibration definitions, we define various multicalibration error measures and give them distinguishing names, following the exposition in [14]. All of them are defined with respect to a class of predictors \(\mathcal{F}\) for which we make the following assumption. **Assumption 2.1**.: _We assume \(\mathcal{F}\) contains the constant function \(I\): \(I(x)=1\) for all \(x\in\mathcal{X}\)._ Just as how we define two versions of omniprediction (a standard and a "swap" variant), we define standard and "swap" variants of multicalibration (swap multicalibration was first defined in [1] in the batch setting). Informally, swap multicalibration allows the function \(f\) with which calibration is being measured with respect to, to vary with the forecast \(p\in\mathcal{P}\) of the forecaster. **Definition 2.3** (Multicalibration Errors (of Various Flavors)).: _For any transcript \(\pi_{1:T}=\{(x_{t},y_{t},\hat{p}_{t})\}_{t=1}^{T}\) generated by some forecaster \(\mathscr{F}\) and adversary Adv, we define the forecaster's multicalibration error with respect to \(f\in\mathcal{F}\) for forecast \(p\in P\) as_ \[K(\pi_{1:T},p,f)=\frac{1}{n(\pi_{1:T},p)}\left(\sum_{t\in S(\pi_{1:T},p)}f(x_{t })\cdot(y_{t}-\hat{p}_{t})\right)\] _if \(n(\pi_{1:T},p)\geq 1\) and 0 otherwise._ _We define \(L_{1}\)-, \(L_{2}\)-, and \(L_{\infty}\)-swap-multicalibration error with respect to \(\{f_{p}\}_{p\in\mathcal{P}}\in\mathcal{F}^{m}\) respectively as_ \[\overline{sK}_{1}(\pi_{1:T},\{f_{p}\}_{p\in\mathcal{P}}) =\sum_{p\in\mathcal{P}}\frac{n(\pi_{1:T},p)}{T}\left|K(\pi_{1:T},p, f_{p})\right|\] \[\overline{sK}_{2}(\pi_{1:T},\{f_{p}\}_{p\in\mathcal{P}}) =\sum_{p\in\mathcal{P}}\frac{n(\pi_{1:T},p)}{T}\left(K(\pi_{1:T},p,f_{p})\right)^{2}\] \[\overline{sK}_{\infty}(\pi_{1:T},\{f_{p}\}_{p\in\mathcal{P}}) =\max_{p\in\mathcal{P}}\frac{n(\pi_{1:T},p)}{T}|K(\pi_{1:T},p,f_{ p})|.\] _We define the non-swap versions of \(L_{1}\)-, \(L_{2}\)-, and \(L_{\infty}\)-multicalibration error as follows, in which the functions \(f_{p}\) must be the same function \(f\) for all \(p\):_ \[\overline{K}_{1}(\pi_{1:T},f) =\overline{sK}_{1}(\pi_{1:T},\{f_{p}\}_{p\in\mathcal{P}})\] \[\overline{K}_{2}(\pi_{1:T},f) =\overline{sK}_{2}(\pi_{1:T},\{f_{p}\}_{p\in\mathcal{P}})\] \[\overline{K}_{\infty}(\pi_{1:T},f) =\overline{sK}_{\infty}(\pi_{1:T},\{f_{p}\}_{p\in\mathcal{P}})\] _where \(f_{p}=f\) for each \(p\in\mathcal{P}\)._ _Finally, we define multicalibration errors with respect to a family of predictors \(\mathcal{F}\) respectively as_ \[\overline{sK}_{1}(\pi_{1:T},\mathcal{F}) =\max_{\{f_{p}\}_{p\in\mathcal{P}}\in\mathcal{F}^{m}}\overline{sK }_{1}(\pi_{1:T},\{f_{p}\}_{p\in\mathcal{P}})\quad\text{and}\quad\overline{K}_ {1}(\pi_{1:T},\mathcal{F})=\max_{f\in\mathcal{F}}\overline{K}_{1}(\pi_{1:T},f)\] \[\overline{sK}_{2}(\pi_{1:T},\mathcal{F}) =\max_{\{f_{p}\}_{p\in\mathcal{P}}\in\mathcal{F}^{m}}\overline{sK }_{2}(\pi_{1:T},\{f_{p}\}_{p\in\mathcal{P}})\quad\text{and}\quad\overline{K}_ {2}(\pi_{1:T},\mathcal{F})=\max_{f\in\mathcal{F}}\overline{K}_{2}(\pi_{1:T},f)\] \[\overline{sK}_{\infty}(\pi_{1:T},\mathcal{F}) =\max_{\{f_{p}\}_{p\in\mathcal{P}}\in\mathcal{F}^{m}}\overline{sK }_{\infty}(\pi_{1:T},\{f_{p}\}_{p\in\mathcal{P}})\quad\text{and}\quad\overline{ K}_{\infty}(\pi_{1:T},\mathcal{F})=\max_{f\in\mathcal{F}}\overline{K}_{ \infty}(\pi_{1:T},f).\] As with omniprediction regret, the swap variant always upper-bounds the non-swap variant: e.g. 
\(\overline{K}_{1}(\pi_{1:T},\mathcal{F})\leq\overline{sK}_{1}(\pi_{1:T},\mathcal{F})\) and so forth. The following relationship between the \(L_{2}\)- and \(L_{1}\)-(swap-)multicalibration errors will be useful for us:

**Lemma 2.1**.:

\[\overline{sK}_{1}(\pi_{1:T},\mathcal{F})\leq\sqrt{\overline{sK}_{2}(\pi_{1:T},\mathcal{F})}\quad\text{and}\quad\overline{K}_{1}(\pi_{1:T},\mathcal{F})\leq\sqrt{\overline{K}_{2}(\pi_{1:T},\mathcal{F})}.\]

## 3 Oracle-efficient \(L_{2}\)-Swap-Multicalibration

In this section, we derive our main technical result: an oracle efficient algorithm for obtaining \(L_{2}\)-swap-multicalibration with respect to a class of functions \(\mathcal{F}\). By "oracle efficient", we mean that our algorithm is an efficient reduction to an assumed algorithm (an "oracle") for obtaining the standard notion of external regret (with respect to the squared error loss function) to the benchmark class \(\mathcal{F}\). For example, for \(\mathcal{F}\) consisting of all linear functions, there are existing efficient algorithms that obtain regret scaling only logarithmically with \(T\) [13, 14, 15]. Even for classes \(\mathcal{F}\) for which algorithms with worst-case no-regret guarantees are not known (or do not exist), heuristics like online gradient descent often succeed in practice.

At a high level, our approach will proceed as follows:

1. First, we define a notion of contextual swap regret with respect to the squared error loss: informally, a predictor has contextual swap regret relative to a benchmark class of functions \(\mathcal{F}\) if, _restricted to any one of its level-sets_, the predictor has higher squared error than the best function in \(\mathcal{F}\), on the sequence of features and labels realized in hindsight.

2. Next, we give an algorithm for obtaining diminishing contextual swap regret for any benchmark class \(\mathcal{F}\) by reducing to algorithms for obtaining diminishing external regret to \(\mathcal{F}\). The reduction maintains a copy of the external-regret algorithm for each level-set of the predictor, and advances only one of the simulated copies at each round. We adapt techniques from [10, 20], which have been used for similar reductions from (non-contextual) swap regret to external regret.

3. Finally, we adapt a characterization theorem of [13] and [14] from the batch to the online setting, which relates contextual swap regret to \(L_{2}\)-swap-multicalibration.

In Section 3.1 we define the online squared loss regression oracles that we reduce to, and observe that discretizing their predictions results in only a small increase in their regret bounds. In Section 3.2 we give our main algorithmic construction for obtaining diminishing contextual swap regret. In Section 3.3 we show how bounds on contextual swap regret imply bounds on \(L_{2}\)-swap-multicalibration. These will be key ingredients for us in Section 4, in which we observe that online \(L_{2}\)-swap-multicalibration implies online \(L_{1}\)-swap-multicalibration, which in turn implies online (swap-)omniprediction. All missing proofs in this section can be found in Appendix B.

### 3.1 Online Regression Oracles and Modifications

In this section, we define online regression oracles, which are online prediction algorithms with the guarantee that they have diminishing squared error regret with respect to a class of predictors \(\mathcal{F}\) in adversarial settings.
**Definition 3.1** (Online Squared Loss Regression Oracle).: _In each round \(t\in[T]\), an online squared loss regression oracle \(\mathcal{A}:(\mathcal{X}\times\mathcal{Y})^{*}\times\mathcal{X}\to[0,1]\) maps a sequence of (feature, label) pairs \(\{(x_{\tau},y_{\tau})\}_{\tau=1}^{t-1}\) and some new feature vector \(x_{t}\) to a single number \(\hat{y}_{t}=\mathcal{A}(\{(x_{\tau},y_{\tau})\}_{\tau=1}^{t-1},x_{t})\). Its regret with respect to \(f\in\mathcal{F}\) is_

\[\text{regret}(\{(x_{t},y_{t},\hat{y}_{t})\}_{t=1}^{T},f):=\sum_{t=1}^{T}(\hat{y}_{t}-y_{t})^{2}-\sum_{t=1}^{T}(f(x_{t})-y_{t})^{2}.\]

_An oracle \(\mathcal{A}\) has a regret guarantee \(r_{\mathcal{A}}(T,\mathcal{F})\) if it can guarantee against any adaptively chosen sequence \(\{(x_{t},y_{t})\}_{t=1}^{T}\) (i.e. \((x_{t},y_{t})\) can be chosen as a function of \(\{(x_{\tau},y_{\tau},\hat{y}_{\tau})\}_{\tau=1}^{t-1}\)):_

\[\max_{f\in\mathcal{F}}\text{regret}(\{(x_{t},y_{t},\hat{y}_{t})\}_{t=1}^{T},f)\leq r_{\mathcal{A}}(T,\mathcal{F}).\]

**Remark 3.1**.: _We can assume without loss of generality that an online regression oracle \(\mathcal{A}\) is deterministic, because both its action space and the squared loss are convex. More formally, if an oracle \(\mathcal{A}\) were to output a distribution over the unit interval \(q_{t}\in\Delta([0,1])\) in each round \(t\), it is always at least as good (i.e. incurs no larger squared error) to instead deterministically output the expected prediction \(\mathbb{E}_{\hat{y}_{t}\sim q_{t}}[\hat{y}_{t}]\). This follows from Jensen's inequality:_

\[\sum_{t=1}^{T}\biggl{(}\operatorname*{\mathbb{E}}_{\hat{y}_{t}\sim q_{t}}[\hat{y}_{t}]-y_{t}\biggr{)}^{2}\leq\sum_{t=1}^{T}\operatorname*{\mathbb{E}}_{\hat{y}_{t}\sim q_{t}}\left[(\hat{y}_{t}-y_{t})^{2}\right]\]

In order to produce predictions that are contained within the finite forecasting space \(\mathcal{P}\), we project and round the output of the original oracle \(\mathcal{A}\) to \([\frac{1}{m}]\). Projecting it to be within \([0,1]\) does not hurt at all because \(y_{t}\in\{0,1\}\). Also, because the squared error is Lipschitz, this does not increase the regret of the oracle by much. We write \(\mathcal{A}^{m}:(\mathcal{X}\times\mathcal{Y})^{*}\times\mathcal{X}\to\left[\frac{1}{m}\right]\) to denote an oracle that rounds the output of \(\mathcal{A}\) to the nearest multiple of \(\frac{1}{m}\):

\[\mathcal{A}^{m}(\{(x_{\tau},y_{\tau})\}_{\tau=1}^{t-1},x_{t})=\text{Round}\left(\mathcal{A}(\{(x_{\tau},y_{\tau})\}_{\tau=1}^{t-1},x_{t}),\frac{1}{m}\right)\]

where \(\text{Round}(x,\frac{1}{m})=\operatorname*{argmin}_{v\in[\frac{1}{m}]}|x-v|\).

**Lemma 3.1**.: _Suppose \(m\geq 1\). For any \(\mathcal{A}\), the regret guarantee of its rounded version \(\mathcal{A}^{m}\) can be bounded as follows:_

\[r_{\mathcal{A}^{m}}(T,\mathcal{F})\leq r_{\mathcal{A}}(T,\mathcal{F})+\frac{3T}{m}.\]

As noted in Remark 3.1, we have so far assumed without loss of generality that \(\mathcal{A}^{m}\) outputs a deterministic prediction \(\hat{y}_{t}\in[\frac{1}{m}]\). However, in Section 3.2 we will need a randomized oracle \(\tilde{\mathcal{A}}:(\mathcal{X}\times\mathcal{Y})^{*}\times\mathcal{X}\to\Delta([\frac{1}{m}])\) that maps a sequence of (feature, label) pairs and a new feature vector to a _distribution_ over \([\frac{1}{m}]\) that has full support. We use a tilde to distinguish the randomized oracle \(\tilde{\mathcal{A}}\) that we will construct from the deterministic oracles \(\mathcal{A}\) and \(\mathcal{A}^{m}\).
In order to construct a randomized oracle that makes full-support predictions from a deterministic oracle \(\mathcal{A}^{m}\), we select the output of \(\mathcal{A}^{m}\) with probability \(1-\frac{1}{T}\) and otherwise choose a prediction from \([\frac{1}{m}]\) uniformly at random. More formally, we define \(\tilde{\mathcal{A}}^{m}\) to be the randomized oracle such that for every \(j\in[m]\),

\[\Pr\left[\tilde{\mathcal{A}}^{m}(\{x_{\tau},y_{\tau}\}_{\tau=1}^{t-1},x_{t})=\frac{j}{m}\right]=\frac{1}{T}\cdot\frac{1}{m}+\left(1-\frac{1}{T}\right)\cdot 1\left[\mathcal{A}^{m}\left(\{x_{\tau},y_{\tau}\}_{\tau=1}^{t-1},x_{t}\right)=\frac{j}{m}\right].\]

The expected regret of \(\tilde{\mathcal{A}}^{m}\) can be bounded in the following manner:

**Lemma 3.2**.: _Suppose \(m\geq 1\). For any sequence \(\{(x_{t},y_{t})\}_{t=1}^{T}\) that is chosen adversarially and adaptively against \(\tilde{\mathcal{A}}^{m}\), the expected regret of \(\tilde{\mathcal{A}}^{m}\) over its own randomness can be bounded as follows: for any \(f\in\mathcal{F}\), it results in \(\pi_{1:T}\) such that_

\[\sum_{t=1}^{T}\operatorname*{\mathbb{E}}_{\hat{y}_{t}^{\prime}\sim\tilde{\mathcal{A}}^{m}(\{(x_{\tau},y_{\tau})\}_{\tau=1}^{t-1},x_{t})}\left[(\hat{y}_{t}^{\prime}-y_{t})^{2}-(f(x_{t})-y_{t})^{2}\,\middle|\,\pi_{1:t-1}\right]\leq r_{\mathcal{A}}(T,\mathcal{F})+\frac{3T}{m}+1\]

### 3.2 Contextual Swap Regret

By analogy to _swap regret_ [1], we define a new notion of regret called _contextual swap regret_. Informally, swap regret in a \(k\)-expert learning problem is the requirement that the learner have no regret on any of the subsequences on which they played each of the \(k\) experts. By analogy, we define contextual swap regret as the requirement that the learner have no squared-error regret to any model in \(\mathcal{F}\), not just overall, but also on each of the subsequences on which the learner played any of the \(m\) values defining its level sets.

**Definition 3.2** (Contextual Swap Regret).: _Given a transcript \(\pi_{1:T}\in\Pi^{*}\) generated by a forecaster \(\mathscr{F}:\Pi^{*}\times\mathcal{X}\to\Delta\mathcal{P}\), its contextual swap regret with respect to \(\{f_{p}\}_{p\in\mathcal{P}}\) is_

\[\sum_{p\in\mathcal{P}}\sum_{t\in S(\pi_{1:T},p)}(\hat{p}_{t}-y_{t})^{2}-(f_{p}(x_{t})-y_{t})^{2}.\]

Observe that contextual swap regret is a strict generalization of the standard notion of swap regret: we recover the standard notion of swap regret if \(\mathcal{F}\) consists of the constant functions \(c_{p}\) for each \(p\in\mathcal{P}\), where \(c_{p}(x)=p\) for all \(x\in\mathcal{X}\). We remark that our notion of contextual swap regret is the online analogue of the definition of swap agnostic learning in [1] and of what is referred to as the "swap-regret" like condition in [1] -- both of which are in the batch setting.

To design an oracle efficient algorithm for minimizing contextual swap regret, we adapt ideas from [1] and [11], which design reductions from the standard notion of swap regret to external regret in the expert learning setting. Before describing how to construct forecaster \(\mathscr{F}_{\text{ContextualSwap}}\) in Algorithm 1, we introduce more notation and describe the algorithm at a high level. We instantiate \(m\) oracles \(\{\tilde{\mathcal{A}}^{m}_{i}\}_{i\in[m]}\), where each \(\tilde{\mathcal{A}}^{m}_{i}:(\mathcal{X}\times\mathcal{Y})^{*}\times\mathcal{X}\to\Delta[m]\) is the randomized oracle that we showed how to construct from an online squared loss regression oracle \(\mathcal{A}\) in Section 3.1; a minimal code sketch of this wrapper follows.
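In the sketch below (ours), the class names, interfaces, and the toy gradient-descent base oracle are our own illustrative choices; we also index the grid as \(\{0,1/m,\dots,1\}\), which differs cosmetically from the indexing over \(j\in[m]\) above:

```python
import random

class OGDLinearOracle:
    """Toy deterministic base oracle A: online gradient descent on the
    squared loss over linear predictors. (One possible instantiation;
    the interface and step size are assumptions of this sketch.)"""

    def __init__(self, dim, lr=0.05):
        self.w = [0.0] * dim
        self.lr = lr

    def predict(self, x):
        return sum(wi * xi for wi, xi in zip(self.w, x))

    def update(self, x, y):
        g = 2.0 * (self.predict(x) - y)  # derivative of (w.x - y)^2 w.r.t. w.x
        self.w = [wi - self.lr * g * xi for wi, xi in zip(self.w, x)]


class SmoothedRoundedOracle:
    """Sketch of the wrapper: round the base oracle's prediction to the grid
    {0, 1/m, ..., 1} and, with probability 1/T, output a uniformly random
    grid point instead, so every forecast has positive probability."""

    def __init__(self, base, m, T):
        self.base, self.m, self.T = base, m, T

    def weights(self, x):
        """Explicit mixing weights q over the m + 1 grid points."""
        y_hat = min(max(self.base.predict(x), 0.0), 1.0)  # project to [0, 1]
        j_star = round(y_hat * self.m)                    # Round(., 1/m)
        q = [1.0 / (self.T * (self.m + 1))] * (self.m + 1)
        q[j_star] += 1.0 - 1.0 / self.T
        return q                                          # sums to 1

    def forecast(self, x):
        q = self.weights(x)
        j = random.choices(range(self.m + 1), weights=q)[0]
        return j / self.m

    def update(self, x, y):
        self.base.update(x, y)
```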
At any round \(t\in[T]\), the sequence of (feature, label) pairs given to each oracle \(\tilde{\mathcal{A}}^{m}_{i}\) may be different. Hence, we write \(s^{i}_{t}\in(\mathcal{X}\times\mathcal{Y})^{*}\) to denote the sequence of (feature, label) pairs that is fed into the \(i\)th oracle \(\tilde{\mathcal{A}}^{m}_{i}\) in round \(t\). It will sometimes be convenient for us to describe the oracle as outputting the weights over \(\mathcal{P}\) that correspond to the distribution it is sampling from (note that by our construction in Section 3.1, these weights are explicitly defined, so this is without loss of generality): we write \(q^{i}_{t}=\tilde{\mathcal{A}}^{m}_{i}(s^{i}_{t},x_{t})\) where

\[q^{i}_{t}[j]=\Pr\left[\tilde{\mathcal{A}}^{m}_{i}(s^{i}_{t},x_{t})=\frac{j}{m}\right]\]

for every \(j\in[m]\), to denote the mixing weights over possible forecasts \(\mathcal{P}=[\frac{1}{m}]\) for the \(i\)th oracle in round \(t\).

```
1: Initialize the randomized oracles \(\tilde{\mathcal{A}}_{i}^{m}\) for each \(i\in[m]\) (by doing so, we implicitly initialize \(\mathcal{A}_{i}^{m}\)).
2: Initialize the sequence \(s_{1}^{i}=\{\}\) for each \(i\in[m]\).
3: for \(t=1,\ldots,T\) do
4:   Forecaster \(\mathscr{F}_{\text{ContextualSwap}}\) observes the feature vector \(x_{t}\).
5:   \(q_{t}^{i}=\tilde{\mathcal{A}}_{i}^{m}(s_{t}^{i},x_{t})\) for each \(i\in[m]\).
6:   Form \(a_{t}\in\Delta([m])\) such that \(a_{t}=\sum_{i=1}^{m}q_{t}^{i}a_{t}[i]\).   (1)
7:   Let \(\mathbf{i}_{t}\) be distributed according to \(a_{t}\) and sample \(i_{t}\sim\mathbf{i}_{t}\) to decide which oracle to use.
8:   Let \(\mathbf{j}_{t}\) be distributed according to \(q_{t}^{i_{t}}\), and sample \(j_{t}\sim\mathbf{j}_{t}\).
9:   Forecaster \(\mathscr{F}_{\text{ContextualSwap}}\) forecasts \(\hat{p}_{t}=\frac{j_{t}}{m}\).
10:  Forecaster \(\mathscr{F}_{\text{ContextualSwap}}\) observes \(y_{t}\).
11:  Update the sequence fed into the \(i_{t}\)-th oracle: \(s_{t+1}^{i_{t}}=s_{t}^{i_{t}}\cup\{(x_{t},y_{t})\}\), and set \(s_{t+1}^{i}=s_{t}^{i}\) for all other \(i\neq i_{t}\).
12:  Adversary Adv chooses \((x_{t+1},y_{t+1})\).
13: end for
```
**Algorithm 1** \(\mathscr{F}_{\text{ContextualSwap}}(\mathcal{F},\mathcal{A},m,T)\)

The reason why we need to use the randomized version of the oracle we construct--which mixes in a uniform distribution over \(\mathcal{P}\)--rather than the deterministic oracle we start with, is to ensure that there exists a distribution \(a_{t}\) that satisfies equation (1). In that equation, \(a_{t}\) may be thought of as a stationary distribution for the Markov chain whose transition probabilities are described by the \(q_{t}^{i}\)'s: the probability of going from state \(i\) to \(j\) is \(q_{t}^{i}[j]\). Mixing in the uniform distribution ensures that \(q_{t}^{i}[j]>0\) for all \(i,j\in[m]\), which in turn guarantees that there exists a unique stationary distribution \(a_{t}\) in every round \(t\in[T]\)--thus making sure that equation (1) has a solution.

**Fact 3.1** ([17]).: _For any Markov chain where the transition probability from state \(i\) to \(j\) is non-zero, \(q^{i}[j]>0\), for every \(i,j\in[m]\) (i.e. the resulting graph representation is strongly connected), there exists a unique stationary distribution \(a\in\Delta([m])\) such that_

\[a=\sum_{i=1}^{m}q^{i}a[i].\]

Note that the additional internal state that the forecaster keeps around is \(\theta_{t}=(i_{t},j_{t})\) in this case.
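Concretely, since every \(q_{t}^{i}[j]>0\), the stationary distribution of Eq. (1) can be found in each round by solving a small linear system. A minimal sketch (ours, using numpy; the example transition matrix is made up):

```python
import numpy as np

def stationary_distribution(Q):
    """Solve a = Q^T a with sum(a) = 1, i.e. a[j] = sum_i Q[i, j] * a[i],
    for a row-stochastic matrix Q with strictly positive entries; by
    Fact 3.1 this fixed point exists and is unique."""
    m = Q.shape[0]
    A = np.vstack([Q.T - np.eye(m), np.ones((1, m))])
    b = np.concatenate([np.zeros(m), [1.0]])
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    return a

# Made-up 3x3 example: Q[i, j] = probability that oracle i proposes forecast j/m.
Q = np.array([[0.8, 0.1, 0.1],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])
a = stationary_distribution(Q)
print(a, np.allclose(a, Q.T @ a))  # a is a fixed point of Q^T
```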
With this additional internal state, the transcript looks like

\[\pi_{1:t}=\{(x_{\tau},y_{\tau},\hat{p}_{\tau},(i_{\tau},j_{\tau}))\}_{\tau=1}^{t}.\]

Adapting the argument from [10] directly here would show that the contextual swap regret of forecaster \(\mathscr{F}_{\text{ContextualSwap}}\) is bounded in expectation over \(\{i_{t}\sim\mathbf{i}_{t},j_{t}\sim\mathbf{j}_{t}(i_{t})\}_{t=1}^{T}\). More specifically, against any adversary Adv, the following quantity

\[\sup_{\{f_{j}\}_{j\in[m]}\in\mathcal{F}_{B}^{m}}\mathbb{E}_{\{(i_{t}\sim\mathbf{i}_{t},j_{t}\sim\mathbf{j}_{t}(i_{t}))\}_{t=1}^{T}}\left[\sum_{t=1}^{T}(j_{t}/m-y_{t})^{2}-(f_{j_{t}}(x_{t})-y_{t})^{2}\right]\]
\[=\sup_{\{f_{j}\}_{j\in[m]}\in\mathcal{F}_{B}^{m}}\sum_{t=1}^{T}\mathbb{E}_{\{(i_{\tau}\sim\mathbf{i}_{\tau},j_{\tau}\sim\mathbf{j}_{\tau}(i_{\tau}))\}_{\tau=1}^{t}}\left[(j_{t}/m-y_{t})^{2}-(f_{j_{t}}(x_{t})-y_{t})^{2}\right] \tag{2}\]

would be bounded, where in each round the expectation is taken over the previous rounds' realizations, as opposed to with respect to a single realized transcript. However, later in Section 3.3, we need to use a concentration argument to show that with high probability the realized contextual swap regret is bounded, and such a concentration argument requires forming a martingale difference sequence, where we need the per-round regret to be bounded in expectation conditional on the realized transcript. Hence, what we actually need is that forecaster \(\mathscr{F}_{\text{ContextualSwap}}\) obtains \(\pi_{1:T}\) such that the following quantity is bounded:

\[\sup_{\{f_{j}\}_{j\in[m]}\in\mathcal{F}_{B}^{m}}\sum_{t=1}^{T}\underset{i^{\prime}_{t}\sim\mathbf{i}_{t},j^{\prime}_{t}\sim\mathbf{j}_{t}}{\mathbb{E}}\left[(j^{\prime}_{t}/m-y_{t})^{2}-(f_{j^{\prime}_{t}}(x_{t})-y_{t})^{2}|\pi_{1:t-1}\right]. \tag{3}\]

Once again, note that quantity (3) is not the same as (2): in each round \(t\), (2) takes the expectation over all possible realizations of the algorithm's randomness in previous rounds, while (3) is with respect to the specific realization of some \(\pi_{1:t-1}\). Suppose we were to use the original reduction of [1]: the forecast \(j_{t}/m\) is chosen according to \(a_{t}\) in each round and every oracle is updated, so \(\{(x_{\tau},y_{\tau},\hat{p}_{\tau})\}_{\tau=1}^{t-1}\) is sufficient to calculate the distribution for \(\mathbf{\hat{p}}_{t}\). However, when updating every oracle via \(s^{i}_{t}\), this approach requires re-scaling the loss vector fed to the \(i\)th oracle by \(a[i]\), and it is not clear how to do such rescaling in our setting where we provide feedback to the oracles \(\tilde{\mathcal{A}}_{i}\) via feature-label pairs \((x,y)\). Instead, we use [11]'s approach and use the following concentration lemma to bound the quantity (3) in Theorem 3.1. For forecaster \(\mathscr{F}_{\text{ContextualSwap}}\), because the random variable \(\mathbf{j}_{t}\) necessarily depends on \(i_{t}\), we make their dependence explicit by writing \(\mathbf{j}_{t}(i_{t})\) -- i.e. the distribution of \(\mathbf{j}_{t}\) when the realization of the random variable \(\mathbf{i}_{t}\) is \(i_{t}\).

**Lemma 3.3**.: _Suppose the sequential fat shattering dimension of \(\mathcal{F}_{B}\) is finite at any scale \(\delta\): \(\text{fat}_{\delta}(\mathcal{F}_{B})<\infty\)._
_With probability \(1-4m\rho\), \(\mathscr{F}_{\text{ContextualSwap}}\) results in \(\pi_{1:T}\) such that_

\[\max_{\{f_{j}\}_{j\in[m]}}\left|\frac{1}{T}\sum_{t=1}^{T}\underset{j^{\prime}_{t}\sim\mathbf{j}_{t}(i_{t})}{\mathbb{E}}[(j^{\prime}_{t}/m-y_{t})^{2}-(f_{i_{t}}(x_{t})-y_{t})^{2}|\pi_{1:t-1}]\right.\]
\[\left.\qquad\qquad\qquad-\underset{i^{\prime}_{t}\sim\mathbf{i}_{t},j^{\prime\prime}_{t}\sim\mathbf{j}_{t}(i^{\prime}_{t})}{\mathbb{E}}\left[(j^{\prime\prime}_{t}/m-y_{t})^{2}-(f_{i^{\prime}_{t}}(x_{t})-y_{t})^{2}|\pi_{1:t-1}\right]\right|\]
\[\leq\max(8B,2\sqrt{B})mC_{\mathcal{F}_{B}}\sqrt{\frac{\log(\frac{1}{\rho})}{T}}\]

_where \(C_{\mathcal{F}_{B}}\) is a finite constant that depends on the sequential fat shattering dimension of \(\mathcal{F}_{B}\)._

Because \(\mathcal{F}\) is infinite, we could not have merely taken a union bound over all such possibilities and used Azuma's inequality for each \(f\). Instead, to prove Lemma 3.3, we borrow tools from [1] to argue concentration over all possible \(\{f_{p}\}_{p\in\mathcal{P}}\). Note how the lemma requires the sequential fat shattering dimension of \(\mathcal{F}_{B}\) to be bounded at any scale \(\delta\), and the final bound also depends on this complexity measure. [10] shows that an online learnable \(\mathcal{F}_{B}\) must have a finite sequential fat shattering dimension at any scale \(\delta\), and since we are assuming a regression oracle for \(\mathcal{F}_{B}\), this assumption on the sequential fat shattering dimension is innocuous. See Appendix D for the definition of the sequential fat shattering dimension and more details.

Equipped with the above concentration lemma, we can show the following contextual swap regret bound for forecaster \(\mathscr{F}_{\text{ContextualSwap}}\):

**Theorem 3.1**.: _Fix some \(B>0\). Suppose the sequential fat shattering dimension of \(\mathcal{F}_{B}\) is finite at any scale \(\delta\): \(\text{fat}_{\delta}(\mathcal{F}_{B})<\infty\). Suppose oracle \(\mathcal{A}\)'s regret bound \(r_{\mathcal{A}}(T,\mathcal{F}_{B})\) is concave in the time horizon \(T\). Fix any adversary Adv that forms \((x_{t},y_{t})\) as a function of \(\psi_{1:t-1}=\{(x_{\tau},y_{\tau},\hat{p}_{\tau})\}_{\tau=1}^{t-1}\). With probability \(1-\rho\) over the randomness of \(\{i_{t}\sim\mathbf{i}_{t}\}_{t=1}^{T}\), Forecaster \(\mathscr{F}_{\text{ContextualSwap}}(\mathcal{F}_{B},\mathcal{A},m,T)\) results in \(\pi_{1:T}=\{(x_{t},y_{t},\hat{p}_{t},(i_{t},j_{t}))\}_{t=1}^{T}\) such that_

\[\sup_{\{f_{p}\}_{p\in\mathcal{P}}\in\mathcal{F}_{B}^{m}}\sum_{t=1}^{T}\mathbb{E}_{\hat{p}_{t}^{\prime}}\left[(\hat{p}_{t}^{\prime}-y_{t})^{2}-(f_{\hat{p}_{t}^{\prime}}(x_{t})-y_{t})^{2}|\pi_{1:t-1}\right]\]
\[=\sup_{\{f_{j}\}_{j\in[m]}\in\mathcal{F}_{B}^{m}}\sum_{t=1}^{T}\mathbb{E}_{i_{t}^{\prime}\sim\mathbf{i}_{t},j_{t}^{\prime\prime}\sim\mathbf{j}_{t}(i_{t}^{\prime})}\left[(j_{t}^{\prime\prime}/m-y_{t})^{2}-(f_{j_{t}^{\prime\prime}}(x_{t})-y_{t})^{2}|\pi_{1:t-1}\right]\]
\[\leq\left(mr_{\mathcal{A}}\left(\frac{T}{m},\mathcal{F}_{B}\right)+\frac{3T}{m}+m\right)+\max(8B,2\sqrt{B})mC_{\mathcal{F}_{B}}\sqrt{\frac{\log(\frac{4m}{\rho})}{T}}.\]

Proof.: Fix any \(\{f_{j}\}_{j\in[m]}\). Fix any round \(t\in[T]\) and the transcript up to the previous round, \(\pi_{1:t-1}\).
Then, we have for any \(x_{t},y_{t}\)

\[\mathbb{E}_{i_{t}^{\prime}\sim\mathbf{i}_{t},j_{t}^{\prime\prime}\sim\mathbf{j}_{t}(i_{t}^{\prime})}\Big{[}(j_{t}^{\prime\prime}/m-y_{t})^{2}-(f_{j_{t}^{\prime\prime}}(x_{t})-y_{t})^{2}\Big{|}\pi_{1:t-1}\Big{]}\]
\[=\sum_{i,j\in[m]}a_{t}[i]q_{t}^{i}[j]\cdot\left(\frac{j}{m}-y_{t}\right)^{2}-\sum_{i,j\in[m]}a_{t}[i]q_{t}^{i}[j]\cdot(f_{j}(x_{t})-y_{t})^{2}\]
\[=\sum_{i,j\in[m]}a_{t}[i]q_{t}^{i}[j]\left(\frac{j}{m}-y_{t}\right)^{2}-\sum_{i\in[m]}a_{t}[i](f_{i}(x_{t})-y_{t})^{2}\]
\[=\mathbb{E}_{i_{t}^{\prime}\sim\mathbf{i}_{t},j_{t}^{\prime\prime}\sim\mathbf{j}_{t}(i_{t}^{\prime})}\left[(j_{t}^{\prime\prime}/m-y_{t})^{2}-(f_{i_{t}^{\prime}}(x_{t})-y_{t})^{2}|\pi_{1:t-1}\right] \tag{4}\]

where the second equality follows from (1). For convenience, we define \(L_{y_{t}}\in[0,1]^{m}\) given \(y_{t}\in\mathcal{Y}\) such that for each \(j\in[m]\)

\[L_{y_{t}}[j]=\left(\frac{j}{m}-y_{t}\right)^{2}.\]

Then with probability \(1-\rho\),

\[\sum_{t=1}^{T}\mathbb{E}_{i_{t}^{\prime}\sim\mathbf{i}_{t},j_{t}^{\prime\prime}\sim\mathbf{j}_{t}(i_{t}^{\prime})}\left[(j_{t}^{\prime\prime}/m-y_{t})^{2}-(f_{j_{t}^{\prime\prime}}(x_{t})-y_{t})^{2}|\pi_{1:t-1}\right]\]
\[=\sum_{t=1}^{T}\mathbb{E}_{i_{t}^{\prime}\sim\mathbf{i}_{t},j_{t}^{\prime\prime}\sim\mathbf{j}_{t}(i_{t}^{\prime})}\left[(j_{t}^{\prime\prime}/m-y_{t})^{2}-(f_{i_{t}^{\prime}}(x_{t})-y_{t})^{2}|\pi_{1:t-1}\right]\]
\[\leq\sum_{t=1}^{T}\mathbb{E}_{j_{t}^{\prime\prime}\sim\mathbf{j}_{t}(i_{t})}\left[(j_{t}^{\prime\prime}/m-y_{t})^{2}-(f_{i_{t}}(x_{t})-y_{t})^{2}|\pi_{1:t-1}\right]+\max(8B,2\sqrt{B})mC_{\mathcal{F}_{B}}\sqrt{\frac{\log(\frac{4m}{\rho})}{T}}\]
\[=\sum_{i\in[m]}\sum_{t\in[T]:i_{t}=i}\mathbb{E}_{j^{\prime\prime}\sim\mathbf{j}_{t}(i_{t})}[(j^{\prime\prime}/m-y_{t})^{2}-(f_{i}(x_{t})-y_{t})^{2}|\pi_{1:t-1}]+\max(8B,2\sqrt{B})mC_{\mathcal{F}_{B}}\sqrt{\frac{\log(\frac{4m}{\rho})}{T}}\]
\[=\sum_{i\in[m]}\sum_{t\in[T]:i_{t}=i}\tilde{\mathcal{A}}_{i}^{m}(s_{t}^{i},x_{t})\cdot L_{y_{t}}-(f_{i}(x_{t})-y_{t})^{2}+\max(8B,2\sqrt{B})mC_{\mathcal{F}_{B}}\sqrt{\frac{\log(\frac{4m}{\rho})}{T}}\]

where the first equality follows from (4) and the inequality follows from Lemma 3.3. Lemma 3.2 tells us that for every \(i\in[m]\), the \(i\)th oracle's overall regret is bounded as follows:

\[\sum_{t\in[T]:i_{t}=i}\tilde{\mathcal{A}}_{i}^{m}(s_{t}^{i},x_{t})\cdot L_{y_{t}}-(f_{i}(x_{t})-y_{t})^{2}\leq r_{\mathcal{A}}(T_{i},\mathcal{F})+\frac{3T_{i}}{m}+1\]

where \(T_{i}=|\{t\in[T]:i_{t}=i\}|\). For convenience, write

\[r_{\tilde{\mathcal{A}}^{m}}(T,\mathcal{F})=r_{\mathcal{A}}(T,\mathcal{F})+\frac{3T}{m}+1.\]

In other words, we now need only bound the sum of the expected regret over the \(m\) oracles, where the randomness is taken over the \(j_{t}\)'s.
Note that the sum of the expected regret over \(m\) oracles can be bounded as

\[\sum_{i\in[m]}\sum_{t\in[T]:i_{t}=i}\tilde{\mathcal{A}}_{i}^{m}(s_{t}^{i},x_{t})\cdot L_{y_{t}}-(f_{i}(x_{t})-y_{t})^{2}\]
\[\leq\sum_{i\in[m]}r_{\tilde{\mathcal{A}}^{m}}(|\{t\in[T]:i_{t}=i\}|,\mathcal{F})\]
\[\leq\max_{\{T_{i}\}_{i\in[m]}:\sum_{i\in[m]}T_{i}=T}\sum_{i\in[m]}r_{\tilde{\mathcal{A}}^{m}}(T_{i},\mathcal{F})\]
\[\leq\max_{\{T_{i}\}_{i\in[m]}:\sum_{i\in[m]}T_{i}=T}m\cdot r_{\tilde{\mathcal{A}}^{m}}\left(\frac{1}{m}\sum_{i\in[m]}T_{i},\mathcal{F}\right)\]
\[=m\cdot r_{\tilde{\mathcal{A}}^{m}}\left(\frac{T}{m},\mathcal{F}\right)\]
\[=m\cdot\left(r_{\mathcal{A}}\left(\frac{T}{m},\mathcal{F}\right)+\frac{3T}{m^{2}}+1\right)\]
\[=mr_{\mathcal{A}}\left(\frac{T}{m},\mathcal{F}\right)+\frac{3T}{m}+m\]

where the last inequality follows from applying Jensen's inequality to the concave function \(r_{\tilde{\mathcal{A}}^{m}}(\cdot,\mathcal{F})\).

We remark that although the algorithm maintains \(m\) copies of the underlying regression oracle \(\mathcal{A}\), it only needs to _update_ one of them per round, just as in [10]. Thus, the per-round run-time of our algorithm is equal to the per-round run-time of the regression oracle \(\mathcal{A}\) that we reduce to, plus the additional additive overhead of solving equation (1).

### From Contextual Swap Regret to \(L_{2}\)-Swap-Multicalibration

Below, we extend a characterization proven in [1, 1] within a batch framework to our online setting. The characterization allows us to show that a forecaster's \(L_{2}\)-swap-multicalibration error with respect to \(\mathcal{F}\) can be bounded by its contextual swap regret with respect to \(\mathcal{F}\). The characterization rests on a mild assumption:

**Assumption 3.1**.: _We assume that \(\mathcal{F}\) is closed under affine transformation: if \(f\in\mathcal{F}\), then \(f^{\prime}(x):=af(x)+b\) for every \(a,b\in\mathbb{R}\) also belongs to \(\mathcal{F}\)._

**Remark 3.2**.: _Many natural classes of regression functions are closed under affine transformations: for example the set of all linear functions, the set of all polynomial functions of any bounded degree, and the set of all regression trees of any bounded depth. Other classes (like neural networks with softmax outputs) are not already closed under affine transformation, but can be made so by introducing two new parameters (\(a\) and \(b\)) while maintaining differentiability. Thus we view this as a mild assumption, that is enforceable if it is not already satisfied._

We show that if there is a function \(f\in\mathcal{F}\) and forecast \(p\in\mathcal{P}\) that witnesses the forecaster's calibration error being large, then it must be the case that the forecaster's contextual swap regret is also large. Thus, by contrapositive, if the forecaster's contextual swap regret is small, then it also cannot have large calibration error.

**Lemma 3.4**.: _Fix any \(\pi_{1:T}\), \(p\in\mathcal{P}\), and \(B\geq 1\). Suppose there exists \(f\in\mathcal{F}_{B}\) such that the forecaster's multicalibration error with respect to \(f\) and \(p\) is at least \(\alpha\) for some \(\alpha\in[0,1]\):_

\[K(\pi_{1:T},p,f)\geq\alpha.\]

_Then there exists \(f^{\prime}\in\mathcal{F}_{(1+\sqrt{B})^{2}}\) that witnesses that the forecaster's contextual swap regret for forecast \(p\in\mathcal{P}\) also scales with \(\alpha^{2}\):_

\[\frac{1}{n(\pi_{1:T},p)}\sum_{t\in S(\pi_{1:T},p)}(\hat{p}_{t}-y_{t})^{2}-(f^{\prime}(x_{t})-y_{t})^{2}\geq\frac{\alpha^{2}}{B}\]

Proof.: Fix any \(\pi_{1:T}\).
Let \(f^{\prime}(x)=p+\eta f(x)\) where

\[\eta=\min\left(1,\frac{\alpha}{\frac{1}{n(p)}\sum_{t\in S(\pi_{1:T},p)}f(x_{t})^{2}}\right).\]

Then

\[\frac{1}{n(p)}\sum_{t\in S(p)}\left((\hat{p}_{t}-y_{t})^{2}-(f^{\prime}(x_{t})-y_{t})^{2}\right)\]
\[=\frac{1}{n(p)}\sum_{t\in S(p)}\left((p^{2}-2py_{t}+y_{t}^{2})-(f^{\prime}(x_{t})^{2}-2y_{t}f^{\prime}(x_{t})+y_{t}^{2})\right)\]
\[=\frac{1}{n(p)}\sum_{t\in S(p)}\left(p^{2}-2py_{t}-(p+\eta f(x_{t}))^{2}+2y_{t}(p+\eta f(x_{t}))\right)\]
\[=\frac{1}{n(p)}\sum_{t\in S(p)}\left(-2p\eta f(x_{t})-\eta^{2}f(x_{t})^{2}+2y_{t}\eta f(x_{t})\right)\]
\[=\frac{1}{n(p)}\sum_{t\in S(p)}\left(2\eta f(x_{t})(y_{t}-p)-\eta^{2}f(x_{t})^{2}\right)\]
\[\geq 2\eta\alpha-\frac{\eta^{2}}{n(p)}\sum_{t\in S(p)}f(x_{t})^{2}\]

For convenience, write

\[\tau=\frac{1}{n(p)}\sum_{t\in S(p)}f(x_{t})^{2}\]

where \(\tau\leq B\) because \(f\in\mathcal{F}_{B}\). If \(\alpha\geq\tau\), so that \(\eta=1\), then

\[\frac{1}{n(p)}\sum_{t\in S(p)}\left((\hat{p}_{t}-y_{t})^{2}-(f^{\prime}(x_{t})-y_{t})^{2}\right)\]
\[\geq\eta\left(2\alpha-\eta\tau\right)\]
\[\geq\alpha\]
\[\geq\frac{\alpha^{2}}{B}\]

where the second inequality uses \(\tau\leq\alpha\) and the last uses \(\alpha\leq 1\leq B\). In the other case, when \(\alpha<\tau\), plugging in \(\eta=\frac{\alpha}{\tau}\) immediately gives us

\[\frac{1}{n(p)}\sum_{t\in S(p)}\left((\hat{p}_{t}-y_{t})^{2}-(f^{\prime}(x_{t})-y_{t})^{2}\right)\geq 2\cdot\frac{\alpha^{2}}{\tau}-\frac{\alpha^{2}}{\tau}=\frac{\alpha^{2}}{\tau}\geq\frac{\alpha^{2}}{B}.\qed\]

The contrapositive of the above lemma can be used to show that the forecaster's \(L_{2}\)-swap-multicalibration error can be bounded with its contextual swap regret.

**Theorem 3.2**.: _Fix some \(B\geq 1\) and transcript \(\pi_{1:T}\) generated by forecaster \(\mathscr{F}\) and adversary Adv. Suppose the forecaster's average contextual swap regret with respect to \(\mathcal{F}_{(1+\sqrt{B})^{2}}\) is bounded as follows: for any \(\{f_{p}\}_{p\in\mathcal{P}}\in\mathcal{F}_{(1+\sqrt{B})^{2}}^{m}\), we have_

\[\frac{1}{T}\sum_{p\in\mathcal{P}}\sum_{t\in S(\pi_{1:T},p)}(\hat{p}_{t}-y_{t})^{2}-(f_{p}(x_{t})-y_{t})^{2}\leq\frac{\alpha}{B^{2}}.\]

_Then the forecaster's \(L_{2}\)-swap-multicalibration error with respect to \(\mathcal{F}_{B}\) is at most \(\alpha\):_

\[\overline{sK}_{2}(\pi_{1:T},\mathcal{F}_{B})\leq\alpha.\]

Proof.: Fix \(\pi_{1:T}\) and \(\alpha\in[0,1]\). For the sake of contradiction, suppose the forecaster's \(L_{2}\)-swap-multicalibration error with respect to \(\mathcal{F}_{B}\) is greater than \(\alpha\), meaning there exists some \(\{f_{p}\}_{p\in\mathcal{P}}\in\mathcal{F}_{B}^{m}\) such that

\[\sum_{p\in\mathcal{P}}\frac{n(p)}{T}(K(\pi_{1:T},p,f_{p}))^{2}>\alpha.\]

For each \(p\in\mathcal{P}\), write

\[\alpha_{p}=\frac{n(p)}{T}(K(\pi_{1:T},p,f_{p}))^{2}\]

to denote the multicalibration error with respect to forecast \(p\in\mathcal{P}\). In other words, we have

\[K(\pi_{1:T},p,f_{p}^{*})=\sqrt{\frac{\alpha_{p}T}{n(p)}}\]

where \(f_{p}^{*}\) is either \(f_{p}\in\mathcal{F}_{B}\) or \(-f_{p}\in\mathcal{F}_{B}\) for each \(p\in\mathcal{P}\). Because \(|K(\pi_{1:T},p,f)|\leq\sqrt{B}\) for any \(f\in\mathcal{F}_{B}\), we have \(\frac{1}{\sqrt{B}}K(\pi_{1:T},p,f_{p}^{*})\in[0,1]\). In other words,

\[\sqrt{\frac{\alpha_{p}T}{Bn(p)}}\leq 1\]

and \(K(\pi_{1:T},p,f_{p}^{*})\geq\sqrt{\frac{\alpha_{p}T}{Bn(p)}}\) as \(B\geq 1\).
Then, for each \(p\in\mathcal{P}\), Lemma 3.4 yields that there exists \(f_{p}^{\prime}\in\mathcal{F}_{(1+\sqrt{B})^{2}}\) such that

\[\frac{1}{n(p)}\sum_{t\in S(p)}(\hat{p}_{t}-y_{t})^{2}-(f_{p}^{\prime}(x_{t})-y_{t})^{2}\geq\frac{1}{B}\left(\sqrt{\frac{\alpha_{p}T}{Bn(p)}}\right)^{2}=\frac{1}{B^{2}}\frac{\alpha_{p}T}{n(p)}\]

Adding over \(p\in\mathcal{P}\) gives us

\[\sum_{p\in\mathcal{P}}\sum_{t\in S(p)}(\hat{p}_{t}-y_{t})^{2}-(f_{p}^{\prime}(x_{t})-y_{t})^{2}\geq\sum_{p\in\mathcal{P}}\frac{\alpha_{p}T}{B^{2}}>\frac{\alpha T}{B^{2}}.\]

This is a contradiction to our assumption about the contextual swap regret.

**Remark 3.3**.: _Theorem 3.2 bounds the forecaster's multicalibration error with respect to \(\mathcal{F}_{B}\), a subset of \(\mathcal{F}\) with bounded magnitude. This is necessary, since we have assumed that \(\mathcal{F}\) is closed under affine transformation. Observe that if \(\mathcal{F}\) is closed under affine transformations, and a forecaster has non-zero calibration error with respect to \(\mathcal{F}\), then it must actually have unboundedly large calibration error with respect to \(\mathcal{F}\), because for any \(f\in\mathcal{F}\) that witnesses a failure of \(\alpha\)-approximate multicalibration, the functions \(100f\), \(1000f\), etc. are also in \(\mathcal{F}\). Thus for such function classes, statements of approximate multicalibration must always be with respect to an upper bound on the magnitude of the functions we are considering._

In this section so far, we have shown a connection between contextual swap regret and \(L_{2}\)-multicalibration for a fixed transcript \(\pi_{1:T}\). However, the forecaster \(\mathscr{F}_{\text{ContextualSwap}}\) we have constructed in Section 3.2 only bounds the contextual swap regret in expectation for some fixed \(\{f_{p}\}_{p\in\mathcal{P}}\in\mathcal{F}_{B}^{m}\) -- see Theorem 3.1. And it is not immediate how to go from expected contextual swap regret to expected \(L_{2}\)-multicalibration. Therefore, as before, we can use the same argument as in Lemma 3.3, where we borrowed tools from [1], to prove the following concentration bound.

**Lemma 3.5**.: _Suppose the sequential fat shattering dimension of \(\mathcal{F}_{B}\) is finite at any scale \(\delta\): \(\text{fat}_{\delta}(\mathcal{F}_{B})<\infty\). With probability \(1-4m\rho\), \(\mathscr{F}_{\text{ContextualSwap}}\) results in \(\pi_{1:T}\) such that_

\[\max_{\{f_{p}\}_{p\in\mathcal{P}}}\left|\frac{1}{T}\sum_{t=1}^{T}(\hat{p}_{t}-y_{t})^{2}-(f_{\hat{p}_{t}}(x_{t})-y_{t})^{2}-\underset{\hat{p}_{t}^{\prime}}{\mathbb{E}}\left[(\hat{p}_{t}^{\prime}-y_{t})^{2}-(f_{\hat{p}_{t}^{\prime}}(x_{t})-y_{t})^{2}|\pi_{1:t-1}\right]\right|\]
\[\leq\max(8B,2\sqrt{B})mC_{\mathcal{F}_{B}}\sqrt{\frac{\log(\frac{1}{\rho})}{T}}\]

_where \(C_{\mathcal{F}_{B}}\) is a finite constant that depends on the sequential fat shattering dimension of \(\mathcal{F}_{B}\)._

**Corollary 3.1**.: _Fix some \(B\geq 1\). Assume the sequential fat shattering dimension of \(\mathcal{F}_{B}\) is bounded at any scale \(\delta\)._
_Against any adversary Adv, \(\mathscr{F}_{\text{ContextualSwap}}(\mathcal{F}_{(1+\sqrt{B})^{2}},\mathcal{A},m,T)\) guarantees that the \(L_{2}\)-swap-multicalibration error with respect to \(\mathcal{F}_{B}\) is bounded with high probability as follows: with probability \(1-\rho\) over the randomness of \(\{\hat{p}_{t}\}_{t=1}^{T}\), \(\mathscr{F}_{\text{ContextualSwap}}\) results in \(\pi_{1:T}\) such that_

\[\overline{sK}_{2}(\pi_{1:T},\mathcal{F}_{B})\leq B^{2}\cdot\left(\frac{1}{T}\left(mr_{\mathcal{A}}\left(\frac{T}{m},\mathcal{F}_{B^{\prime}}\right)+\frac{3T}{m}+m\right)+16B^{\prime}mC_{\mathcal{F}_{B^{\prime}}}\sqrt{\frac{\log(\frac{8m}{\rho})}{T}}\right).\]

_where \(B^{\prime}=(1+\sqrt{B})^{2}\) and \(C_{\mathcal{F}_{B^{\prime}}}\) is as defined in Lemma 3.5._

Proof.: With probability \(1-2\rho\), \(\mathscr{F}_{\text{ContextualSwap}}(\mathcal{F}_{B^{\prime}},\mathcal{A},m,T)\) guarantees that

\[\sup_{\{f_{p}\}_{p\in\mathcal{P}}\in\mathcal{F}_{B^{\prime}}^{m}}\frac{1}{T}\sum_{t=1}^{T}(\hat{p}_{t}-y_{t})^{2}-(f_{\hat{p}_{t}}(x_{t})-y_{t})^{2}\]
\[\leq\sup_{\{f_{p}\}_{p\in\mathcal{P}}\in\mathcal{F}_{B^{\prime}}^{m}}\frac{1}{T}\sum_{t=1}^{T}\underset{\hat{p}_{t}^{\prime}}{\mathbb{E}}\left[(\hat{p}_{t}^{\prime}-y_{t})^{2}-(f_{\hat{p}_{t}^{\prime}}(x_{t})-y_{t})^{2}|\pi_{1:t-1}\right]+\max(8B^{\prime},2\sqrt{B^{\prime}})mC_{\mathcal{F}_{B^{\prime}}}\sqrt{\frac{\log(\frac{4m}{\rho})}{T}}\]
\[\leq\frac{1}{T}\left(mr_{\mathcal{A}}\left(\frac{T}{m},\mathcal{F}_{B^{\prime}}\right)+\frac{3T}{m}+m\right)+16B^{\prime}mC_{\mathcal{F}_{B^{\prime}}}\sqrt{\frac{\log(\frac{4m}{\rho})}{T}}\]

where the first inequality follows from Lemma 3.5 and the second from Theorem 3.1. Now, appealing to Theorem 3.2, which tells us that if contextual swap regret with respect to \(\mathcal{F}_{B^{\prime}}\) is bounded, then \(L_{2}\)-multicalibration error with respect to \(\mathcal{F}_{B}\) is bounded, gives us

\[\overline{sK}_{2}(\pi_{1:T},\mathcal{F}_{B})\leq B^{2}\cdot\left(\frac{1}{T}\left(mr_{\mathcal{A}}\left(\frac{T}{m},\mathcal{F}_{B^{\prime}}\right)+\frac{3T}{m}+m\right)+16B^{\prime}mC_{\mathcal{F}_{B^{\prime}}}\sqrt{\frac{\log(\frac{4m}{\rho})}{T}}\right).\]

We end this section by instantiating Corollary 3.1 with concrete rates. Azoury and Warmuth [1] give an efficient algorithm for online squared error linear regression that has a regret bound scaling logarithmically with \(T\) ([12] and [13] give similar bounds):

**Theorem 3.3** ([12]).: _There exists an efficient online forecasting algorithm such that for all sequences \(\{(x_{t},y_{t})\}_{t=1}^{T}\) with \(\|x_{t}\|_{2}\leq 1\) and \(|y_{t}|\leq 1\), and for all parameter vectors \(\theta\):_

\[\frac{1}{T}\left(\sum_{t=1}^{T}(\hat{p}_{t}-y_{t})^{2}-(\langle\theta,x_{t}\rangle-y_{t})^{2}\right)\leq\frac{||\theta||^{2}}{T}+\frac{2d\ln(T+1)}{T}\]

Here \(||\theta||\) corresponds to our bound \(B\) on the magnitude of the comparison functions \(\mathcal{F}\). Thus if \(\mathcal{F}_{(1+\sqrt{B})^{2}}\) is taken to be the set of all linear functions whose bound is less than \((1+\sqrt{B})^{2}\), Theorem 3.3 is giving us a regret bound for \(\mathcal{F}_{(1+\sqrt{B})^{2}}\). We can therefore instantiate Corollary 3.1 to obtain the following bound for efficient (swap) multicalibration with respect to linear functions:

**Corollary 3.2**.: _Fix \(B\geq 1\). Let \(\mathcal{F}\) be the set of all \(d\)-dimensional linear functions, and let \(\mathcal{F}_{B}\) be the set of all such functions with parameter norm \(||\theta||^{2}\leq(1+\sqrt{B})^{2}\)._
_Then against any adversary who chooses a sequence of \(d\) dimensional examples \((x_{t},y_{t})_{t=1}^{T}\) with \(||x_{t}||\leq 1\), letting \(\mathcal{A}\) be the online forecasting algorithm for linear functions from Theorem 3.3 and letting \(m=T^{1/4}\), \(\mathscr{F}_{\text{ContextualSwap}}(\mathcal{F}_{(1+\sqrt{B})^{2}},\mathcal{A},m,T)\) guarantees that the \(L_{2}\)-swap-multicalibration error with respect to \(\mathcal{F}_{B}\) is bounded with probability \(1-\rho\) over the randomness of the forecaster:_

\[\overline{sK}_{2}(\pi_{1:T},\mathcal{F}_{B})\leq B^{2}\cdot\left(\frac{1}{T}\left(m\left(\frac{m||\theta||^{2}}{T}+\frac{2dm\ln(T+1)}{T}\right)+\frac{3T}{m}+m\right)+16B^{\prime}mC_{\mathcal{F}_{B^{\prime}}}\sqrt{\frac{\log(\frac{8m}{\rho})}{T}}\right)\]
\[\leq\frac{B^{2}m^{2}B^{\prime}}{T^{2}}+\frac{2dB^{2}m^{2}\ln(T+1)}{T^{2}}+\frac{3B^{2}}{m}+\frac{B^{2}m}{T}+16B^{2}B^{\prime}mC_{\mathcal{F}_{B^{\prime}}}\sqrt{\frac{\log(\frac{8m}{\rho})}{T}}\]
\[=\tilde{O}\left(dB^{3}\sqrt{\ln\left(\frac{1}{\rho}\right)}T^{-1/4}\right).\]

Thus we have a computationally efficient algorithm (for linear functions, we need not say "oracle efficient" since the oracle is actually implemented in polynomial time) for making predictions that are multicalibrated with respect to linear functions with error bounds tending to zero at the rate of \(O\left(\frac{1}{T^{1/4}}\right)\). Is a rate of \(O\left(\frac{1}{T^{1/2}}\right)\) for \(L_{2}\)-multicalibration possible? In Section F we show how to obtain this rate for _finite_ classes with a non-oracle-efficient algorithm with running time scaling linearly with \(|\mathcal{F}|\) and error rates scaling logarithmically with \(|\mathcal{F}|\). Achieving this rate in an oracle efficient manner is left as an open question.

## 4 Online (Swap-)Multicalibration to Online (Swap-)Omniprediction

Finally we arrive at the last step of our argument, connecting our bounds for online multicalibration to bounds on online omniprediction. We adapt similar arguments of [1, 2] which connect multicalibration to omniprediction in the batch setting. All the missing proofs in this section can be found in Appendix C. First, we show that the conditional expected value of \(f(x_{t})\) over the empirical distribution of \(\pi_{1:T}\) does not vary substantially when conditioning on \(y_{t}=1\) or \(y_{t}=0\) -- in particular, an approximate version of the following equalities:

\[\left|\mathbb{E}[y_{t}f(x_{t})]-\mathbb{E}[y_{t}]\,\mathbb{E}[f(x_{t})]\right|=\left|\Pr[y_{t}=1]\cdot(\mathbb{E}[f(x_{t})]-\mathbb{E}[f(x_{t})|y_{t}=1])\right|\]
\[=\left|\Pr[y_{t}=0]\cdot(\mathbb{E}[f(x_{t})]-\mathbb{E}[f(x_{t})|y_{t}=0])\right|.\]

The lemma below corresponds to Corollary 5.1 of [1] and Claim 6.3 of [1].

**Lemma 4.1**.: _Fix \(\pi_{1:T}\)._
_If \(\overline{sK}_{1}(\pi_{1:T},\mathcal{F})\leq\alpha\), then the following holds for any \(y\in\mathcal{Y}\) and \(\{f_{p}\}_{p\in\mathcal{P}}\):_

\[\sum_{p\in\mathcal{P}}\frac{|S(\pi_{1:T},p,y)|}{T}\left|\bar{f}_{p}(\pi_{1:T},p,y)-\bar{f}_{p}(\pi_{1:T},p)\right|\leq 2\alpha.\]

_Similarly, if \(\overline{K}(\pi_{1:T},\mathcal{F})\leq\alpha\), then we have for any \(y\in\mathcal{Y}\) and \(f\in\mathcal{F}\)_

\[\sum_{p\in\mathcal{P}}\frac{|S(\pi_{1:T},p,y)|}{T}\left|\bar{f}(\pi_{1:T},p,y)-\bar{f}(\pi_{1:T},p)\right|\leq 2\alpha.\]

We show that for any forecast \(p\in\mathcal{P}\), the difference in loss over rounds in \(S(p)\) between the post-processing function \(k^{\ell}\) defined in Definition 2.1 and any other post-processing function can be bounded in terms of the calibration error. The lemma below essentially corresponds to arguments presented in Corollary 6.2 of [10] and Lemma 6.5 of [10].

**Lemma 4.2**.: _Fix \(\pi_{1:T}\) and loss function \(\ell\). For any \(p\in\mathcal{P}\), \(k:\mathcal{P}\to[0,1]\), we have_

\[\frac{1}{n(\pi_{1:T},p)}\sum_{t\in S(\pi_{1:T},p)}\ell(y_{t},k^{\ell}(p))\leq\frac{1}{n(\pi_{1:T},p)}\sum_{t\in S(\pi_{1:T},p)}\ell(y_{t},k(p))+C_{\ell}|K(\pi_{1:T},p,I)|\]

_where \(I\) is the constant function defined as in Assumption 2.1, \(k^{\ell}\) is defined in Definition 2.1, and \(C_{\ell}=\max_{a}|\ell(0,a)-\ell(1,a)|\)._

Using the two helper lemmas above, we can show that the \(L_{1}\)-swap-multicalibration error approximately bounds the swap omniprediction regret. Write the family of convex loss functions as \(\mathcal{L}_{\text{convex}}\) -- i.e. \(\ell(y,\cdot)\) is convex for each \(y\in\mathcal{Y}\). The theorem below is an adaptation of Theorem 6.1 from [10] to the online setting.

**Theorem 4.1**.: _Fix \(\pi_{1:T}\). Then for any \(\{\ell_{p}\}_{p\in\mathcal{P}}\in\mathcal{L}_{\text{convex}}^{m}\), we have_

\[s\mathcal{O}(\pi_{1:T},\{\ell_{p}\}_{p\in\mathcal{P}},\mathcal{F})\leq\left(\max_{p\in\mathcal{P}}(C_{\ell_{p}}+4D_{\ell_{p}})\right)\cdot\overline{sK}_{1}(\pi_{1:T},\mathcal{F})\quad\text{and}\quad\mathcal{O}(\pi_{1:T},\ell,\mathcal{F})\leq(C_{\ell}+4D_{\ell})\overline{K}_{1}(\pi_{1:T},\mathcal{F})\]

_where \(C_{\ell}\) is as defined in Lemma 4.2 and \(D_{\ell}=\max_{y\in\mathcal{Y},t\in[0,1]}|\ell^{\prime}(y,t)|\) is the bound on the derivative of the loss._

_Therefore, for any \(\mathcal{L}\subseteq\mathcal{L}_{\text{convex}}\)_

\[s\mathcal{O}(\pi_{1:T},\mathcal{F},\mathcal{L})\leq(C_{\mathcal{L}}+4D_{\mathcal{L}})\overline{sK}_{1}(\pi_{1:T},\mathcal{F})\quad\text{and}\quad\mathcal{O}(\pi_{1:T},\mathcal{F},\mathcal{L})\leq(C_{\mathcal{L}}+4D_{\mathcal{L}})\overline{K}_{1}(\pi_{1:T},\mathcal{F})\]

_where \(C_{\mathcal{L}}=\max_{\ell\in\mathcal{L}}C_{\ell}\) and \(D_{\mathcal{L}}=\max_{\ell\in\mathcal{L}}D_{\ell}\)._

Finally, we can apply our online multicalibration bounds to get concrete bounds for online omniprediction. First, we apply our bound for oracle efficient multicalibration which we derived in Corollary 3.1.

**Corollary 4.1**.: _Fix \(B\geq 1\), some family of convex loss functions \(\mathcal{L}\subseteq\mathcal{L}_{\text{convex}}\), and family of predictors \(\mathcal{F}\). Suppose \(\mathcal{F}_{B^{\prime}}\)'s sequential fat shattering dimension at any scale \(\delta\) is bounded where \(B^{\prime}=(1+\sqrt{B})^{2}\)._
_Against any adversary Adv, \(\mathscr{F}_{\text{ContextualSwap}}(\mathcal{F}_{B^{\prime}},\mathcal{A},m,T)\) makes forecasts such that its swap omniprediction regret with respect to \(\mathcal{F}_{B}\) and \(\mathcal{L}\) is bounded with probability \(1-\rho\):_

\[s\mathcal{O}(\pi_{1:T},\mathcal{F}_{B},\mathcal{L})\leq(C_{\mathcal{L}}+4D_{\mathcal{L}})\sqrt{B^{2}\cdot\left(\frac{1}{T}\left(mr_{\mathcal{A}}\left(\frac{T}{m},\mathcal{F}_{B^{\prime}}\right)+\frac{3T}{m}+m\right)+16B^{\prime}mC_{\mathcal{F}_{B^{\prime}}}\sqrt{\frac{\log(\frac{8m}{\rho})}{T}}\right)}\]

_where we recall that \(C_{\ell}=\max_{a}|\ell(0,a)-\ell(1,a)|\) bounds the scale of the loss function \(\ell\), \(C_{\mathcal{L}}=\max_{\ell\in\mathcal{L}}C_{\ell}\) is a uniform bound over the scale of all loss functions \(\ell\in\mathcal{L}\), \(D_{\ell}=\max_{y\in\mathcal{Y},t\in[0,1]}|\ell^{\prime}(y,t)|\) is a bound on the derivative of \(\ell\) in its second argument, and \(D_{\mathcal{L}}=\max_{\ell\in\mathcal{L}}D_{\ell}\) is a uniform bound on the derivative. \(C_{\mathcal{F}_{B^{\prime}}}\) is a finite constant that depends on the sequential fat shattering dimension of \(\mathcal{F}_{B^{\prime}}\)._

Proof.: Lemma 2.1 and Theorem 4.1 give us

\[s\mathcal{O}(\pi_{1:T},\mathcal{F}_{B},\mathcal{L})\leq(C_{\mathcal{L}}+4D_{\mathcal{L}})\overline{sK}_{1}(\pi_{1:T},\mathcal{F})\leq(C_{\mathcal{L}}+4D_{\mathcal{L}})\sqrt{\overline{sK}_{2}(\pi_{1:T},\mathcal{F})}.\]

Applying Corollary 3.1, which states that \(\mathscr{F}_{\text{ContextualSwap}}\) bounds \(\overline{sK}_{2}(\pi_{1:T},\mathcal{F}_{B})\), gives us that with probability \(1-\rho\),

\[s\mathcal{O}(\pi_{1:T},\mathcal{F}_{B},\mathcal{L})\]
\[\leq(C_{\mathcal{L}}+4D_{\mathcal{L}})\sqrt{\overline{sK}_{2}(\pi_{1:T},\mathcal{F})}\]
\[\leq(C_{\mathcal{L}}+4D_{\mathcal{L}})\sqrt{B^{2}\cdot\left(\frac{1}{T}\left(mr_{\mathcal{A}}\left(\frac{T}{m},\mathcal{F}_{B^{\prime}}\right)+\frac{3T}{m}+m\right)+16B^{\prime}mC_{\mathcal{F}_{B^{\prime}}}\sqrt{\frac{\log(\frac{8m}{\rho})}{T}}\right)}.\]

where the step from \(\overline{sK}_{1}\) to \(\sqrt{\overline{sK}_{2}}\) uses the fact that \(\sqrt{\cdot}\) is a concave function.

We now have a concrete bound for oracle efficient online omniprediction, in terms of the regret bound of a black box online squared error regression algorithm for \(\mathcal{F}\). We can plug in the bound of [1] for online linear regression, quoted in Theorem 3.3, to get rates specifically for omniprediction with respect to linear functions.

**Corollary 4.2**.: _Fix some family of convex loss functions \(\mathcal{L}\subseteq\mathcal{L}_{convex}\). Let \(\mathcal{F}\) be the set of all \(d\)-dimensional linear functions, and let \(\mathcal{F}_{B}\) be the set of all such functions with parameter norm \(||\theta||^{2}\leq B^{\prime}\) where \(B^{\prime}=(1+\sqrt{B})^{2}\)._
_Then against any adversary who chooses a sequence of \(d\) dimensional examples \((x_{t},y_{t})_{t=1}^{T}\) with \(||x_{t}||\leq 1\), letting \(\mathcal{A}\) be the online forecasting algorithm for linear functions from Theorem 3.3 and letting \(m=T^{1/4}\), \(\mathscr{F}_{\text{ContextualSwap}}(\mathcal{F}_{B^{\prime}},\mathcal{A},m,T)\) guarantees with probability \(1-\rho\) that the swap-omniprediction regret with respect to \(\mathcal{F}_{B}\) and \(\mathcal{L}\) is bounded:_

\[s\mathcal{O}(\pi_{1:T},\mathcal{F}_{B},\mathcal{L})\leq\tilde{O}\left((C_{\mathcal{L}}+4D_{\mathcal{L}})\cdot\sqrt{dB^{3}}\left(\ln\left(\frac{1}{\rho}\right)\right)^{1/4}T^{-1/8}\right).\]

We contrast this with the bound that we can obtain for finite classes \(\mathcal{F}\) with our (non-oracle-efficient) online multicalibration algorithm from Theorem F.1.

**Corollary 4.3**.: _Fix some family of convex loss functions \(\mathcal{L}\subseteq\mathcal{L}_{convex}\) and family of predictors \(\mathcal{F}_{B}\subseteq\mathcal{F}\) whose outputs' squared values are bounded by \(B\). Against any adversary Adv, \(\mathscr{F}_{AMF}(\mathcal{F}_{B})\) makes forecasts such that its omniprediction regret with respect to \(\mathcal{F}_{B}\) and \(\mathcal{L}\) is bounded in expectation as:_

\[\mathop{\mathbb{E}}_{\pi_{1:T}}\left[\mathcal{O}(\pi_{1:T},\mathcal{F}_{B},\mathcal{L})\right]\leq(C_{\mathcal{L}}+4D_{\mathcal{L}})\sqrt{\frac{3B\log(T)+4\sqrt{B\ln(|\mathcal{F}_{B}|)}+\sqrt{B}}{\sqrt{T}}}=O\left(\frac{\sqrt{\log(T)}}{T^{1/4}}\right).\]

_where we recall that \(C_{\ell}=\max_{a}|\ell(0,a)-\ell(1,a)|\) bounds the scale of the loss function \(\ell\), \(C_{\mathcal{L}}=\max_{\ell\in\mathcal{L}}C_{\ell}\) is a uniform bound over the scale of all loss functions \(\ell\in\mathcal{L}\), \(D_{\ell}=\max_{y\in\mathcal{Y},t\in[0,1]}|\ell^{\prime}(y,t)|\) is a bound on the derivative of \(\ell\) in its second argument, and \(D_{\mathcal{L}}=\max_{\ell\in\mathcal{L}}D_{\ell}\) is a uniform bound on the derivative._

Proof.: Fix \(\pi_{1:T}\). Lemma 2.1 and Theorem 4.1 give us

\[\mathcal{O}(\pi_{1:T},\mathcal{F}_{B},\mathcal{L})\leq(C_{\mathcal{L}}+4D_{\mathcal{L}})\overline{K}_{1}(\pi_{1:T},\mathcal{F}_{B})\leq(C_{\mathcal{L}}+4D_{\mathcal{L}})\sqrt{\overline{K}_{2}(\pi_{1:T},\mathcal{F}_{B})}.\]

Applying Theorem F.1, which states that \(\mathscr{F}_{AMF}\) bounds \(\overline{K}_{2}(\pi_{1:T},\mathcal{F}_{B})\) in expectation, gives us

\[\mathop{\mathbb{E}}_{\pi_{1:T}}\left[\mathcal{O}(\pi_{1:T},\mathcal{F}_{B},\mathcal{L})\right]\leq\mathop{\mathbb{E}}_{\pi_{1:T}}\left[(C_{\mathcal{L}}+4D_{\mathcal{L}})\sqrt{\overline{K}_{2}(\pi_{1:T},\mathcal{F}_{B})}\right]\]
\[\leq(C_{\mathcal{L}}+4D_{\mathcal{L}})\sqrt{\mathop{\mathbb{E}}_{\pi_{1:T}}\left[\overline{K}_{2}(\pi_{1:T},\mathcal{F}_{B})\right]}\]
\[\leq(C_{\mathcal{L}}+4D_{\mathcal{L}})\sqrt{\frac{3B\log(T)+4\sqrt{B\ln(|\mathcal{F}_{B}|)}+\sqrt{B}}{\sqrt{T}}}.\]

where the second to last inequality follows from the fact that \(\sqrt{\cdot}\) is a concave function.

A high probability version for the \(\ell_{2}\)-swap-multicalibration via the AMF approach can be obtained by appealing to the high probability version of the AMF approach guarantee (Theorem A.2 in [13]): this will incur an additional additive error of \(O\left(\sqrt{\frac{\ln(|\mathcal{F}|)}{T}}\right)\), so the overall rate remains the same.
Observe that this bound depends only logarithmically on \(|\mathcal{F}|\), and so although it applies only to finite classes \(\mathcal{F}\), we could inefficiently apply it to the class of all appropriately discretized linear functions \(\mathcal{F}_{B}\), which would obtain a bound that was comparable to the oracle efficient bound we obtain in Corollary 4.2 in terms of its dependence on \(B\) and \(d\), but with an improved dependence on \(T\).

## 5 Tightness of Our Results: A Separation Between Swap Omniprediction and Omniprediction

In this section we interrogate the extent to which our rates can be improved. Can we hope for \(O(\sqrt{T})\) rates for online omniprediction? Can we hope for them using our current set of techniques? First, we show that \(O(\sqrt{T})\) rates _are_ possible for (non-swap) omniprediction -- at least for finite, binary hypothesis classes. Next, we show a barrier to obtaining \(O(\sqrt{T})\) using our current techniques, which actually give swap omniprediction: it is _not_ possible to obtain \(O(\sqrt{T})\) rates in the online setting for _swap_ omniprediction. This establishes a formal separation between omniprediction and swap omniprediction in the online setting. The algorithm we give corresponding to our omniprediction upper bound has running time that is polynomial in \(|\mathcal{F}|\), however: we leave the problem of finding an oracle efficient algorithm obtaining these optimal omniprediction rates as our main open question.

### An \(O(\sqrt{T})\) Upper Bound for Omniprediction

In this section, we give a (non-oracle-efficient) \(O(\sqrt{T})\) upper bound for omniprediction for finite binary classes \(\mathcal{F}\) that does not go through multicalibration, which together with our lower bound in Section 5.2 gives a separation between omniprediction and swap omniprediction. Our proof combines the "AMF" framework of [13] with a recent characterization of binary scoring rules by [13]. First we recall the Online Minimax Optimization framework of [13]:

**Definition 5.1** (Appendix A.3 of [13]).: _A Learner plays against an Adversary over rounds \(t\in[T]\). In each of the rounds, the Learner accumulates a \(d\)-dimensional loss vector for some \(d\geq 1\). Each round's loss vector lies in \([-C,C]^{d}\) for some constant \(C>0\). In each round \(t\in[T]\), the interaction between the Learner and the Adversary proceeds as follows:_

1. _Before round_ \(t\)_, the Adversary selects and reveals to the Learner an_ environment _comprising:_
   1. _The Learner's strategy space, a finite set_ \(\Theta_{t}\)_, and the Adversary's strategy space, a convex compact action set_ \(\mathcal{Z}_{t}\)_._
   2. _A continuous vector loss function_ \(\ell_{t}(\cdot,\cdot):\Theta_{t}\times\mathcal{Z}_{t}\to[-C,C]^{d}\) _where_ \(\ell_{t}^{j}:\Theta_{t}\times\mathcal{Z}_{t}\to[-C,C]\) _is concave in the second argument for each_ \(j\in[d]\)_. Note that because_ \(\Theta_{t}\) _is finite and the Learner plays a mixed strategy over_ \(\Theta_{t}\)_, the loss function is already linearized in terms of the Learner._
2. _The Learner selects a randomized strategy_ \(\tilde{\theta}_{t}\in\Delta(\Theta_{t})\)_._
3. _The Adversary observes the Learner's selection_ \(\tilde{\theta}_{t}\) _and responds with some_ \(z_{t}\in\mathcal{Z}_{t}\)_._
4. _The Learner plays_ \(\theta_{t}\sim\tilde{\theta}_{t}\) _and suffers (and observes) the loss vector_ \(\ell_{t}(\theta_{t},z_{t})\)_._

_The Learner's objective is to minimize the value of the maximum dimension of the accumulated loss vector after \(T\) rounds: i.e._
\(\max_{j\in[d]}\sum_{t=1}^{T}\ell_{t}^{j}(\theta_{t},z_{t})\)._

**Definition 5.2** ([11]).: _The Adversary-Moves-First (AMF) value of the game defined by the environment \((\Theta_{t},\mathcal{Z}_{t},\ell_{t})\) at round \(t\) is_

\[w_{t}^{A}=\sup_{z_{t}\in\mathcal{Z}_{t}}\min_{\tilde{\theta}_{t}\in\Delta\Theta_{t}}\left(\max_{j\in[d]}\mathop{\mathbb{E}}_{\theta_{t}\sim\tilde{\theta}_{t}}[\ell_{t}^{j}(\theta_{t},z_{t})]\right).\]

**Definition 5.3** ([11]).: _Given \(\{(\Theta_{\tau},\mathcal{Z}_{\tau},\ell_{\tau}),\theta_{\tau},z_{\tau}\}_{\tau=1}^{t}\), we define the Learner's Adversary-Moves-First (AMF) regret for the \(j\)th dimension at round \(t\in[T]\) as_

\[R_{t}^{j}(\{(\Theta_{\tau},\mathcal{Z}_{\tau},\ell_{\tau}),\theta_{\tau},z_{\tau}\}_{\tau=1}^{t})=\sum_{\tau=1}^{t}\left(\ell_{\tau}^{j}(\theta_{\tau},z_{\tau})-w_{\tau}^{A}\right).\]

_The overall AMF regret is defined as \(R_{t}(\{(\Theta_{\tau},\mathcal{Z}_{\tau},\ell_{\tau}),\theta_{\tau},z_{\tau}\}_{\tau=1}^{t})=\max_{j\in[d]}R_{t}^{j}(\{(\Theta_{\tau},\mathcal{Z}_{\tau},\ell_{\tau}),\theta_{\tau},z_{\tau}\}_{\tau=1}^{t})\). When it is obvious from the context, we write \(R_{t}^{j}\) and \(R_{t}\)._

```
1: for \(t=1,\dots,T\) do
2:   AMF Learner observes the adversarially chosen \(\Theta_{t}\), \(\mathcal{Z}_{t}\), and \(\ell_{t}\).
3:   Let
     \[\chi_{t}^{j}:=\frac{\exp(\eta\sum_{s=1}^{t-1}\ell_{s}^{j}(\theta_{s},z_{s}))}{\sum_{i\in[d]}\exp(\eta\sum_{s=1}^{t-1}\ell_{s}^{i}(\theta_{s},z_{s}))}\]
4:   AMF Learner selects a mixed strategy \(\tilde{\theta}_{t}\) where
     \[\tilde{\theta}_{t}\in\mathop{\mathrm{argmin}}_{\tilde{\theta}\in\Delta(\Theta_{t})}\max_{z\in\mathcal{Z}_{t}}\sum_{i\in[d]}\chi_{t}^{i}\mathop{\mathbb{E}}_{\theta\sim\tilde{\theta}}[\ell_{t}^{i}(\theta,z)] \tag{5}\]
5:   AMF Adversary chooses \(z_{t}\in\mathcal{Z}_{t}\).
6:   AMF Learner plays \(\theta_{t}\sim\tilde{\theta}_{t}\).
```

**Algorithm 2** Algorithm 2 from [11]

**Theorem 5.1** (Theorem A.1 and A.2 of [11]).: _For any \(T\geq\ln(d)\), the Learner that plays according to Algorithm 2 of [11] against any Adversary, with an appropriately chosen learning rate, obtains AMF regret bounded as follows:_

\[\mathop{\mathbb{E}}_{\{\theta_{t}\}_{t=1}^{T}}[R_{T}]\leq 4C\sqrt{T\ln d}.\]

_And with probability \(1-\rho\), it achieves_

\[R_{T}\leq 8C\sqrt{T\ln\left(\frac{d}{\rho}\right)}.\]

Next, we construct a multiobjective optimization problem within this framework that will imply \(O(\sqrt{T})\) omniprediction bounds by using a structural result about proper loss functions recently proven by [13]. For any \(v\in[0,1]\), let \(\ell_{v}\) be a loss function defined as follows:

\[\ell_{v}(y,\hat{p})=(v-y)\cdot\operatorname{sign}(\hat{p}-v)\]

where \(\operatorname{sign}(x)=1\) if \(x\geq 0\) and \(-1\) otherwise. Let us also write \(\mathcal{L}_{\mathcal{V}}=\{\ell_{v}:v\in[0,1]\}\) to denote the collection of such loss functions over \(v\in[0,1]\) and \(\tilde{\mathcal{L}}_{\mathcal{V}}^{m^{\prime}}=\{\ell_{v}:v\in[\frac{1}{m^{\prime}}]\}\) to denote the discretized version of it, where we describe how to set \(m^{\prime}\) later. First, we note that \(\ell_{v}(y,q)\) is a proper scoring rule:

**Lemma 5.1**.: _For any \(v\in[0,1]\) and \(p\),_

\[p\in\operatorname*{argmin}_{a\in[0,1]}\ \operatorname*{\mathbb{E}}_{y\sim Ber(p)}[\ell_{v}(y,a)].\]

Proof.: Note that

\[\operatorname*{\mathbb{E}}_{y\sim\operatorname{Ber}(p)}[\ell_{v}(y,a)]=(v-p)\cdot\operatorname{sign}(a-v).\]

A prediction \(a\) minimizes the expected loss if the sign of \(a-v\) is opposite that of \(v-p\).
\(a=p\) satisfies this criterion.

As a result of \(\ell_{v}\) being a proper scoring rule, the forecaster's omniprediction regret simplifies to

\[\mathcal{O}(\pi_{1:T},\mathcal{L}_{\mathcal{V}},\mathcal{F})=\max_{f\in\mathcal{F}}\max_{\ell_{v}\in\mathcal{L}_{\mathcal{V}}}\frac{1}{T}\sum_{t=1}^{T}\ell_{v}(y_{t},k_{\ell_{v}}(\hat{p}_{t}))-\ell_{v}(y_{t},f(x_{t}))\]
\[=\max_{f\in\mathcal{F}}\max_{\ell_{v}\in\mathcal{L}_{\mathcal{V}}}\frac{1}{T}\sum_{t=1}^{T}\ell_{v}(y_{t},\hat{p}_{t})-\ell_{v}(y_{t},f(x_{t})).\]

We now argue that omniprediction with respect to \(\tilde{\mathcal{L}}_{\mathcal{V}}^{m^{\prime}}\) implies omniprediction with respect to \(\mathcal{L}_{\mathcal{V}}\).

**Lemma 5.2**.: _Suppose \(m^{\prime}>m\). Fix some family of boolean predictors \(\mathcal{F}:\mathcal{X}\to\{0,1\}\). We have that for all transcripts \(\pi_{1:T}\)_

\[\mathcal{O}(\pi_{1:T},\mathcal{L}_{\mathcal{V}},\mathcal{F})\leq\mathcal{O}(\pi_{1:T},\tilde{\mathcal{L}}_{\mathcal{V}}^{m^{\prime}},\mathcal{F})+\frac{2T}{m^{\prime}}.\]

Proof.: Fix \(f\in\mathcal{F}\), \(v\in[0,1]\), and \(\pi_{1:T}\). Suppose \(v\in(\frac{i}{m},\frac{i+1}{m}]\). Then among \([\frac{1}{m^{\prime}}]\), choose \(v^{\prime}\) so that it falls within \((\frac{i}{m},\frac{i+1}{m}]\) and is closest to \(v\):

\[v^{\prime}=\operatorname*{argmin}_{a\in[\frac{1}{m^{\prime}}],a\in(\frac{i}{m},\frac{i+1}{m}]}|v-a|.\]

This guarantees that there doesn't exist any \(\hat{p}\in[\frac{1}{m}]\) that falls between \(v\) and \(v^{\prime}\), meaning for any \(\hat{p}\in[\frac{1}{m}]\), \(v\) and \(v^{\prime}\) always fall on the same side relative to \(\hat{p}\). Hence, we have for any \(\hat{p}\in[\frac{1}{m}]\)

\[\operatorname{sign}(\hat{p}-v)=\operatorname{sign}(\hat{p}-v^{\prime}).\]

Therefore, we have for any \(\hat{p}\in[1/m]\)

\[|\ell_{v}(y,\hat{p})-\ell_{v^{\prime}}(y,\hat{p})|\leq|\operatorname{sign}(\hat{p}-v)(v-v^{\prime})|\leq\frac{1}{m^{\prime}}.\]

Writing \(S(p,b)=\{t\in[T]:\hat{p}_{t}=p,f(x_{t})=b\}\), we can show that

\[\sum_{t=1}^{T}\ell_{v}(y_{t},\hat{p}_{t})-\ell_{v}(y_{t},f(x_{t}))\]
\[=\sum_{p\in\mathcal{P}}\sum_{b\in\{0,1\}}\sum_{t\in S(p,b)}\ell_{v}(y_{t},p)-\ell_{v}(y_{t},b)\]
\[\leq\sum_{p\in\mathcal{P}}\sum_{b\in\{0,1\}}\sum_{t\in S(p,b)}\ell_{v^{\prime}}(y_{t},p)-\ell_{v^{\prime}}(y_{t},b)+\frac{2T}{m^{\prime}}.\qed\]

Therefore, it suffices to bound the omniprediction regret with respect to \(\tilde{\mathcal{L}}_{\mathcal{V}}^{m^{\prime}}\). Now, define the following loss function \(\ell_{t}\) for the AMF approach in the following manner: we have a coordinate for each \(v\in[\frac{1}{m^{\prime}}]\), \(f\in\mathcal{F}\), \(a\in[\frac{1}{m}]\) and \(b\in\{0,1\}\) where

\[\ell_{t}^{f,a,b,v}(\theta_{t},z_{t})=\mathbb{1}\left[f(x_{t})=b\right]\cdot\left(\mathop{\mathbb{E}}_{y_{t}\sim z_{t}}[\ell_{v}(y_{t},\theta_{t})-\ell_{v}(y_{t},a)]\right).\]

```
1: for \(t=1,\dots,T\) do
2:   Forecaster \(\mathscr{F}_{\mathcal{V}}\) observes the feature vector \(x_{t}\in\mathcal{X}\).
3:   Construct the AMF Learner's environment as \(\Theta_{t}=\mathcal{P}\), \(\mathcal{Z}_{t}=\Delta(\mathcal{Y})\), and \(\ell_{t}\) as defined in (9).
4:   After observing the environment \((\Theta_{t},\mathcal{Z}_{t},\ell_{t})\), the AMF Learner chooses a random strategy \(\tilde{\theta}_{t}\) according to Algorithm 2 of [10].
5:   Adversary Adv chooses \(p_{t}\in[0,1]\), which is the probability with which \(y_{t}=1\).
6:   AMF Learner chooses \(\theta_{t}\in\mathcal{P}\) by sampling from \(\tilde{\theta}_{t}\in\Delta(\mathcal{P})\).
7:   Forecaster \(\mathscr{F}_{\mathcal{V}}\) makes the forecast \(\hat{p}_{t}=\theta_{t}\) and observes \(y_{t}\sim p_{t}\).
8:   Set the action played by the AMF Adversary's strategy as \(z_{t}=y_{t}\) and have the AMF Learner suffer \(\ell_{t}(\theta_{t},z_{t})\).
```

**Algorithm 3** Forecaster \(\mathscr{F}_{\mathcal{V}}(\mathcal{F})\)

We first show how to bound the Adversary-Moves-First value of the game for the loss function defined above.

**Lemma 5.3**.:

\[w_{t}^{A}=\max_{z\in\Delta(\mathcal{Y})}\min_{\hat{p}\in[\frac{1}{m}]}\max_{f\in\mathcal{F},a\in[\frac{1}{m}],b\in\{0,1\},v\in[\frac{1}{m^{\prime}}]}\mathbb{1}\left[f(x_{t})=b\right]\cdot\left(\mathop{\mathbb{E}}_{y\sim z}[\ell_{v}(y,\hat{p})-\ell_{v}(y,a)]\right)\leq\frac{2}{m}.\]

Proof.: We need to bound

\[\max_{z\in\Delta(\mathcal{Y})}\min_{\hat{p}\in[\frac{1}{m}]}\max_{f\in\mathcal{F},a\in[\frac{1}{m}],b\in\{0,1\},v\in[\frac{1}{m^{\prime}}]}\mathbb{1}\left[f(x_{t})=b\right]\cdot\left(\mathop{\mathbb{E}}_{y\sim z}[\ell_{v}(y,\hat{p})-\ell_{v}(y,a)]\right).\]

Given any \(z\in\Delta(\mathcal{Y})\), consider

\[\hat{p}=\operatorname*{argmin}_{a\in[\frac{1}{m}]}|a-\operatorname*{\mathbb{E}}_{y\sim z}[y]|.\]

First, we show that for any \(v\in[1/m^{\prime}]\) and \(a\in[1/m]\),

\[\operatorname*{\mathbb{E}}_{y\sim z}[\ell_{v}(y,\hat{p})-\ell_{v}(y,a)]\leq\frac{2}{m}.\]

For any \(v\in[\frac{1}{m^{\prime}}]\) such that \(|v-\operatorname*{\mathbb{E}}_{y\sim z}[y]|\leq\frac{1}{m}\), it is immediate that

\[|(v-\operatorname*{\mathbb{E}}_{y\sim z}[y])\cdot\operatorname{sign}(\hat{p}-v)|\leq\frac{1}{m}\quad\text{and}\quad|(v-\operatorname*{\mathbb{E}}_{y\sim z}[y])\cdot\operatorname{sign}(a-v)|\leq\frac{1}{m}.\]

Let us now focus on \(v\in[\frac{1}{m^{\prime}}]\) such that \(|v-\mathbb{E}_{y\sim z}[y]|>\frac{1}{m}\). Because \(\ell_{v}\) is a proper scoring rule, we have

\[\operatorname*{\mathbb{E}}_{y\sim z}[\ell_{v}(y,\hat{p})-\ell_{v}(y,a)]\leq\operatorname*{\mathbb{E}}_{y\sim z}\left[\ell_{v}(y,\hat{p})-\ell_{v}\left(y,\operatorname*{\mathbb{E}}_{y\sim z}[y]\right)\right].\]

Note that \(\hat{p}\) and \(\mathbb{E}_{y\sim z}[y]\) both fall on the same side with respect to \(v\), as \(|v-\mathbb{E}_{y\sim z}[y]|>\frac{1}{m}\) but \(|\hat{p}-\mathbb{E}_{y\sim z}[y]|\leq\frac{1}{m}\). Therefore,

\[\operatorname{sign}(\hat{p}-v)=\operatorname{sign}\left(\operatorname*{\mathbb{E}}_{y\sim z}[y]-v\right),\text{ implying }\ell_{v}(y,\hat{p})=\ell_{v}\left(y,\operatorname*{\mathbb{E}}_{y\sim z}[y]\right).\]

Therefore, given any \(z\in\Delta(\mathcal{Y})\), the \(\hat{p}\) constructed as above satisfies \(\mathbb{E}_{y\sim z}[\ell_{v}(y,\hat{p})-\ell_{v}(y,a)]\leq\frac{2}{m}\) for every \(v\in[\frac{1}{m^{\prime}}]\) and \(a\in[\frac{1}{m}]\), which bounds \(w_{t}^{A}\) by \(\frac{2}{m}\).

Invoking Theorem 5.1 with Lemma 5.3, which bounds the AMF value, yields the following:

**Theorem 5.2**.: _Suppose \(m^{\prime}>m\)._
_Forecaster \(\mathscr{F}_{\mathcal{V}}(\mathcal{F})\) as described in Algorithm 3 achieves_

\[\mathop{\mathbb{E}}_{\{\hat{p}_{t}\}_{t=1}^{T}}\left[\max_{v\in[\frac{1}{m^{\prime}}],f\in\mathcal{F},a\in[\frac{1}{m}],b\in\{0,1\}}\sum_{t=1}^{T}\mathbb{1}[f(x_{t})=b]\cdot(\ell_{v}(y_{t},\hat{p}_{t})-\ell_{v}(y_{t},a))\right]\leq 8\sqrt{T\ln(2|\mathcal{F}|{m^{\prime}}^{2})}+\frac{2T}{m}.\]

_and with probability \(1-\rho\),_

\[\max_{v\in[\frac{1}{m^{\prime}}],f\in\mathcal{F},a\in[\frac{1}{m}],b\in\{0,1\}}\sum_{t=1}^{T}\mathbb{1}[f(x_{t})=b]\cdot(\ell_{v}(y_{t},\hat{p}_{t})-\ell_{v}(y_{t},a))\leq 16\sqrt{T\ln\left(\frac{2|\mathcal{F}|{m^{\prime}}^{2}}{\rho}\right)}+\frac{2T}{m}.\]

Proof.: Note that for each coordinate \(f\in\mathcal{F},v\in[\frac{1}{m^{\prime}}],a\in[\frac{1}{m}],b\in\{0,1\}\), the loss function \(\ell_{t}\) is always bounded:

\[\left|\mathbb{1}[f(x_{t})=b]\cdot\left(\mathop{\mathbb{E}}_{y_{t}\sim z_{t}}[\ell_{v}(y_{t},\theta_{t})-\ell_{v}(y_{t},a)]\right)\right|\leq 2.\]

Theorem 5.1 and Lemma 5.3 together yield

\[\mathop{\mathbb{E}}_{\{\hat{p}_{t}\}_{t=1}^{T}}\left[\max_{v\in[\frac{1}{m^{\prime}}],f\in\mathcal{F},a\in[\frac{1}{m}],b\in\{0,1\}}\sum_{t=1}^{T}\mathbb{1}[f(x_{t})=b]\cdot(\ell_{v}(y_{t},\hat{p}_{t})-\ell_{v}(y_{t},a))\right]\leq 8\sqrt{T\ln(2|\mathcal{F}|{m^{\prime}}^{2})}+\frac{2T}{m}.\]

The high probability bound follows the same way.

One immediate corollary is that we can achieve \(O(\sqrt{T})\) omniprediction for the set of loss functions \(\mathcal{L}_{\mathcal{V}}\) for which swap-omniprediction admits a lower bound of \(\Omega(T^{0.528})\) -- see Section 5.2.

**Corollary 5.1**.: _Suppose \(m^{\prime}>m\). Fix some family of boolean functions \(\mathcal{F}:\mathcal{X}\to\{0,1\}\). Then, Forecaster \(\mathscr{F}_{\mathcal{V}}(\mathcal{F})\) as described in Algorithm 3 achieves with probability \(1-\rho\) over the randomness of the transcript_

\[\mathcal{O}(\pi_{1:T},\mathcal{L}_{\mathcal{V}},\mathcal{F})\leq\frac{1}{T}\left(16\sqrt{T\ln\left(\frac{2|\mathcal{F}|{m^{\prime}}^{2}}{\rho}\right)}+\frac{2T}{m}+\frac{2T}{m^{\prime}}\right).\]

_Setting \(m=\sqrt{T}\) and \(m^{\prime}=2m\), we have_

\[\mathcal{O}(\pi_{1:T},\mathcal{L}_{\mathcal{V}},\mathcal{F})=\tilde{O}(T^{-0.5}).\]

Proof.: For a given transcript,

\[\max_{\ell_{v}\in\tilde{\mathcal{L}}_{\mathcal{V}}^{m^{\prime}},f\in\mathcal{F}}\sum_{t=1}^{T}\ell_{v}(y_{t},\hat{p}_{t})-\ell_{v}(y_{t},f(x_{t}))\]
\[=\max_{\ell_{v}\in\tilde{\mathcal{L}}_{\mathcal{V}}^{m^{\prime}},f\in\mathcal{F}}\sum_{b\in\{0,1\}}\sum_{t\in[T]:f(x_{t})=b}\ell_{v}(y_{t},\hat{p}_{t})-\ell_{v}(y_{t},b)\]
\[\leq 2\max_{\ell_{v}\in\tilde{\mathcal{L}}_{\mathcal{V}}^{m^{\prime}},f\in\mathcal{F},b\in\{0,1\}}\sum_{t\in[T]:f(x_{t})=b}\ell_{v}(y_{t},\hat{p}_{t})-\ell_{v}(y_{t},b).\qed\]

In fact, we can further strengthen the above result with the following theorem from [13].

**Theorem 5.3** (Theorem 8 of [13]).: _For any proper loss function \(\ell\) bounded in \([-1,1]\),_

\[\sum_{t=1}^{T}\ell(y_{t},\hat{p}_{t})-\ell(y_{t},\beta)\leq 2\max_{\ell_{v}\in\mathcal{L}_{\mathcal{V}}}\left(\sum_{t=1}^{T}\ell_{v}(y_{t},\hat{p}_{t})-\ell_{v}\left(y_{t},\beta\right)\right).\]

_where \(\beta=\frac{1}{T}\sum_{t=1}^{T}y_{t}\)._

Because our regret guarantee in Theorem 5.2 holds conditionally on the values \(f(x_{t})\), we can use Theorem 5.3 from [13] to show that we can in fact guarantee omniprediction with respect to proper losses: i.e.
\(p\in\operatorname*{argmin}_{\hat{p}}\operatorname*{\mathbb{E}}_{y\sim\operatorname*{Ber}(p)}[\ell(y,\hat{p})]\). In fact, in Corollary 5.2, we argue how to generalize to a more general set of losses.

**Theorem 5.4**.: _Suppose \(m^{\prime}>m\). Fix some finite family of boolean functions \(\mathcal{F}:\mathcal{X}\to\{0,1\}\). Forecaster \(\mathscr{F}_{\mathcal{V}}(\mathcal{F})\) as described in Algorithm 3 achieves the following with probability \(1-\rho\): for any proper loss \(\ell\) (bounded in \([-1,1]\)) and any \(f\in\mathcal{F}\),_

\[\sum_{t=1}^{T}\ell(y_{t},k_{\ell}(\hat{p}_{t}))-\ell(y_{t},f(x_{t}))\leq 32\sqrt{T\ln\left(\frac{2|\mathcal{F}|{m^{\prime}}^{2}}{\rho}\right)}+\frac{4T}{m}\]

_Setting \(m=\sqrt{T}\) and \(m^{\prime}=2m\) guarantees_

\[\sum_{t=1}^{T}\ell(y_{t},k_{\ell}(\hat{p}_{t}))-\ell(y_{t},f(x_{t}))=\tilde{O}(\sqrt{T}).\]

Proof.: Fix any proper scoring rule \(\ell\) and \(f\in\mathcal{F}\). Write \(R(b)=\{t\in[T]:f(x_{t})=b\}\). For each \(b\in\{0,1\}\), we also write \(\beta_{b}=\frac{1}{|R(b)|}\sum_{t\in R(b)}y_{t}\) and \(\tilde{\beta}_{b}=\operatorname*{argmin}_{a\in[\frac{1}{m}]}|a-\beta_{b}|\). As noted in the proof of Lemma 5.3, we can show that

\[\left|\operatorname*{\mathbb{E}}_{y\sim\operatorname*{Ber}(\beta_{b})}[\ell_{v}(y,\beta_{b})]-\operatorname*{\mathbb{E}}_{y\sim\operatorname*{Ber}(\beta_{b})}[\ell_{v}(y,\tilde{\beta}_{b})]\right|\leq\frac{2}{m} \tag{6}\]

If \(|v-\beta_{b}|\leq\frac{1}{m}\), we have \(|\operatorname*{\mathbb{E}}_{y\sim\operatorname*{Ber}(\beta_{b})}[\ell_{v}(y,\beta_{b})]|\leq\frac{1}{m}\) and \(|\operatorname*{\mathbb{E}}_{y\sim\operatorname*{Ber}(\beta_{b})}[\ell_{v}(y,\tilde{\beta}_{b})]|\leq\frac{1}{m}\). Otherwise, \(\operatorname*{\mathbb{E}}_{y\sim\operatorname*{Ber}(\beta_{b})}[\ell_{v}(y,\beta_{b})]=\operatorname*{\mathbb{E}}_{y\sim\operatorname*{Ber}(\beta_{b})}[\ell_{v}(y,\tilde{\beta}_{b})]\) as \(\operatorname{sign}(\beta_{b}-v)=\operatorname{sign}(\tilde{\beta}_{b}-v)\); note that \(|v-\beta_{b}|>\frac{1}{m}\) but \(|\beta_{b}-\tilde{\beta}_{b}|\leq\frac{1}{m}\).

Then, we can show that with probability \(1-\rho\) (below, \(\ell_{v}\in\mathcal{L}_{\mathcal{V}}\) denotes the loss attaining the maximum in Theorem 5.3, applied to each subsequence \(R(b)\)):

\[\sum_{t=1}^{T}\ell(y_{t},k_{\ell}(\hat{p}_{t}))-\ell(y_{t},f(x_{t}))\]
\[=\sum_{b\in\{0,1\}}\sum_{t\in R(b)}\ell(y_{t},\hat{p}_{t})-\ell(y_{t},b)\]
\[\leq\sum_{b\in\{0,1\}}\sum_{t\in R(b)}\ell(y_{t},\hat{p}_{t})-\ell(y_{t},\beta_{b})\] (\(\ell\) is a proper scoring rule)
\[\leq 2\left(\sum_{b\in\{0,1\}}\sum_{t\in R(b)}\ell_{v}(y_{t},\hat{p}_{t})-\ell_{v}(y_{t},\beta_{b})\right)\] (Theorem 5.3)
\[\leq 2\left(\sum_{b\in\{0,1\}}\sum_{t\in R(b)}\ell_{v}(y_{t},\hat{p}_{t})-\ell_{v}(y_{t},\tilde{\beta}_{b})\right)+\frac{2T}{m}\] (Equation (6))
\[\leq 32\sqrt{T\ln\left(\frac{2|\mathcal{F}|{m^{\prime}}^{2}}{\rho}\right)}+\frac{2T}{m}+\frac{2T}{m}.\] (Theorem 5.2)

Now, we show how to generalize the above omniprediction result to a more general family of losses: all bi-monotone losses \(\ell\). We say a loss function \(\ell\) is bi-monotone if \(\ell(1,\cdot)\) is monotonically non-increasing and \(\ell(0,\cdot)\) is monotonically non-decreasing, so that the prediction \(1\) is optimal when \(y=1\) and the prediction \(0\) is optimal when \(y=0\).
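Before stating the corollary, the following minimal Python sketch illustrates numerically why the reduction below works: for any loss \(\ell\), the post-processed loss \(\tilde{\ell}(y,\hat{p})=\ell(y,k_{\ell}(\hat{p}))\) is proper, because \(k_{\ell}(p)\) is by definition the action minimizing the expected loss under \(\operatorname{Ber}(p)\). The brute-force action grid and the example loss are our illustrative assumptions, not constructions from the paper.

```python
import numpy as np

GRID = np.linspace(0.0, 1.0, 201)

def k_ell(ell, p):
    """Brute-force optimal post-processing k_ell(p) over a finite action grid."""
    expected = (1.0 - p) * ell(0, GRID) + p * ell(1, GRID)
    return GRID[int(np.argmin(expected))]

def tilde_ell(ell, y, p):
    """The post-processed loss tilde_ell(y, p) = ell(y, k_ell(p))."""
    return ell(y, k_ell(ell, p))

# Sanity check: truthful reporting p minimizes the expected post-processed loss.
ell = lambda y, a: (y - a) ** 2 + 0.3 * abs(a - 0.2)   # an arbitrary bounded loss
rng = np.random.default_rng(0)
for p in rng.random(50):
    truthful = (1 - p) * tilde_ell(ell, 0, p) + p * tilde_ell(ell, 1, p)
    best = min((1 - p) * tilde_ell(ell, 0, q) + p * tilde_ell(ell, 1, q) for q in GRID)
    assert truthful <= best + 1e-9
```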
**Corollary 5.2**.: _With \(m=\sqrt{T}\) and \(m^{\prime}=2m\), Forecaster \(\mathscr{F}_{\mathcal{V}}(\mathcal{F})\) as described in Algorithm 3 achieves the following with probability \(1-\rho\) over the randomness of the transcript: for any bi-monotone loss \(\ell\) bounded in \([-1,1]\)_

\[\sum_{t=1}^{T}\ell(y_{t},k_{\ell}(\hat{p}_{t}))-\ell(y_{t},f(x_{t}))\leq\tilde{O}(\sqrt{T}).\]

Proof.: Note that for any bi-monotone loss function \(\ell\), we have \(\ell(y,k_{\ell}(0))=\ell(y,0)\) and \(\ell(y,k_{\ell}(1))=\ell(y,1)\). Writing \(\tilde{\ell}(y,\hat{p})=\ell(y,k_{\ell}(\hat{p}))\), we have

\[\ell(y,k_{\ell}(\hat{p}))-\ell(y,f(x))\]
\[=\ell(y,k_{\ell}(\hat{p}))-\ell(y,k_{\ell}(f(x)))\]
\[=\tilde{\ell}(y,\hat{p})-\tilde{\ell}(y,f(x)).\]

Note that \(\tilde{\ell}\) is a proper loss function as a result of the optimal post-processing function \(k_{\ell}\):

\[p\in\operatorname*{argmin}_{\hat{p}}\operatorname*{\mathbb{E}}_{y\sim\operatorname{Ber}(p)}[\tilde{\ell}(y,\hat{p})].\]

Theorem 5.4 tells us that Forecaster \(\mathscr{F}_{\mathcal{V}}\) gets \(\tilde{O}(\sqrt{T})\) regret with respect to all proper loss functions, which includes \(\tilde{\ell}\) as well. Therefore, we have

\[\sum_{t=1}^{T}\ell(y_{t},k_{\ell}(\hat{p}_{t}))-\ell(y_{t},f(x_{t}))\leq\tilde{O}(\sqrt{T}).\qed\]

### A Lower Bound for Swap Omniprediction

Consider \(\mathcal{L}\), the class of convex loss functions that are \(\gamma\)-Lipschitz in both arguments, which we denote by \(\mathcal{L}^{\gamma}_{\text{convex}}\). We show that swap omniprediction with respect to \((\mathcal{L}^{\gamma}_{\text{convex}},\mathcal{F})\) implies \(L_{1}\)-multicalibration with respect to the class \(\mathcal{F}\). In fact, our lower bound holds for \(\mathcal{L}_{\mathcal{V}}\), the restricted class of loss functions on which we based our omniprediction upper bound in Section 5.1, and so our lower bound serves to formally separate the rates obtainable for omniprediction and for swap omniprediction. We assume that the benchmark class \(\mathcal{F}\) contains the constant \(0\) and constant \(1\) functions. In particular, the assumption is satisfied by the class of real-valued linear functions with bounded norm. Along with the lower bound of [13] on \(L_{1}\) calibration error, the following lemma implies the impossibility of achieving swap-omniprediction with \(O(\sqrt{T})\) regret. In fact, it is impossible to achieve regret of \(o(T^{0.528})\).

**Lemma 5.4**.: _Assume that \(\mathcal{F}\) contains the constant \(0\) and the constant \(1\) function. Recall that \(\mathcal{L}^{\gamma}_{\text{convex}}\) is the class of \(\gamma\)-Lipschitz convex loss functions. Then, for \(\gamma\geq 2\),_

\[\operatorname*{\mathbb{E}}_{\pi_{1:T}}\left[s\mathcal{O}(\pi_{1:T},\mathcal{F},\mathcal{L}^{\gamma}_{\text{convex}})\right]\geq\Omega\left(\operatorname*{\mathbb{E}}_{\pi_{1:T}}\left[\overline{K}_{1}(\pi_{1:T},1)\right]-\frac{1}{T}\right).\]

_Here, \(\overline{K}_{1}(\pi_{1:T},1)\) denotes the \(L_{1}\)-calibration error for the transcript \(\pi_{1:T}\)._

**Remark 5.1**.: _In fact, Lemma 5.4 holds for the class of loss functions \(\mathcal{L}_{\mathcal{V}}\) defined in Section 5.1._
Recall that \(\mathcal{L}_{\mathcal{V}}=\{\ell_{v}:v\in[0,1]\}\), where \(\ell_{v}\) is the proper scoring rule defined as follows:_

\[\ell_{v}(y,q)=(v-y)\cdot\text{sign}(q-v).\]

_That is,_

\[\operatorname*{\mathbb{E}}_{\pi_{1:T}}\left[s\mathcal{O}(\pi_{1:T},\mathcal{F},\mathcal{L}_{\mathcal{V}})\right]\geq\Omega\left(\operatorname*{\mathbb{E}}_{\pi_{1:T}}\left[\overline{K}_{1}(\pi_{1:T},1)\right]-\frac{1}{T}\right).\]

_This gives a strict separation between omniprediction and swap-omniprediction._

Proof of Lemma 5.4.: For all \(v\in[0,1]\), let \(\operatorname{Trunc}_{v}:\{0,1\}\times[0,1]\to[-1,1]\) be a loss function defined as follows:

\[\operatorname{Trunc}_{v}(y,q)=(v-y)(2q-1).\]

It is easy to see that \(\operatorname{Trunc}_{v}\in\mathcal{L}^{\gamma}_{\text{convex}}\) for all \(v\) and every \(\gamma\geq 2\), since \(\operatorname{Trunc}_{v}\) is convex and \(2\)-Lipschitz in both arguments. Next, we compute \(k_{\operatorname{Trunc}_{v}}(p)\), which is the optimal post-processing function for loss \(\operatorname{Trunc}_{v}\) when \(y\) is drawn from \(\operatorname{Ber}(p)\):

\[k_{\operatorname{Trunc}_{v}}(p)=\arg\min_{q\in[0,1]}\operatorname*{\mathbb{E}}_{y\sim\operatorname{Ber}(p)}\left[\operatorname{Trunc}_{v}(y,q)\right]=\arg\min_{q\in[0,1]}(v-p)(2q-1)=\begin{cases}0\text{ if }p<v\\ 1\text{ if }p\geq v.\end{cases}\]

For \(p=v\), \(k_{\operatorname{Trunc}_{v}}(p)\) can be arbitrarily chosen, but w.l.o.g. we can assume it to be \(1\) for the proof of this lemma. We can compute \(\operatorname{Trunc}_{v}(y,k_{\operatorname{Trunc}_{v}}(p))\) as follows:

\[\operatorname{Trunc}_{v}(y,k_{\operatorname{Trunc}_{v}}(p))=\begin{cases}-(v-y)\text{ if }p<v\\ (v-y)\text{ if }p\geq v.\end{cases} \tag{7}\]

Therefore, \(\operatorname{Trunc}_{v}(y,k_{\operatorname{Trunc}_{v}}(p))=(v-y)\cdot\text{sign}(p-v)\), which is exactly equal to the proper scoring rule \(\ell_{v}\) mentioned in Remark 5.1. Hence, the rest of the proof follows for the loss class \(\mathcal{L}_{\mathcal{V}}\).

For a given transcript \(\pi_{1:T}\), the \(L_{1}\)-multicalibration error with respect to the constant function \(1\) can be written as

\[\overline{K}_{1}(\pi_{1:T},1)=\sum_{p\in\mathcal{P}}\frac{n(\pi_{1:T},p)}{T}\left|\overline{K}(\pi_{1:T},p,1)\right|,\]

where \(\overline{K}(\pi_{1:T},p,1)=\frac{1}{n(\pi_{1:T},p)}\left(\sum_{t\in S(\pi_{1:T},p)}(y_{t}-\hat{p}_{t})\right)\) is the calibration error for the time-steps when the forecaster's prediction value was \(p\), that is, \(\hat{p}_{t}=p\). Next, we will show that

\[s\mathcal{O}(\pi_{1:T},\mathcal{F},\mathcal{L}_{\text{convex}}^{\gamma})\geq 2\overline{K}_{1}(\pi_{1:T},1)-\frac{2}{T}, \tag{8}\]

and the lemma statement follows by taking expectation over the transcripts.

\[s\mathcal{O}(\pi_{1:T},\mathcal{F},\mathcal{L}_{\text{convex}}^{\gamma}) =\max_{\{\ell_{p}\in\mathcal{L}_{\text{convex}}^{\gamma}\}_{p\in\mathcal{P}},\{f_{p}\in\mathcal{F}\}_{p\in\mathcal{P}}}s\mathcal{O}(\pi_{1:T},\{\ell_{p}\}_{p\in\mathcal{P}},\{f_{p}\}_{p\in\mathcal{P}})\]
\[\geq\max_{\forall p\in\mathcal{P},\ \ell_{p}\in\{\text{Trunc}_{(p+1/T)},\text{Trunc}_{(p-1/T)}\},f_{p}\in\{0,1\}}s\mathcal{O}(\pi_{1:T},\{\ell_{p}\}_{p\in\mathcal{P}},\{f_{p}\}_{p\in\mathcal{P}}).\]

For all \(p\in\mathcal{P}\), we compute the swap-omniprediction regret with respect to the following values of \(\ell_{p}\) and \(f_{p}\):

1. If \(\overline{K}(\pi_{1:T},p,1)>0\), let \(\ell_{p}=\text{Trunc}_{(p+1/T)}\) and \(f_{p}=1\).
2. Else if \(\overline{K}(\pi_{1:T},p,1)<0\), let \(\ell_{p}=\text{Trunc}_{(p-1/T)}\) and \(f_{p}=0\).
3. Else (\(\overline{K}(\pi_{1:T},p,1)=0\)), let \(\ell_{p}=\text{Trunc}_{p}\) and \(f_{p}=1\).

To summarize, for \(p\) with \(\overline{K}(\pi_{1:T},p,1)\neq 0\) we take \(\ell_{p}=\text{Trunc}_{\left(p+\frac{1}{T}\cdot\frac{\overline{K}(\pi_{1:T},p,1)}{|\overline{K}(\pi_{1:T},p,1)|}\right)}\) and \(f_{p}=1-\frac{|\overline{K}(\pi_{1:T},p,1)|-\overline{K}(\pi_{1:T},p,1)}{2|\overline{K}(\pi_{1:T},p,1)|}\). Recall that \(s\mathcal{O}(\pi_{1:T},\{\ell_{p}\}_{p\in\mathcal{P}},\{f_{p}\}_{p\in\mathcal{P}})\) is equal to

\[\sum_{p\in\mathcal{P}}\frac{n(\pi_{1:T},p)}{T}\left(\frac{1}{n(\pi_{1:T},p)}\sum_{t\in S(p)}\ell_{p}(y_{t},k_{\ell_{p}}(\hat{p}_{t}))-\frac{1}{n(\pi_{1:T},p)}\sum_{t\in S(p)}\ell_{p}(y_{t},f_{p}(x_{t}))\right).\]

Next, we show that for all \(p\in\mathcal{P}\),

\[\left(\frac{1}{n(\pi_{1:T},p)}\sum_{t\in S(p)}\ell_{p}(y_{t},k_{\ell_{p}}(\hat{p}_{t}))-\frac{1}{n(\pi_{1:T},p)}\sum_{t\in S(p)}\ell_{p}(y_{t},f_{p}(x_{t}))\right)\ \geq\ 2\left(|\overline{K}(\pi_{1:T},p,1)|-\frac{1}{T}\right),\]

which implies Equation (8):

\[\sum_{t\in S(p)}\ell_{p}(y_{t},k_{\ell_{p}}(\hat{p}_{t}))-\sum_{t\in S(p)}\ell_{p}(y_{t},f_{p}(x_{t}))\]
\[=\sum_{t\in S(p)}\ell_{p}(y_{t},k_{\ell_{p}}(p))-\sum_{t\in S(p)}\ell_{p}(y_{t},f_{p}(x_{t}))\]
\[=\sum_{t\in S(p)}\operatorname{Trunc}_{\left(p+\frac{1}{T}\cdot\frac{\overline{K}(\pi_{1:T},p,1)}{|\overline{K}(\pi_{1:T},p,1)|}\right)}\left(y_{t},k_{\operatorname{Trunc}_{\left(p+\frac{1}{T}\cdot\frac{\overline{K}(\pi_{1:T},p,1)}{|\overline{K}(\pi_{1:T},p,1)|}\right)}}(p)\right)\]
\[\quad-\sum_{t\in S(p)}\operatorname{Trunc}_{\left(p+\frac{1}{T}\cdot\frac{\overline{K}(\pi_{1:T},p,1)}{|\overline{K}(\pi_{1:T},p,1)|}\right)}\left(y_{t},1-\frac{|\overline{K}(\pi_{1:T},p,1)|-\overline{K}(\pi_{1:T},p,1)}{2|\overline{K}(\pi_{1:T},p,1)|}\right)\]
\[=\sum_{t\in S(p)}\left(p+\frac{1}{T}\cdot\frac{\overline{K}(\pi_{1:T},p,1)}{|\overline{K}(\pi_{1:T},p,1)|}-y_{t}\right)\cdot\operatorname{sign}\left(-\frac{1}{T}\cdot\frac{\overline{K}(\pi_{1:T},p,1)}{|\overline{K}(\pi_{1:T},p,1)|}\right)\] (using Equation (7))
\[\quad-\sum_{t\in S(p)}\left(p+\frac{1}{T}\cdot\frac{\overline{K}(\pi_{1:T},p,1)}{|\overline{K}(\pi_{1:T},p,1)|}-y_{t}\right)\cdot\left(1-\frac{|\overline{K}(\pi_{1:T},p,1)|-\overline{K}(\pi_{1:T},p,1)}{|\overline{K}(\pi_{1:T},p,1)|}\right)\]
\[=\sum_{t\in S(p)}\left(p+\frac{1}{T}\cdot\frac{\overline{K}(\pi_{1:T},p,1)}{|\overline{K}(\pi_{1:T},p,1)|}-y_{t}\right)\cdot\left(1-\frac{|\overline{K}(\pi_{1:T},p,1)|+\overline{K}(\pi_{1:T},p,1)}{|\overline{K}(\pi_{1:T},p,1)|}\right)\]
\[\quad-\sum_{t\in S(p)}\left(p+\frac{1}{T}\cdot\frac{\overline{K}(\pi_{1:T},p,1)}{|\overline{K}(\pi_{1:T},p,1)|}-y_{t}\right)\cdot\left(1-\frac{|\overline{K}(\pi_{1:T},p,1)|-\overline{K}(\pi_{1:T},p,1)}{|\overline{K}(\pi_{1:T},p,1)|}\right)\]
\[=\sum_{t\in S(p)}\left(p+\frac{1}{T}\cdot\frac{\overline{K}(\pi_{1:T},p,1)}{|\overline{K}(\pi_{1:T},p,1)|}-y_{t}\right)\cdot\frac{-2\overline{K}(\pi_{1:T},p,1)}{|\overline{K}(\pi_{1:T},p,1)|}\]
\[\geq\sum_{t\in S(p)}\left(y_{t}-p\right)\cdot\frac{2\overline{K}(\pi_{1:T},p,1)}{|\overline{K}(\pi_{1:T},p,1)|}-\sum_{t\in S(p)}\frac{2}{T}\]
\[=2\left|\sum_{t\in S(p)}\left(y_{t}-p\right)\right|-\frac{2n(\pi_{1:T},p)}{T}.\]

(When \(\overline{K}(\pi_{1:T},p,1)=0\), the left-hand side equals \(0\geq-\frac{2n(\pi_{1:T},p)}{T}\), so the claimed bound holds trivially.) Now, we state [11]'s \(L_{1}\)-calibration lower bound result.
**Theorem 5.5** ([11]).: _There exists an adversary Adv such that for every Forecaster \(\mathscr{F}\), we have_

\[\operatorname*{\mathbb{E}}_{\pi_{1:T}}[\overline{K}_{1}(\pi_{1:T},1)]=\Omega\left(\frac{T^{0.528}}{T}\right).\]

Lemma 5.4 and Theorem 5.5 together result in a lower bound for swap omniprediction:

**Corollary 5.3**.: _There exists an adversary Adv such that for every Forecaster \(\mathscr{F}\), we have_

\[\operatorname*{\mathbb{E}}_{\pi_{1:T}}\left[s\mathcal{O}(\pi_{1:T},\mathcal{F},\mathcal{L}_{\text{convex}}^{\gamma})\right]=\Omega\left(\frac{T^{0.528}}{T}\right).\]

## 6 An Extension: Online Oracle-efficient Quantile Multicalibration and Multivalid Conformal Prediction

In this section, we observe that the techniques that we develop in this paper are largely independent of the goal of _mean_ multicalibration and _squared error_ online regression. The connection between mean multicalibration and the squared error loss function comes from the fact that squared error is a proper scoring rule, i.e., it is minimized by, or _elicits_, the mean. More generally, there is a direct connection between multicalibration for a generic distributional property \(\Gamma\) and the regression loss that elicits that property (i.e. is minimized at that property) [10]. In this section, we show how to extend our results to give an online oracle efficient algorithm for _quantile_ multicalibration, as studied by [13, 2]. Informally, squared error is to mean multicalibration as pinball loss is to quantile multicalibration, a connection that was first formalized in [14]. At a high level, we can therefore similarly reduce the problem of online quantile multicalibration to the problem of online learning with respect to pinball loss. One primary reason to study online quantile multicalibration is that it has a direct application to the problem of online conformal prediction, which we define next.

We describe the online conformal prediction problem following [2]. Fix a bounded conformal score function \(s_{t}:\mathcal{X}\times\mathcal{Y}\to\mathbb{R}\), which can change in arbitrary ways between rounds \(t\in[T]\). Without loss of generality, we assume that the scoring function takes values in the unit interval: \(s_{t}(x,y)\in\mathcal{S}\) where \(\mathcal{S}=[0,1]\) for any \(x\in\mathcal{X},y\in\mathcal{Y}\), and \(t\in[T]\). Fix some target coverage rate \(q\). In each round \(t\in[T]\), an interaction between a conformal learner and an adversary proceeds as follows:

1. The conformal learner chooses a conformal score function \(s_{t}:\mathcal{X}\times\mathcal{Y}\to[0,1]\), which may be observed by the adversary.
2. The _adversary_ chooses a joint distribution over feature vectors \(x_{t}\in\mathcal{X}\) and labels \(y_{t}\in\mathcal{Y}\). The learner receives \(x_{t}\) (a realized feature vector), but no information about the label \(y_{t}\).
3. The learner produces a conformity threshold \(\hat{p}_{t}\in\mathcal{P}_{\text{quantile}}\) where \(\mathcal{P}_{\text{quantile}}=[\frac{1}{m}]\) as before. This corresponds to a prediction set which the learner outputs: \[\mathcal{T}_{t}(x_{t})=\{y\in\mathcal{Y}:s_{t}(x_{t},y)\leq\hat{p}_{t}\}.\]
4. The learner then learns the realized label \(y_{t}\).

Ideally, the learner wants to produce prediction sets \(\mathcal{T}_{t}(x_{t})\) that cover the true label \(y\) with probability \(q\) over the randomness of the adversary's unknown label distribution: \(\Pr_{y|x_{t}}[y\in\mathcal{T}_{t}(x_{t})]\approx q\).
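To make step 3 concrete, here is a minimal sketch (our own illustration; the residual-style score function and the label grid are assumptions, not the paper's) of thresholding a conformal score at \(\hat{p}_{t}\) and checking empirical coverage.

```python
import numpy as np

def prediction_set(score_fn, x, p_hat, label_grid):
    """T_t(x) = {y : s_t(x, y) <= p_hat}, over a finite grid of candidate labels."""
    return [y for y in label_grid if score_fn(x, y) <= p_hat]

# Hypothetical score: absolute residual of a point prediction mu(x), capped at 1.
mu = lambda x: 0.5 * x
score = lambda x, y: min(abs(y - mu(x)), 1.0)

rng = np.random.default_rng(0)
labels = np.linspace(0.0, 1.0, 101)
p_hat = 0.2                                  # conformity threshold chosen by the learner
print(prediction_set(score, 0.8, p_hat, labels)[:3])   # smallest covered labels

# Empirical coverage of the induced prediction sets on synthetic (x, y) pairs.
xs = rng.uniform(0.0, 1.0, 1000)
ys = np.clip(mu(xs) + rng.normal(0.0, 0.1, 1000), 0.0, 1.0)
coverage = np.mean([score(x, y) <= p_hat for x, y in zip(xs, ys)])
print(f"empirical coverage at threshold {p_hat}: {coverage:.3f}")
```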
Because of the structure of the prediction sets, this is equivalent to choosing a conformity threshold \(q_{t}\) such that, over the randomness of the adversary's unknown label distribution, \(\Pr_{y|x_{t}}[s_{t}(x_{t},y)\leq q_{t}]\approx q\). Because the adversary may choose the label distribution with knowledge of the conformal score function, we will elide the particulars of the conformal score function and the distribution on labels \(y_{t}\) in our derivation, and instead equivalently imagine the adversary directly choosing a distribution over conformal scores \(s_{t}\) conditional on \(x_{t}\) (representing the distribution over conformal scores \(s_{t}(x_{t},y_{t})\)). We may thus view the interaction in the following simplified form:

1. The _adversary_ chooses a joint distribution over feature vector \(x_{t}\in\mathcal{X}\) and conformal score \(s_{t}\in\mathcal{S}\). The learner receives \(x_{t}\) (a realized feature vector), but no information about \(s_{t}\).
2. The learner produces a conformity threshold \(\hat{p}_{t}\in\mathcal{P}_{\text{quantile}}\).
3. The learner observes the realized conformal score \(s_{t}\).

We'll refer to the conditional conformal score distribution at round \(t\in[T]\) as \(\mathbf{s}_{t}\) (or \(\mathbf{s}_{t}|x_{t}\) to make it obvious what the realized feature vector is) for convenience. Also, just as in [14], we assume that \(\mathbf{s}_{t}\) is smooth:

**Definition 6.1**.: _[_14_]_ _A conditional nonconformity score distribution \(\mathbf{s}\sim\Delta(\mathcal{S})\) is \(\rho\)-Lipschitz if we have_

\[\Pr_{s\sim\mathbf{s}}[s\leq\tau^{\prime}]-\Pr_{s\sim\mathbf{s}}[s\leq\tau]\leq\rho(\tau^{\prime}-\tau)\quad\text{ for all }0\leq\tau\leq\tau^{\prime}\leq 1.\]

_We say distribution \(\mathcal{D}\in\Delta(\mathcal{X}\times\mathcal{S})\) is \(\rho\)-Lipschitz if the conditional conformal distribution \(\mathcal{D}_{\mathcal{S}}(x)\) is \(\rho\)-Lipschitz for every \(x\) in the support of \(\mathcal{D}\) (i.e. \(\mathcal{D}_{\mathcal{X}}(x)>0\))._

As in Section 2, we write \(\pi_{1:t}=\{(x_{\tau},s_{\tau},\hat{p}_{\tau})\}_{\tau=1}^{t}\) to denote the realized transcript of the interaction between the learner and the adversary. Similarly, we write \(\tilde{\pi}_{1:t}=\{(x_{\tau},\mathbf{s}_{\tau},\hat{p}_{\tau})\}_{\tau=1}^{t}\) to denote the unrealized transcript where the \(x_{t}\) and \(\hat{p}_{t}\) are realized but the \(\mathbf{s}_{t}\)'s haven't been realized. As before, we write

\[S(\pi_{1:t},p)=S(\tilde{\pi}_{1:t},p)=\{\tau\in[t]:\hat{p}_{\tau}=p\}\]
\[n(\pi_{1:t},p)=n(\tilde{\pi}_{1:t},p)=|\{\tau\in[t]:\hat{p}_{\tau}=p\}|.\]

We write \(\mathcal{D}(\tilde{\pi}_{1:T})\) to denote the uniform distribution over \(\{(x_{t},\mathbf{s}_{t})\}_{t=1}^{T}\) and \(\mathcal{D}^{p}(\tilde{\pi}_{1:T})\) to denote the uniform distribution over \(\{(x_{t},\mathbf{s}_{t})\}_{t\in S(\pi_{1:T},p)}\). When it is obvious from the context, we just write \(\mathcal{D}\) and \(\mathcal{D}^{p}\). Given any distribution \(\mathcal{D}\in\Delta(\mathcal{X}\times\mathcal{S})\), to refer to its marginal distribution over just the feature vectors \(\mathcal{X}\), we write \(\mathcal{D}_{\mathcal{X}}\). And we write \(\mathcal{D}_{\mathcal{S}}(x)\in\Delta(\mathcal{S})\) to denote the conformal score distribution of \(\mathcal{D}\) conditioning on the feature vector \(x\). When there are multiple arguments, we sometimes carry the arguments: e.g.
given \(\mathcal{D}(\tilde{\pi}_{1:T})\), we write \(\mathcal{D}(\tilde{\pi}_{1:T},x)=\mathcal{D}(\tilde{\pi}_{1:T})(x)\) to denote the conformal score distribution conditioned on \(x\).

**Definition 6.2**.: _Given \(\pi_{1:T}\) that is generated by the conformal learner and the adversary, its quantile-multivalidity error with respect to target quantile \(q\), \(f\in\mathcal{F}\) and \(p\in\mathcal{P}_{\text{quantile}}\) is defined as_

\[Q(\pi_{1:T},p,f)=\frac{1}{n(\pi_{1:T},p)}\sum_{t\in S(\pi_{1:T},p)}f(x_{t})\cdot(q-\mathbbm{1}[s_{t}\leq p]).\]

_Similarly, given an unrealized transcript \(\tilde{\pi}_{1:T}\), we write its quantile-multivalidity error with respect to \(f\in\mathcal{F}\) and \(p\in\mathcal{P}_{\text{quantile}}\) as:_

\[Q(\tilde{\pi}_{1:T},p,f) =\frac{1}{n(\pi_{1:T},p)}\sum_{t\in S(\tilde{\pi}_{1:T},p)}f(x_{t})\cdot\left(q-\Pr_{s_{t}\sim\mathbf{s}_{t}}[s_{t}\leq p]\right)\]
\[=\mathbb{E}_{x\sim\mathcal{D}_{\mathcal{X}}^{p}(\tilde{\pi}_{1:T})}\left[f(x)\cdot\left(q-\Pr_{s_{t}\sim\mathcal{D}_{\mathcal{S}}(\tilde{\pi}_{1:T},x)}[s_{t}\leq p]\right)\right]\]

_As before, we write the \(L_{2}\)-swap-quantile-multivalidity error with respect to \(\{f_{p}\}_{p\in\mathcal{P}_{\text{quantile}}}\) and the \(L_{2}\)-quantile-multivalidity error with respect to \(f\) as_

\[\overline{sQ}_{2}(\pi_{1:T},\{f_{p}\}_{p\in\mathcal{P}_{\text{quantile}}})=\sum_{p\in\mathcal{P}_{\text{quantile}}}\frac{n(\pi_{1:T},p)}{T}\left(Q(\pi_{1:T},p,f_{p})\right)^{2}\]
\[\overline{Q}_{2}(\pi_{1:T},f)=\sum_{p\in\mathcal{P}_{\text{quantile}}}\frac{n(\pi_{1:T},p)}{T}\left(Q(\pi_{1:T},p,f)\right)^{2}.\]

_Also, similarly, we write_

\[\overline{sQ}_{2}(\tilde{\pi}_{1:T},\{f_{p}\}_{p\in\mathcal{P}_{\text{quantile}}})=\sum_{p\in\mathcal{P}_{\text{quantile}}}\frac{n(\pi_{1:T},p)}{T}\left(Q(\tilde{\pi}_{1:T},p,f_{p})\right)^{2}\]
\[\overline{Q}_{2}(\tilde{\pi}_{1:T},f)=\sum_{p\in\mathcal{P}_{\text{quantile}}}\frac{n(\pi_{1:T},p)}{T}\left(Q(\tilde{\pi}_{1:T},p,f)\right)^{2}.\]

### Sketch: Minimizing \(L_{2}\)-Swap-Quantile-Multivalidity Error

Here we sketch out how to minimize the \(L_{2}\)-swap-quantile-multivalidity error \(\overline{sQ}_{2}\) with techniques that mirror those presented in Section 3. Details can be found in Appendix E. First, just as in Section 3 where we leverage an online regression oracle for squared loss, we need to rely on an online quantile regression oracle for the pinball loss, defined below. Note that just as squared loss is a proper scoring rule for eliciting means, pinball loss is a proper scoring rule for eliciting quantiles.

**Definition 6.3** (Pinball Loss).: _The pinball loss at target quantile \(q\) for prediction \(p\in\mathcal{P}_{\text{quantile}}\) and score \(s\in\mathcal{S}\) is_

\[PB_{q}(p,s)=(s-p)q\cdot\mathbbm{1}[s>p]+(p-s)(1-q)\cdot\mathbbm{1}[s\leq p].\]

_Given a distribution \(\mathbf{s}\in\Delta([0,1])\), we similarly write_

\[PB_{q}(p,\mathbf{s})=\mathop{\mathbb{E}}_{s\sim\mathbf{s}}[PB_{q}(p,s)].\]

We write \(\mathcal{A}:\Psi^{*}\times\mathcal{X}\to\mathcal{P}_{\text{quantile}}\) to denote the online quantile regression oracle.
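Before stating the oracle's regret guarantee, here is a minimal sketch of the pinball loss in Definition 6.3 (our own illustration; the score distribution and the grid are assumptions): minimizing the average pinball loss over a grid of thresholds recovers the empirical \(q\)-quantile, which is the sense in which pinball loss elicits quantiles.

```python
import numpy as np

def pinball(q, p, s):
    """PB_q(p, s) = (s - p) * q if s > p, else (p - s) * (1 - q); vectorized in s."""
    s = np.asarray(s)
    return np.where(s > p, (s - p) * q, (p - s) * (1 - q))

rng = np.random.default_rng(1)
scores = rng.beta(2, 5, size=10_000)       # synthetic conformal scores in [0, 1]
m = 100
grid = np.arange(m + 1) / m                 # candidate thresholds, the grid [1/m]
q = 0.9
avg_loss = [pinball(q, p, scores).mean() for p in grid]
p_star = grid[int(np.argmin(avg_loss))]
print(p_star, np.quantile(scores, q))       # p_star approximates the 0.9-quantile
```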
Suppose that, against any adversarially and adaptively chosen sequence \(\{(x_{t},s_{t})\}_{t=1}^{T}\), its pinball loss regret guarantee with respect to any \(f\in\mathcal{F}\) can be bounded as

\[\sum_{t=1}^{T}PB_{q}(\hat{p}_{t},s_{t})-\sum_{t=1}^{T}PB_{q}(f(x_{t}),s_{t})\leq r_{\mathcal{A}}(T,\mathcal{F}).\]

Because the pinball loss with target quantile \(q\) is \(q^{\prime}=\max(q,1-q)\)-Lipschitz, we can round \(\mathcal{A}\)'s output to \([\frac{1}{m}]\) and suffer at most an additional \(\frac{q^{\prime}}{m}\) loss from rounding. As before, we also choose a prediction from \([\frac{1}{m}]\) at random with probability \(1-\frac{1}{\sqrt{T}}\). In other words, this rounded and randomized oracle, which we denote by \(\tilde{\mathcal{A}}^{m}\) as before, has an expected regret guarantee of \(r_{\mathcal{A}}(T,\mathcal{F})+\frac{q^{\prime}T}{m}+\sqrt{T}\). Finally, similarly as in Section 3.1, we can borrow ideas from [1] to construct a conformal learner such that its contextual swap regret is bounded with high probability against any \(\{f_{p}\}_{p\in\mathcal{P}_{\text{quantile}}}\) in terms of \(r_{\tilde{\mathcal{A}}^{m}}(T,\mathcal{F})\):

\[\sum_{t=1}^{T}\mathop{\mathbb{E}}_{\hat{p}_{t}\sim\hat{\mathbf{p}}_{t}}[PB_{q}(\hat{p}_{t},s_{t})-PB_{q}(f_{\hat{p}_{t}}(x_{t}),s_{t})|\pi_{1:t-1}]\leq\left(mr_{\mathcal{A}}\left(\frac{T}{m},\mathcal{F}\right)+\frac{q^{\prime}T}{m}+1\right)+\max(8B,2\sqrt{B})mC_{\mathcal{F}_{B}}\sqrt{\frac{\log(\frac{4m}{\rho})}{T}}.\]

Here we simply observe that our algorithmic construction in Section 3.1 was agnostic as to the form of the loss function, and so it carries over unchanged when we replace squared loss with pinball loss. The final piece to our argument is the connection between contextual swap regret with respect to pinball loss and \(L_{2}\)-swap-quantile-multivalidity error. Using similar ideas as in Lemma 3.4 and borrowing some ideas from [13], we can show that if there exists \(f\in\mathcal{F}\) and \(p\in\mathcal{P}_{\text{quantile}}\) such that \(Q(\pi_{1:T},p,f)\geq\alpha\), then there must exist \(f^{\prime}\in\mathcal{F}\) such that

\[\sum_{t\in S(\pi_{1:T},p)}PB_{q}(p,s_{t})-PB_{q}(f^{\prime}(x_{t}),s_{t})\]

also scales linearly with \(\alpha\). While deferring the actual details to Appendix E, here we go over the overall argument that shows their connection while highlighting the additional arguments needed compared to Lemma 3.4. We first need to argue that if the multivalidity error with respect to \(f\) and \(p\) under the uniform distribution over realized (feature, conformal score) pairs \(\{(x_{t},s_{t})\}_{t=1}^{T}\) is big, then the multivalidity error with respect to \(\{(x_{t},\mathbf{s}_{t})\}_{t=1}^{T}\) must be big as well; we can appeal to [1] to show this concentration over all \(f\in\mathcal{F}\). By taking the randomness over \(s_{t}\) into account through \(\mathbf{s}_{t}\), the conformal score distribution over \(\{(x_{t},\mathbf{s}_{t})\}_{t=1}^{T}\) is now continuous and smooth. Now, we need to modify Lemma 3.1 of [13]. That lemma states that, for a group function \(g:\mathcal{X}\to\{0,1\}\) and level set \(p\), swapping the prediction \(p\) with \(p+\eta\) on the points that belong to group \(g\) and are given the prediction \(p\) decreases the pinball loss over these points by an amount that depends on the multivalidity error and the smoothness of the distribution.
We note that they choose \(\eta\) such that \(p+\eta\) is the exact \(q\)-quantile for these points, which requires the conformal score distribution to be continuous, and they show that the decrease in pinball loss depends on the smoothness of the conformal score distribution. Both of these issues of continuity and smoothness of the conformal distribution are taken care of by dealing with the uniform distribution over \(\{(x_{t},\mathbf{s}_{t})\}_{t=1}^{T}\) as opposed to the uniform distribution over the realized (feature, conformal score) pairs \(\{(x_{t},s_{t})\}_{t=1}^{T}\). Note that the swapping above corresponds to predicting with \(p+\eta\cdot g(x)\) for the points that are given prediction \(p\). However, their argument only works for Boolean functions \(g:\mathcal{X}\to\{0,1\}\), whereas we have soft membership determined by some \(f:\mathcal{X}\to\mathbb{R}\) which simply reweights the points according to \(f(x_{t})\):

\[\operatorname*{\mathbb{E}}_{s_{t}\sim\mathbf{s}_{t}\forall t\in[T]}\left[\frac{1}{n(p)}\sum_{t\in S(p)}f(x_{t})\cdot(q-\mathbbm{1}[s_{t}\leq p])\right].\]

Even under this reweighting by \(f\), it is easy to see that there exists some \(b\in\mathbb{R}\) such that

\[\operatorname*{\mathbb{E}}_{s_{t}\sim\mathbf{s}_{t}\forall t\in[T]}\left[\frac{1}{n(p)}\sum_{t\in S(p)}f(x_{t})\cdot(q-\mathbbm{1}[s_{t}\leq p+b\cdot f(x_{t})])\right]\]

is exactly \(0\), as the above value is monotonic in \(b\). Swapping \(p\) to \(p+b\cdot f(x)\) will decrease the pinball loss with respect to this \(f\)-reweighted distribution over the points whose prediction was \(p\), via the same argument as in Lemma 3.1 of [13]. The actual argument presented in the appendix does not choose \(b\) to set the quantile error exactly to \(0\), so as to avoid having \(b\) be too big, as in Lemma 3.4. By our assumption that \(\mathcal{F}\) is closed under affine transformation, we have \(p+b\cdot f\in\mathcal{F}\). Therefore, if the multivalidity error under the uniform distribution over \(\{(x_{t},\mathbf{s}_{t})\}_{t=1}^{T}\) with respect to \(f\) and \(p\) is big, then \(f^{\prime}=p+b\cdot f\) witnesses to the fact that contextual swap regret under the same distribution must be big as well. Finally, we can once again appeal to [1] to argue that contextual swap regret under the empirical distribution over \(\{(x_{t},s_{t})\}_{t=1}^{T}\) must be close to that under the uniform distribution over \(\{(x_{t},\mathbf{s}_{t})\}_{t=1}^{T}\), to conclude that contextual swap regret with respect to \(\{(x_{t},s_{t})\}_{t=1}^{T}\) must be big as well. This concludes the connection between multivalidity error with respect to \(f\) and \(p\) and contextual swap regret for the pinball loss.

## 7 Discussion and Conclusion

We have given the first oracle efficient algorithms for online multicalibration, online omniprediction, and online multivalid conformal prediction. Our algorithms do not, however, obtain the rates that we would ideally like: \(O(T^{-1/2})\). Are these rates obtainable in an oracle efficient way? We leave this as our main open question. Achieving these rates seemingly requires new techniques: our current techniques give algorithms that obtain the stronger guarantee of _swap_-omniprediction, for which we have proven that obtaining an \(O(T^{-1/2})\) rate is impossible.
On the other hand, at least for finite binary benchmark classes, we have shown that, using different techniques, \(O(T^{-1/2})\) rates are obtainable for (vanilla) omniprediction, so the lower bound is not inherent to the goal of omniprediction. But the techniques we use to obtain these stronger omniprediction bounds seem to require enumerating over the benchmark class \(\mathcal{F}\), and it is not clear how one would implement this same algorithm by reducing to solving an (online) learning problem over \(\mathcal{F}\).

## 8 Acknowledgements

We thank Jon Schneider for helpful discussions and pointing us to [12].
2308.11495
Evaluating the accuracy of Gaussian approximations in VSWIR imaging spectroscopy retrievals
The joint retrieval of surface reflectances and atmospheric parameters in VSWIR imaging spectroscopy is a computationally challenging high-dimensional problem. Using NASA's Surface Biology and Geology mission as the motivational context, the uncertainty associated with the retrievals is crucial for further application of the retrieved results for environmental applications. Although Markov chain Monte Carlo (MCMC) is a Bayesian method ideal for uncertainty quantification, the full-dimensional implementation of MCMC for the retrieval is computationally intractable. In this work, we developed a block Metropolis MCMC algorithm for the high-dimensional VSWIR surface reflectance retrieval that leverages the structure of the forward radiative transfer model to enable tractable fully Bayesian computation. We use the posterior distribution from this MCMC algorithm to assess the limitations of optimal estimation, the state-of-the-art Bayesian algorithm in operational retrievals which is more computationally efficient but uses a Gaussian approximation to characterize the posterior. Analyzing the differences in the posterior computed by each method, the MCMC algorithm was shown to give more physically sensible results and reveals the non-Gaussian structure of the posterior, specifically in the atmospheric aerosol optical depth parameter and the low-wavelength surface reflectances.
Kelvin M. Leung, David R. Thompson, Jouni Susiluoto, Jayanth Jagalur-Mohan, Amy Braverman, Youssef Marzouk
2023-08-22T15:17:33Z
http://arxiv.org/abs/2308.11495v2
# Evaluating the accuracy of Gaussian approximations in VSWIR imaging spectroscopy retrievals

###### Abstract

The joint retrieval of surface reflectances and atmospheric parameters in VSWIR imaging spectroscopy is a computationally challenging high-dimensional problem. Using NASA's Surface Biology and Geology mission as the motivational context, the uncertainty associated with the retrievals is crucial for further application of the retrieved results for environmental applications. Although Markov chain Monte Carlo (MCMC) is a Bayesian method ideal for uncertainty quantification, the full-dimensional implementation of MCMC for the retrieval is computationally intractable. In this work, we developed a block Metropolis MCMC algorithm for the high-dimensional VSWIR surface reflectance retrieval that leverages the structure of the forward radiative transfer model to enable tractable fully Bayesian computation. We use the posterior distribution from this MCMC algorithm to assess the limitations of optimal estimation, the state-of-the-art Bayesian algorithm in operational retrievals [1], which is more computationally efficient but uses a Gaussian approximation to characterize the posterior. Analyzing the differences in the posterior computed by each method, the MCMC algorithm was shown to give more physically sensible results and reveals the non-Gaussian structure of the posterior, specifically in the atmospheric aerosol optical depth parameter and the low-wavelength surface reflectances.

_Keywords:_ remote sensing, imaging spectroscopy, Markov chain Monte Carlo, Bayesian computation

## 1 Introduction

Whether airborne or orbital, all remote sensing missions face a common challenge of characterizing distant objects using only measurements made at the sensor. In the Earth sciences, investigators will often solve this problem with physics-based models that use the state of the surface or atmosphere to predict the remote measurement. Investigators can then _retrieve_, or determine, the state most consistent with the remote data. Model inversion methods are used for diverse sensors ranging from infrared or microwave sounders, to multiangle imagers, to radiometers. Perhaps one of the most challenging applications from a computational perspective is remote Visible/Short-Wave Infrared (VSWIR) imaging spectroscopy [2]. VSWIR imaging spectrometers acquire a data cube with two spatial dimensions and one spectral dimension. In other words, they produce images in which each pixel contains a radiance spectrum covering the entire solar reflected interval from 380 to 2500 nm. This interval is sensitive to diverse surface and atmospheric processes, making these sensors useful for a wide range of applications, from terrestrial and aquatic ecology, to geology, to hydrology and cryosphere studies. These Earth surface studies aim to measure properties of the surface that create characteristic features in the reflectance spectra. Roughly speaking, surface reflectance is the fraction of incident illumination at the surface that is reflected back in the direction of the sensor. However, imaging spectrometers observe radiance at the top of the atmosphere, so inference to remove the effects of the atmosphere is required to estimate surface reflectance at each pixel. The reflectance can then be used to further estimate properties of the Earth surface.
Because of the high data volume of these sensors and their broad spectral range encompassing a wide range of physical phenomena, VSWIR imaging spectrometers present a particularly challenging test case for efficient inference algorithms. Our motivating context for this problem is NASA's Surface Biology and Geology mission (SBG) [3, 4]. The objective of SBG is to track changes in surface properties pertaining to ecosystems, coastal zones, agriculture, and snow and ice accumulations over time, for the entire planet, by first retrieving the surface reflectances. For these types of scientific applications, the uncertainty associated with the retrieval is particularly important. This motivates the need for a Bayesian method to determine the posterior distribution of the surface reflectances and related atmospheric parameters.

Markov chain Monte Carlo (MCMC) is a Bayesian sampling method that was introduced in the context of remote sensing retrievals in [5, 6, 7]. Recent retrieval problems are generally high-dimensional and require methods of dimension reduction to lower the computational complexity. For example, [8] breaks up the high-dimensional parameter space into low-dimensional blocks that can be sampled in parallel. [7] and [9] implement MCMC in a low-dimensional parameter subspace obtained from principal component analysis (PCA). More recent methods of dimension reduction such as the likelihood-informed subspace (LIS) have also been considered in the retrieval context, where MCMC is performed in a specific low-dimensional subspace that is determined by the data [10]. [11] uses LIS in atmospheric methane retrievals, and [12] uses LIS in retrievals for atmospheric concentrations of carbon dioxide in NASA's Orbiting Carbon Observatory-2 (OCO-2) mission. Contrary to most retrievals where the number of parameters is under 100, there are over 400 parameters in the SBG retrieval problem, which makes the problem much more difficult in terms of computational tractability. Furthermore, since dimension reduction methods such as PCA or LIS are low-dimensional approximations, significantly reducing the dimension leads to problems in convergence to the posterior distribution. In this work, we present a method of utilizing the structure of the problem to overcome the high dimensionality and create a tractable sampling algorithm.

Optimal estimation (OE) [1] is the current state-of-the-art algorithm for an operational setting, such as for NASA's EMIT mission [13, 14]. OE is a Bayesian retrieval algorithm that computes a maximum a posteriori (MAP) estimate of the parameters and characterizes the posterior distribution using the Laplace approximation. Although this leads to fast Gaussian approximations of the posterior, the posterior is not Gaussian in the general case. Our objective is to explore how well this approximation holds. There are two main contributions of this work.

1. We developed a computationally tractable Block Metropolis MCMC algorithm for the VSWIR retrieval problem. This fully Bayesian algorithm allows for the characterization of a non-Gaussian posterior and performs exact inference in the limit of infinite samples.
2. We use this algorithm to evaluate the limitations of the OE method and identify the scenarios in which it is sufficiently accurate.

## 2 Setup of the remote sensing problem

The remote sensing retrieval considered in this paper is modelled as an inverse problem.
For each pixel of the image captured by the imaging spectrometer, the quantities of interest are the surface and atmospheric parameters that are retrieved given the radiance at the same pixel. This type of retrieval can be thought of as a statistical inference problem for one set of multidimensional parameters with one set of multidimensional data. We use the notation \(\mathbf{y}\) to denote the set of radiance observations from the imaging spectrometer. The radiances are used to infer the state \(\mathbf{x}\), which consists of the surface and atmospheric parameters. Incoming solar radiation is reflected off the Earth surface, and the transfer of radiation through the atmosphere is modelled by a vector-valued forward function \(f(\cdot)\). The full expression is known and is written in (4). The observations are represented by the output of the forward model with additive noise, \(\mathbf{y}=f(\mathbf{x})+\epsilon\). The setup of this inference problem and the desire for uncertainty quantification leads to a Bayesian formulation. Given the prior and likelihood distributions, the posterior distribution of the surface and atmospheric parameters conditioned on the observed radiance, \(\pi(\mathbf{x}|\mathbf{y})\), is obtained using Bayes rule:

\[\pi(\mathbf{x}|\mathbf{y})=\frac{\pi(\mathbf{y}|\mathbf{x})\pi(\mathbf{x})}{\pi(\mathbf{y})}\propto\pi(\mathbf{y}|\mathbf{x})\pi(\mathbf{x}). \tag{1}\]

This section provides an overview of the elements associated with the prior and likelihood distributions, including parameters and data. Formulations to determine the posterior distribution are described in Section 3.

### Surface and atmospheric parameters

The inversion problem estimates the surface and atmospheric parameters, which are concatenated into one state vector:

\[\mathbf{x}=[\mathbf{x}_{\text{refl}},\mathbf{x}_{\text{atm}}]^{\top}, \tag{2}\]

where \(\mathbf{x}_{\text{refl}}\in\mathbb{R}^{n}\) and \(\mathbf{x}_{\text{atm}}\in\mathbb{R}^{2}\). There are \(n=432\) surface parameters and two atmospheric parameters.

* The surface parameters \(\mathbf{x}_{\text{refl}}\) are surface reflectances that describe the proportion of solar radiation that is reflected off the surface at each of the \(n\) wavelengths. The wavelengths range from 380 to 2500 nm.
* The two atmospheric parameters are \(\mathbf{x}_{\text{atm}}=[x_{\text{AOD}},x_{\text{H2O}}]^{\top}\), which consist of Aerosol Optical Depth (AOD) at 550 nm and column precipitable water vapour (cm). AOD is a measure of the atmospheric concentration of aerosols; specifically, it is the proportion of radiation that is absorbed by aerosols at wavelength 550 nm. The column precipitable water vapour is the volume of water per vertical column of atmosphere.

**Prior.** The prior on the parameters is modelled as a normal distribution. The surface and atmospheric parameters are treated independently, giving a block diagonal structure to the prior covariance:

\[\boldsymbol{\mu}_{\text{pr}}=\begin{bmatrix}\boldsymbol{\mu}_{\text{refl}}^{0},\boldsymbol{\mu}_{\text{atm}}^{0}\end{bmatrix}^{\top},\ \ \ \Gamma_{\text{pr}}=\begin{bmatrix}\Gamma_{\text{refl}}^{0}&0\\ 0&\Gamma_{\text{atm}}^{0}\end{bmatrix} \tag{3}\]

The surface prior is created using a set of over 1400 historical reflectance spectra from the EcoSIS spectral library [15].
These spectra are fitted to a Gaussian mixture model with eight components, each corresponding to a different type of terrain on the Earth surface with similar characteristics, such as vegetation or aquatic environments. For a particular inversion, the component with the shortest Mahalanobis distance to the initial state estimate is used as the Gaussian surface prior. For the atmospheric parameters, the priors are chosen to have large variances to allow relatively unconstrained exploration of the posterior.

### Forward model

The forward model, \(f(\cdot)=[f_{1}(\cdot),\dots,f_{n}(\cdot)]^{\top}\), approximates the propagation of photons through the Earth atmosphere from the surface to the imaging spectrometer. The model used in this work consists of two stages to map the state \(\mathbf{x}\) to the radiance \(\mathbf{y}\). The first step is the computation of intermediate parameters using MODTRAN 6.0, a high-fidelity radiative transfer model [16]. The outputs of MODTRAN given \(\mathbf{x}_{\text{atm}}\) are three \(n\)-dimensional parameters that describe light propagation through the atmosphere: path reflectance \(\boldsymbol{\rho}_{a}\), spherical albedo \(\mathbf{s}\), and atmospheric transmission \(\mathbf{t}\). For computational efficiency, a lookup table is generated for a set of reference atmospheric conditions. The second step is to use the intermediate parameters to calculate the radiance. After linearly interpolating these parameters in the lookup table, the mechanics of the forward model for \(i=1,\dots,n\) are given by:

\[f_{i}(\mathbf{x})=\frac{\phi_{0}}{\pi}e_{0,i}\bigg{[}\rho_{a,i}(\mathbf{x}_{\text{atm}})+\frac{t_{i}(\mathbf{x}_{\text{atm}})\cdot x_{\text{refl},i}}{1-s_{i}(\mathbf{x}_{\text{atm}})\cdot x_{\text{refl},i}}\bigg{]}, \tag{4}\]

where \(\phi_{0}\) is the cosine of the solar zenith angle and \(\mathbf{e}_{0}\) is the solar downward irradiance at the top of the atmosphere [17] (a small numerical sketch of (4) is given below).

### Noise model

We model the observation uncertainty matrix by combining covariance matrices from different independent error sources as in [18]. First, we determine the intrinsic sensor noise, which includes uncertainty due to discrete photon counts, electronic uncertainty in the analog-to-digital conversion, and thermal noise from the instrument itself. These sources are all independent and uncorrelated in each channel, leading to a diagonal covariance structure. The photon noise contribution depends on the magnitude of the radiance itself, so we use the measurement itself to predict its own noise level. Any error induced by this circularity is acceptable since noise is small relative to the total magnitude, with a signal-to-noise ratio of 500 or 1000 for typical spectra. In addition to the instrument noise, we also model several systematic error sources following [1]. We use a diagonal matrix to represent a 1% uncertainty in calibration. Another diagonal matrix represents systematic radiative transfer model errors due to spectral calibration uncertainty and the intrinsic uncertainty in the unretrieved components of the atmospheric model. The covariance matrices from these independent error sources combine additively to form a final observation error matrix \(\Gamma_{\text{obs}}\).

### Retrieval

The prior, forward, and noise models are used to solve the inversion problem of retrieving the surface and atmospheric parameters given the observations.
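To make (4) concrete before turning to inference, here is a minimal sketch of the conditional forward model (our own illustration; the MODTRAN-derived terms \(\boldsymbol{\rho}_{a}\), \(\mathbf{s}\), \(\mathbf{t}\) and the irradiance \(\mathbf{e}_{0}\) are filled with placeholder constants rather than lookup-table output).

```python
import numpy as np

def forward(x_refl, rho_a, t, s, e0, phi0):
    """Channel-wise radiance model of Eq. (4):
    f_i = (phi0 / pi) * e0_i * (rho_a_i + t_i * r_i / (1 - s_i * r_i))."""
    return (phi0 / np.pi) * e0 * (rho_a + t * x_refl / (1.0 - s * x_refl))

n = 432
rng = np.random.default_rng(0)
# Placeholder atmospheric terms; in the real pipeline these are interpolated
# from the MODTRAN lookup table at the current atmospheric state x_atm.
rho_a = np.full(n, 0.05)            # path reflectance
t = np.full(n, 0.80)                # atmospheric transmission
s = np.full(n, 0.10)                # spherical albedo
e0 = np.full(n, 1.5)                # top-of-atmosphere solar irradiance
phi0 = np.cos(np.deg2rad(30.0))     # cosine of the solar zenith angle

x_refl = rng.uniform(0.0, 0.6, size=n)         # a synthetic reflectance spectrum
y = forward(x_refl, rho_a, t, s, e0, phi0)     # noiseless radiance
y_obs = y + rng.normal(0.0, 1e-3, size=n)      # y = f(x) + eps
```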
From a Bayesian perspective, the retrieval is equivalent to characterizing the posterior, which is given by:

\[\pi(\mathbf{x}|\mathbf{y})\propto\exp\bigg{(}-\frac{1}{2}\big{\|}\mathbf{x}-\boldsymbol{\mu_{\text{pr}}}\big{\|}_{\Gamma_{\text{pr}}}^{2}-\frac{1}{2}\big{\|}\mathbf{y}-f(\mathbf{x})\big{\|}_{\Gamma_{\text{obs}}}^{2}\bigg{)} \tag{5}\]

Methods for inference are described in Section 3. An example of the retrieval process from a pixel in the satellite image to the surface reflectance is shown in Figure 1. The gaps in the reflectance represent the wavelengths for which most of the radiation is absorbed by water vapour in the atmosphere, rendering the retrieval meaningless in those regions. Ignoring these wavelengths, there are \(324\) reflectance parameters of interest.

## 3 Estimation and inference formulations

The existing state-of-the-art Bayesian method used in VSWIR remote sensing problems is optimal estimation (OE) [1], which is a computationally efficient way to obtain estimates of the surface and atmospheric parameters. In this work we place the emphasis on the uncertainty quantification of the retrieval of these parameters. We consider a sampling-based Markov chain Monte Carlo (MCMC) approach to characterize the full posterior distribution.

### Optimal estimation

Given a Gaussian prior and the forward and noise models, the OE method solves an optimization problem to estimate the surface and atmospheric parameters. The negative log posterior is used as the objective function, and the resulting parameter estimate is denoted as \(\mathbf{x}_{\text{MAP}}=\arg\min_{\mathbf{x}}c(\mathbf{x})\), where:

\[\begin{split} c(\mathbf{x})=\frac{1}{2}(\mathbf{x}-\boldsymbol{\mu_{\text{pr}}})^{\top}\Gamma_{\text{pr}}^{-1}(\mathbf{x}-\boldsymbol{\mu_{\text{pr}}})\\ +\frac{1}{2}(\mathbf{y}-f(\mathbf{x}))^{\top}\Gamma_{\text{obs}}^{-1}(\mathbf{y}-f(\mathbf{x})).\end{split} \tag{6}\]

Figure 1: Sample retrieval over a grassy field from radiance to reflectance.

The covariance estimate is a Laplace approximation [19] derived from linear Bayesian inversion theory [20] using the local linearization of the forward model at the MAP estimate [21]:

\[\Gamma_{\mathrm{L}}=\left(\nabla f(\mathbf{x}_{\text{MAP}})^{\top}\Gamma_{\text{obs}}^{-1}\nabla f(\mathbf{x}_{\text{MAP}})+\Gamma_{\text{pr}}^{-1}\right)^{-1} \tag{7}\]

Since the OE posterior is characterized using a local Gaussian approximation, we sometimes refer to OE as the _approximate Bayesian_ method to contrast with the _fully Bayesian_ MCMC approach.

### Fully Bayesian approach

Although the Laplace approximation in optimal estimation would be accurate if the posterior were approximately Gaussian, this is not the case in general. Since the forward model is nonlinear, the posterior shape cannot be determined a priori, making it impossible to determine whether a normal approximation is sufficiently accurate. Therefore, a method of characterizing the full posterior distribution is needed to obtain an accurate measure of uncertainty associated with the retrieval. Markov chain Monte Carlo (MCMC) [22] is a probabilistic sampling method that addresses the issue of characterizing the posterior, but is computationally intractable in the high-dimensional VSWIR retrieval problem. Methods for dimension reduction such as [10, 23, 24] designed for Bayesian inverse problems can limit the sampling to a low-dimensional subspace.
However, these methods perform _approximate_ inference because they truncate information deemed less important based on the eigenvalues. This paper presents a technique for _exact_ inference specifically for the VSWIR retrieval that takes advantage of the conditional dependence structure of the surface and atmospheric parameters in the state vector. The next subsection outlines the structure that we then use to develop the sampling methodology in Section 4.

### Linear approximations to the forward model

The structure in the forward model arises from \(\mathbf{x}_{\text{refl}}\) and \(\mathbf{x}_{\text{atm}}\) being treated as independent in (4). When the atmospheric parameters are held fixed, the forward model is approximately linear. That is, for \(\mathbf{x}_{\text{atm}}\) held constant, we can define a submodel conditioned on the atmospheric parameters:

\[f_{\text{refl}}(\mathbf{x}_{\text{refl}})\approx\mathbf{A}\mathbf{x}_{\text{refl}}+\mathbf{b}, \tag{8}\]

where \(\mathbf{A}\), a diagonal matrix with \(A_{ii}=\frac{\phi_{0}}{\pi}e_{0,i}t_{i}\), and \(b_{i}=\frac{\phi_{0}}{\pi}e_{0,i}\rho_{a,i}\), \(i=1\ldots n\), are deterministic constants obtained from the MODTRAN intermediate parameters. The denominator of the second term in (4) is approximately equal to one. The approximately linear structure in the surface parameters can be exploited to accelerate the sampling process.

## 4 Sampling methodology

A computationally tractable fully Bayesian algorithm was developed to obtain samples from the posterior distribution of the surface and atmospheric parameters. The algorithm, based on a block Metropolis MCMC algorithm [22, 25], generates alternating samples of the reflectance and atmospheric parameter blocks. Contrary to algorithms involving dimension reduction, this algorithm performs exact inference, meaning that it converges to the true posterior distribution in the limit of infinite samples. The algorithm is described in this section, including the overall structure and the parameter tuning process.

### Exploiting structure in the forward model

The forward model is known to be approximately linear in the reflectances conditioned on fixed atmospheric parameters. Since the objective is to develop a fully Bayesian algorithm, the linear model described in Section 3.3 is not used explicitly in the inversion. It is instead used to provide structure to the sampling algorithm. The motivation behind the block Metropolis algorithm is to restrict the "difficult" sampling to the atmospheric parameters. After obtaining a sample from the atmospheric block, the sampling within the surface block can converge much faster thanks to the approximate conditional linearity given a fixed atmosphere. Without this structure, the algorithm would have to blindly explore the \(n\)-dimensional parameter space, which is computationally infeasible in practice.

The sampling procedure is as follows. The chain is first initialized at the MAP estimate obtained using optimal estimation. Each subsequent sample is split into the atmospheric and reflectance blocks, each with a proposal and acceptance step. The proposal is a sample from the normal distribution centered at the previous sample with some proposal covariance; this proposal covariance is discussed in the next subsection. The proposal is then accepted or rejected with an acceptance probability computed from the posterior density. The new samples from both blocks are then concatenated and added to the chain.
The acceptance step ensures asymptotic convergence of the chain. The full algorithm is outlined in Section 4.3.

### Choice of proposal covariance

Different approaches were taken for the two blocks, since the structure is known for the reflectances conditioned on the atmospheric parameters but not vice versa. For the atmospheric block, the proposal covariance follows the update procedure of the Adaptive Metropolis algorithm [26], in which the proposal attempts to adapt to the shape of the posterior based on the previous samples to explore the parameter space more efficiently. This adaptive scheme is given as:

\[\Gamma_{\text{atm}}^{(i)}=\begin{cases}\epsilon_{0}\;I_{2}&i\leq 1000\\ s_{2}\,\text{cov}\big{(}\mathbf{x}_{\text{atm}}^{(0)},\ldots,\mathbf{x}_{\text{atm}}^{(i-1)}\big{)}+s_{2}\,\epsilon_{\text{AM}}\,I_{2}&i>1000\end{cases}, \tag{9}\]

where \(s_{2}=\frac{2.38^{2}}{2}\) and \(\epsilon_{0}=\epsilon_{\text{AM}}=10^{-3}\). Two methods of obtaining the proposal covariance for the reflectance block were compared. Both involve computing some approximation of the posterior covariance of the reflectance parameters.

1) **Linear inversion theory.** Modelling the forward submodel as linear, i.e. making (8) an equality, closed-form expressions of the posterior covariance can be derived from linear Bayesian inversion theory [20]. For the linear model in (8) and using the same prior and noise model, the posterior covariance of the reflectances can be expressed as:

\[\Gamma_{\text{lin}}^{(i)}=\big{(}\mathbf{I}-\Gamma_{\text{pr}}\,\mathbf{A}^{(i)\top}\,\big{(}\Gamma_{y}^{(i)}\big{)}^{-1}\,\mathbf{A}^{(i)}\big{)}\,\Gamma_{\text{pr}}, \tag{10}\]

where \(\Gamma_{y}^{(i)}=\mathbf{A}^{(i)}\,\Gamma_{\text{pr}}\,\mathbf{A}^{(i)\top}+\Gamma_{\text{obs}}\) is the marginal covariance of the data. A scaled version of this posterior approximation, \(\epsilon_{1}\Gamma_{\text{lin}}\), where \(\epsilon_{1}<1\), is used as the proposal covariance.

2) **Laplace approximation.** Another method is to directly use the Laplace approximation, \(\Gamma_{\text{L}}\), obtained from OE [19, 21]. This can be done as a preprocessing step to avoid solving an inversion problem at every iteration of the chain. The proposal covariance is the scaled Laplace approximation, \(\epsilon_{2}\Gamma_{\text{L}}\), for some \(\epsilon_{2}<1\).

The block Metropolis algorithm was implemented to compare the quality of mixing in the chain for each method of obtaining the proposal covariance. Two million samples were generated for each chain, which were then thinned by taking every 10 samples for a total of \(2\times 10^{5}\) samples. The mixing characteristics were compared by analyzing trace plots and the effective sample sizes. Both scaling parameters \(\epsilon_{1}\) and \(\epsilon_{2}\) were tuned to achieve a near-optimal acceptance rate of approximately 23% [27]. The subsequent results use the scaling parameters \(\epsilon_{1}=0.14\) and \(\epsilon_{2}=0.11\). The samples from the reflectance block affect the acceptance in the atmospheric block due to the alternating nature of the sampling process, so we must evaluate the effect of this proposal covariance on the parameters in both blocks. The trace plots for the atmospheric parameters are shown in Figure 2. While the H\({}_{2}\)O parameter trace plots are similar across the two methods, the AOD plots show that the chain explores the parameter space more efficiently when using the Laplace approximation.
In the proposal from linear inversion theory, the chain requires more samples to move toward a different region of the space. Since MCMC produces dependent samples, another useful metric is the autocorrelation time \(\tau_{a}\), which is the number of iterations it takes for the samples to become effectively independent. The effective sample size (ESS) can then be defined as:

\[\text{ESS}=\frac{N}{\tau_{a}}, \tag{11}\]

where \(N\) was taken as \(1.8\times 10^{5}\) after removing the first \(2\times 10^{4}\) samples in the chain as burn-in. The ESS was computed for each of the \(n+2\) parameters and summarized in Table 1. The median value is around 1000, indicating that one effectively independent sample is generated every 180 samples. The proposal covariance obtained from the Laplace approximation generates greater sample sizes throughout, suggesting better mixing and greater sampling efficiency. This method performs particularly well for the atmospheric AOD parameter, where the ESS is roughly a factor of 4 greater than with the linear inversion method. For the reflectances, it yields much better performance for wavelengths below 1000 nm, as shown in Figure 3. While the ESS from the Laplace method remains fairly consistent throughout all wavelengths, the ESS in the low wavelength regions drops significantly for the linear inversion method. Another advantage of the Laplace method is that the Laplace approximation is constant and only needs to be computed once as a preprocessing step, whereas the linear inversion method requires the computation of (10) between the two blocks of the algorithm for each sample. Although this does not make a noticeable difference in computational time, the precomputed proposal simplifies the algorithm. In the final implementation of the algorithm, the proposal covariance for the reflectance block is equal to \(\epsilon_{2}\Gamma_{\text{L}}\), where the scaling parameter was tuned to be \(\epsilon_{2}=0.11\).

### Block Metropolis Algorithm

Algorithm 1 outlines the final algorithm, which takes advantage of the forward model structure in which \(f_{\text{refl}}(\cdot)\) is approximately linear. By using this property along with the scaled Laplace approximation as the proposal covariance, we are able to obtain samples that efficiently explore the parameter space of both the surface and atmospheric parameters. Note that in the atmospheric block, \(\mathbf{z}_{\text{atm}}\) is drawn from a truncated normal distribution with a lower bound of zero.
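For concreteness, here is a minimal Python rendering of the sampler, mirroring Algorithm 1 below. This is our own sketch, not the authors' implementation: it assumes a function `log_post(x_refl, x_atm)` evaluating the unnormalized log posterior (5) and a precomputed Laplace covariance `Gamma_L`; the adaptive atmospheric covariance follows (9), and the truncation of the atmospheric proposal is handled crudely by resampling.

```python
import numpy as np

def block_metropolis(log_post, x0_refl, x0_atm, Gamma_L, N, eps2=0.11, seed=0):
    """Sketch of the block Metropolis sampler (Algorithm 1)."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(eps2 * Gamma_L)     # factor of the reflectance proposal
    x_refl, x_atm = x0_refl.copy(), x0_atm.copy()
    chain, atm_hist = [], [x_atm.copy()]
    s2, eps0, eps_am = 2.38**2 / 2, 1e-3, 1e-3
    for i in range(N):
        # Adaptive atmospheric proposal covariance, Eq. (9).
        if i <= 1000:
            G_atm = eps0 * np.eye(2)
        else:
            G_atm = s2 * np.cov(np.array(atm_hist).T) + s2 * eps_am * np.eye(2)
        # Atmospheric block: truncated normal proposal with lower bound 0
        # (resampling until nonnegative; the truncation correction to the
        # acceptance ratio is omitted for brevity).
        z = rng.multivariate_normal(x_atm, G_atm)
        while np.any(z < 0):
            z = rng.multivariate_normal(x_atm, G_atm)
        if np.log(rng.uniform()) < log_post(x_refl, z) - log_post(x_refl, x_atm):
            x_atm = z
        atm_hist.append(x_atm.copy())
        # Reflectance block: proposal with scaled Laplace covariance eps2 * Gamma_L.
        z = x_refl + L @ rng.standard_normal(x_refl.size)
        if np.log(rng.uniform()) < log_post(z, x_atm) - log_post(x_refl, x_atm):
            x_refl = z
        chain.append(np.concatenate([x_refl, x_atm]))
    return np.array(chain)
```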
```
1: Initialize \(\mathbf{x}^{(0)}=\mathbf{x}_{\text{MAP}}\)
2: for \(i=1\dots N\) do
3:   Sample \(\mathbf{x}_{\text{atm}}^{(i)}\):
4:     Proposal \(\mathbf{z}_{\text{atm}}\sim\mathcal{N}\big{(}\mathbf{x}_{\text{atm}}^{(i-1)},\ \Gamma_{\text{atm}}^{(i)}\big{)}\) such that \(\mathbf{z}_{\text{atm}}\geq 0\)
5:     Metropolis accept/reject for \(\big{[}\mathbf{x}_{\text{refl}}^{(i-1)},\mathbf{z}_{\text{atm}}\big{]}\)
6:   Sample \(\mathbf{x}_{\text{refl}}^{(i)}\):
7:     Proposal \(\mathbf{z}_{\text{refl}}\sim\mathcal{N}\big{(}\mathbf{x}_{\text{refl}}^{(i-1)},\ \epsilon_{2}\,\Gamma_{\text{L}}\big{)}\)
8:     Metropolis accept/reject for \(\big{[}\mathbf{z}_{\text{refl}},\mathbf{x}_{\text{atm}}^{(i)}\big{]}\)
9:   Compute \(\Gamma_{\text{atm}}^{(i+1)}\)
10: end for
```
**Algorithm 1** Block Metropolis

| | Proposal from linear inversion | Proposal from Laplace |
| --- | --- | --- |
| Ref Min | 108 | 120 |
| Ref Med | 1278 | 1375 |
| Ref Max | 3294 | 4399 |
| AOD | 166 | 633 |
| H\({}_{2}\)O | 527 | 784 |

Table 1: Effective sample sizes for MCMC on Building 177

Figure 2: Trace plots of the atmospheric parameters using the two methods of obtaining proposal covariance for the reflectance block.

## 5 Results

The results focus on comparing the posterior distribution characterized by the fully Bayesian MCMC method with the posterior approximated by optimal estimation. The MCMC algorithm was executed for four radiance datasets. The datasets, collected over the JPL campus using the airborne AVIRIS-NG instrument [28, 29], attempt to include a variety of terrain types and are named Building 177, Building 306, Mars Yard, and Parking Lot. The radiance spectra are shown in Figure 4. Two million samples were obtained for each chain, with the first \(2\times 10^{5}\) discarded as burn-in since the chain stabilizes by then for most cases, as seen in the trace plots in Figure 2. The chain was thinned by taking every tenth sample to reduce storage. The overall acceptance rate for all cases ranged from 0.2 to 0.3. In this section, we compare the surface posterior obtained from both methods using several metrics, followed by the posterior on the atmospheric parameters. Then, we evaluate the Gaussianity of the full posterior distribution.

Figure 3: Effective sample sizes of the reflectance chains.

Figure 4: Radiance measurements for test cases collected over JPL campus.

### Surface posterior comparison

We first compare the mean and covariance of the posterior distribution. Figure 5 plots the posterior mean reflectance for the four test cases and the MAP estimate from optimal estimation. Figure 6 plots the relative difference of the two methods normalized over the values from the MCMC method. In the first three cases, the greatest differences occur in the low wavelength regions and all peak around 0.02.
Figure 7 compares the marginal variances of the reflectance posterior obtained with each method. For the Parking Lot case, the posterior marginal variance is slightly higher than the Laplace approximation except for the regions around 380 nm and 550 nm. In addition to the marginal variances, it is also necessary to look into the differences in cross-correlations. We compare the MCMC and OE covariances (\(\Gamma_{\text{M}}\) and \(\Gamma_{\text{L}}\)) using three metrics involving the trace, the Frobenius norm, and the Förstner distance, which are defined below.

The first metric is the relative difference in trace normalized by the trace of the MCMC posterior covariance:

\[d_{\text{tr}}=\bigg{|}\frac{\text{tr}(\Gamma_{\text{M}})-\text{tr}(\Gamma_{\text{L}})}{\text{tr}(\Gamma_{\text{M}})}\bigg{|}. \tag{12}\]

The second metric is the relative difference in Frobenius norm normalized by the Frobenius norm of the MCMC posterior covariance:

\[d_{\text{norm}}=\frac{\|\Gamma_{\text{M}}-\Gamma_{\text{L}}\|_{F}}{\|\Gamma_{\text{M}}\|_{F}}. \tag{13}\]

The third metric involves the Förstner distance, a metric that measures the distance between two symmetric positive definite (SPD) matrices [30]. The Förstner distance between two SPD matrices \(\Gamma_{A}\) and \(\Gamma_{B}\) is defined by:

\[d_{f}=\sqrt{\sum\ln^{2}(\sigma_{i})}, \tag{14}\]

where \(\sigma_{i}\) are the generalized eigenvalues of the eigenpencil \((\Gamma_{A},\Gamma_{B})\). The relative difference defined using this metric is normalized by the distance between the MCMC covariance and the prior covariance:

\[d_{F}=\frac{d_{f}\big{(}\Gamma_{\text{M}},\Gamma_{\text{L}}\big{)}}{d_{f}\big{(}\Gamma_{\text{M}},\Gamma_{\text{pr}}\big{)}}. \tag{15}\]

The results of these comparisons are shown in Table 2 for the four test cases. The importance of using multiple metrics is highlighted in the Parking Lot case, where the trace and Frobenius norm indicate a lower difference compared to the other three cases, but the Förstner distance is higher than in the other cases. The relative differences between the MCMC and Laplace approximation covariances are greater than 0.3 in all but three of the 12 values. This numerical comparison establishes that there is a significant deviation in the covariances obtained using the approximate Bayesian and fully Bayesian algorithms.

Figure 5: Posterior mean and MAP estimates for reflectances.

Figure 6: Relative difference between posterior mean and MAP estimates.

Figure 7: Marginal variance in reflectances predicted by MCMC and OE methods.

| | Trace | Frobenius Norm | Förstner Distance |
| --- | --- | --- | --- |
| Building 177 | 0.331 | 1.820 | 0.319 |
| Building 306 | 0.509 | 2.601 | 0.282 |
| Mars Yard | 0.289 | 0.758 | 0.356 |
| Parking Lot | 0.063 | 0.929 | 0.539 |

Table 2: Relative difference between \(\Gamma_{\text{M}}\) and \(\Gamma_{\text{L}}\)

### Eigenanalysis of the surface posterior

Expanding on the eigenvalue problem used in the Förstner distance metric, we explore the interpretability of eigenproblems to reveal structure in the difference between the two covariance matrices. We compare them with respect to the eigendirections of one of the matrices. Specifically, we focus on the following eigenvalue problem involving the sample covariance of the Building 177 posterior:

\[\Gamma_{\text{M}}v_{\text{M}}=\lambda_{\text{M}}v_{\text{M}}, \tag{16}\]

where \(\lambda_{\text{M}}\) and \(v_{\text{M}}\) are the eigenvalues and eigenvectors of \(\Gamma_{\text{M}}\).

Figure 8: Eigenvalues of the MCMC covariance matrix.
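Before examining the eigendirections, here is a small sketch of the three covariance-comparison metrics (12)-(15) (our own illustration with synthetic SPD matrices; `scipy.linalg.eigh` solves the generalized symmetric-definite eigenproblem behind the Förstner distance, of which (16) is the special case with the identity on the right-hand side).

```python
import numpy as np
from scipy.linalg import eigh

def forstner(A, B):
    """Förstner distance (14): sqrt of the sum of squared logs of the
    generalized eigenvalues of the pencil (A, B)."""
    sigma = eigh(A, B, eigvals_only=True)
    return np.sqrt(np.sum(np.log(sigma) ** 2))

def compare_covariances(G_mcmc, G_laplace, G_prior):
    d_tr = abs((np.trace(G_mcmc) - np.trace(G_laplace)) / np.trace(G_mcmc))  # (12)
    d_norm = np.linalg.norm(G_mcmc - G_laplace) / np.linalg.norm(G_mcmc)     # (13)
    d_F = forstner(G_mcmc, G_laplace) / forstner(G_mcmc, G_prior)            # (15)
    return d_tr, d_norm, d_F

# Synthetic check with random SPD matrices.
rng = np.random.default_rng(0)
def rand_spd(n):
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)

print(compare_covariances(rand_spd(6), rand_spd(6), rand_spd(6)))
```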
### Eigenanalysis of the surface posterior

Expanding on the eigenvalue problem used in the Förstner distance metric, we use eigenproblems to reveal structure in the difference between the two covariance matrices, comparing them with respect to the eigendirections of one of the matrices. Specifically, we focus on the following eigenvalue problem involving the sample covariance of the Building 177 posterior:

\[\Gamma_{\text{M}}v_{\text{M}}=\lambda_{\text{M}}v_{\text{M}}, \tag{16}\]

where \(\lambda_{\text{M}}\) and \(v_{\text{M}}\) are the eigenvalues and eigenvectors of \(\Gamma_{\text{M}}\). The eigenvalue spectrum is shown in Figure 8. The variance of \(\Gamma_{\text{L}}\) in the direction \(v_{\text{M},i}\) can be expressed as \(v_{\text{M},i}^{\top}\Gamma_{\text{L}}v_{\text{M},i}\). This directional variance can be normalized by the corresponding eigenvalue as follows:

\[\sigma_{\text{M},i}^{\text{L}}=\frac{v_{\text{M},i}^{\top}\Gamma_{\text{L}}v_{\text{M},i}}{v_{\text{M},i}^{\top}\Gamma_{\text{M}}v_{\text{M},i}}=\frac{v_{\text{M},i}^{\top}\Gamma_{\text{L}}v_{\text{M},i}}{\lambda_{\text{M},i}}. \tag{17}\]

Figure 8: Eigenvalues of the MCMC covariance matrix.

Figure 9: Ratio of Laplace approximation variance and MCMC variance in the eigendirections of the MCMC covariance.

Figure 9 plots this quotient, ranked from highest to lowest eigenvalue \(\lambda_{\text{M},i}\). A value greater than 1 can be interpreted as the Laplace approximation having greater variance in the \(v_{\text{M},i}\) direction, and vice versa. Consistent with Figure 7, the Laplace approximation overestimates the MCMC variance in most directions. The overall pattern is that the variances are roughly the same for most of the more important eigendirections, but the MCMC variance becomes smaller than the Laplace approximation variance in the less important directions. Two interesting points for further analysis are the outliers near the leading eigendirections in Figure 9; the corresponding eigenvectors are shown in red. Comparing their shape to the posterior variance plot in Figure 7, the first eigenvector (top outlier) resembles the main feature in the lower wavelengths. The Laplace approximation predicts a variance three times higher in this direction. In Section 5.4 we show how this may be related to the non-Gaussianity in the low-wavelength region, which makes the Laplace approximation less accurate. The fifth eigenvector (bottom outlier) describes some of the noisy features, particularly the spike near 2500 nm, and the MCMC result predicts a variance around 70% higher than the Laplace approximation in this direction.

### Atmospheric posterior comparison

While the reflectances are the quantities of interest ultimately used in subsequent analysis of the Earth surface, their behaviour is conditioned on the atmospheric parameters. Figure 10 is a 2D marginal density plot of the posterior for the two atmospheric parameters. The MAP estimate from optimal estimation is plotted in red, along with an ellipse representing one standard deviation obtained using the Laplace approximation.

Figure 10: 2D marginal density plot of the atmospheric posterior distribution for Building 177.

There are two visible improvements from characterizing the posterior distribution using MCMC. First, optimal estimation has no way of ensuring positivity of the parameters; the probabilistic interpretation is that the probability of obtaining a negative AOD parameter is almost 0.5, for example. The MCMC implementation constrains the samples to be positive and therefore leads to results that are more representative of the physical quantities. The second improvement is that MCMC sampling reveals a non-elliptical shape to the posterior, suggesting that it is not Gaussian. The Gaussianity of the posterior for both surface and atmospheric parameters is further explored in the next subsection.
### Evaluating Gaussianity

The motivation for turning to a fully Bayesian approach is that the posterior is non-Gaussian in general. Here, we first demonstrate the non-Gaussianity of the posterior distribution qualitatively using normal Q-Q plots, and then quantitatively using hypothesis testing for individual parameters in one dimension.

Figure 11 shows the Q-Q plots for the two atmospheric parameters across all four cases. The red line is the reference for a truncated normal distribution and the MCMC samples are plotted in blue. The truncated normal is used here since the samples for the atmospheric block were constrained to positive values in the algorithm. While the H\({}_{2}\)O parameters closely follow the truncated normal, the right tails of the AOD plots deviate from the red line, especially for the Building 177 and Parking Lot cases. Qualitatively, these two cases look the least Gaussian. Figure 12 shows the Q-Q plots for select reflectance parameters across the spectrum for the Building 177 case. Although the MCMC samples closely follow a normal distribution, the two plots for 596 nm and 746 nm have tails that deviate from the reference normal.

Next, we present a more comprehensive analysis of the reflectances using a hypothesis testing approach. Treating the reflectances individually, we use the Kolmogorov-Smirnov test [31] on the empirical marginal distribution of the MCMC samples, with the null hypothesis being that the reflectances are normally distributed. The \(p\)-values for each reflectance parameter are shown in Figure 13, with the red line representing \(p=0.05\). In three of the cases, the non-Gaussian behaviour observed at certain wavelengths in Figure 12 is present throughout the entire low-wavelength regime, with \(p\approx 0\). However, the extent of this regime varies across the three cases, with the low \(p\)-value region in the Mars Yard case extending to nearly 1000 nm. This is consistent with the findings in Figure 9, which showed that the largest difference between the OE and MCMC posteriors was in the direction that represents the lower-wavelength regime. The departure from Gaussianity for the reflectances in this regime may be the reason why the Laplace approximation also departs from the posterior characterized by MCMC.
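The per-wavelength normality screen described above can be reproduced in a few lines of SciPy; a minimal sketch, assuming `chain` is an array of thinned MCMC reflectance samples of shape (n_samples, n_wavelengths). Estimating the mean and standard deviation from the same samples makes the test approximate (a Lilliefors-style correction would be stricter), but it suffices to flag the grossly non-Gaussian channels.

```python
import numpy as np
from scipy import stats

def ks_normality_pvalues(chain):
    """Kolmogorov-Smirnov p-values for marginal normality of each
    reflectance channel; `chain` has shape (n_samples, n_wavelengths)."""
    pvals = []
    for samples in chain.T:
        mu, sd = samples.mean(), samples.std(ddof=1)
        pvals.append(stats.kstest(samples, "norm", args=(mu, sd)).pvalue)
    return np.array(pvals)

# Channels with p < 0.05 would be flagged as departing from Gaussianity:
# flagged = np.where(ks_normality_pvalues(chain) < 0.05)[0]
```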
## 6 Discussion

We presented a fully Bayesian MCMC algorithm for the remote sensing problem that characterizes the posterior distribution of the surface reflectances and atmospheric parameters. This posterior was used to identify and understand the limitations of optimal estimation, the current state-of-the-art approximate Bayesian approach. There are three main takeaways from the results presented in this paper.

* The fully Bayesian solution and the approximate Bayesian solution yield very different covariances. We analyzed the differences in terms of three different metrics and the eigendirections of the MCMC covariance matrix.
* The posterior distribution of the atmospheric parameters is more physically sensible than the Laplace approximation.
* Non-Gaussianity in the posterior is revealed by the fully Bayesian solution. We identified regions of the spectrum, and of the atmospheric parameters, for which the Laplace approximation cannot sufficiently represent the non-Gaussian distribution.

From the eigenanalysis, the OE posterior covariance was shown to be most different from the MCMC posterior covariance in the low-wavelength region, which is the same region that was shown to depart from Gaussianity. Any further work on non-Gaussian posterior characterizations could therefore focus on this regime of the reflectances and on the AOD atmospheric parameter. There is potential to develop a new combined method that uses MCMC or another non-Gaussian method for these parts, and OE for the rest of the parameters.

In terms of NASA's Surface Geology and Biology mission, characterizing the posterior distribution is important for subsequent analysis. The surface reflectances are ultimately used to further infer properties of the Earth surface pertaining to problems such as ecosystems and ice accumulation. Accurately quantifying the uncertainty of the surface properties is especially important for these scientific applications.

### Limitations

Although we have created a computationally tractable fully Bayesian algorithm by exploiting structure in the problem, it is not computationally feasible in an operational setting. Generating \(2\times 10^{6}\) posterior samples takes on the order of hours, whereas the approximate Bayesian approach completes one retrieval on the order of seconds. However, the two methods could be combined in the operational setting. For example, a fully Bayesian retrieval can be used as a validation step to verify and potentially correct the Laplace approximation. Or, since the Laplace approximation is sufficient for many of the parameters, the algorithm in this work can be narrowed to solve a smaller subproblem on a subset of parameters using a fully Bayesian approach while maintaining the original approximate approach for the other parameters.

Figure 11: Q-Q plots of atmospheric parameters across all four cases. The red line is a reference indicating the truncated normal distribution.

The performance of the MCMC algorithm is contingent on the prior, noise model, and forward model. Since the noise model dictates the likelihood distribution, the posterior would change along with changes in the prior distribution and noise model. The forward model is subject to modelling error, which could affect the interactions between the atmospheric and surface parameters. Addressing this would be one of the main improvements to the current algorithm when considering operational use, since ultimately we are interested in matching the ground-truth reflectances.

### Future work

The retrievals performed in this work are pixel-by-pixel, meaning that for each radiance vector there is one corresponding state vector to be inferred. The main efforts for future work are focused on extending fully Bayesian algorithms to spatial and temporal fields. Including spatial and temporal correlations can increase retrieval accuracy and reduce the number of retrievals required, which would reduce computational time in an operational setting.

## 7 Conclusion

In this work, we developed a computationally tractable fully Bayesian retrieval method for the high-dimensional VSWIR retrieval problem. Taking advantage of the structure in the forward radiative transfer model, we implemented a block Metropolis MCMC algorithm that alternates samples between the atmospheric and surface parameter blocks. Unlike other algorithms that use dimension reduction, the block Metropolis algorithm is asymptotically exact. We compared this fully Bayesian algorithm to the current state-of-the-art optimal estimation method in several ways to identify limitations of the approximate Bayesian approach. For the surface parameters, both methods had a similar posterior mean, but the posterior variance was shown to be significantly different. The eigendirections in which they differ were analyzed and interpreted.
MCMC revealed non-Gaussianity in the aerosol parameter and in the low-wavelength regime of the surface reflectances, determined through Kolmogorov-Smirnov hypothesis tests. This work is the first step in combining the block Metropolis algorithm with optimal estimation to allow non-Gaussian retrievals to be performed in an operational setting.

Figure 12: Q-Q plots of select reflectance parameters for the Building 177 case.

## Acknowledgments

A portion of this research was performed at the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. This research was funded in part by a Jet Propulsion Laboratory Strategic University Partnership grant to MIT. Kelvin Leung and Youssef Marzouk also acknowledge support from the Office of Naval Research, SIMDA (Sea Ice Modeling and Data Assimilation) MURI, award number N00014-20-1-2595.
2305.08138
Traceable mixnets
We introduce the notion of \emph{traceable mixnets}. In a traditional mixnet, multiple mix-servers jointly permute and decrypt a list of ciphertexts to produce a list of plaintexts, along with a proof of correctness, such that the association between individual ciphertexts and plaintexts remains completely hidden. However, in many applications, the privacy-utility tradeoff requires answering some specific queries about this association, without revealing any information beyond the query result. We consider queries of the following types: a) given a ciphertext in the mixnet input list, whether it encrypts one of a given subset of plaintexts in the output list, and b) given a plaintext in the mixnet output list, whether it is a decryption of one of a given subset of ciphertexts in the input list. Traceable mixnets allow the mix-servers to jointly prove answers to the above queries to a querier such that neither the querier nor a threshold number of mix-servers learn any information beyond the query result. Further, if the querier is not corrupted, the corrupted mix-servers do not even learn the query result. We first comprehensively formalise these security properties of traceable mixnets and then propose a construction of traceable mixnets using novel distributed zero-knowledge proofs (ZKPs) of set membership and of a statement we call reverse set membership. Although set membership has been studied in the single-prover setting, the main challenge in our distributed setting lies in making sure that none of the mix-servers learn the association between ciphertexts and plaintexts during the proof. We implement our distributed ZKPs and show that they are faster than state-of-the-art by at least one order of magnitude.
Prashant Agrawal, Abhinav Nakarmi, Mahavir Prasad Jhawar, Subodh Sharma, Subhashis Banerjee
2023-05-14T12:18:59Z
http://arxiv.org/abs/2305.08138v3
# Traceable mixnets

###### Abstract

We introduce the notion of _traceable mixnets_. In a traditional mixnet, multiple mix-servers jointly permute and decrypt a list of ciphertexts to produce a list of plaintexts, along with a proof of correctness, such that the association between individual ciphertexts and plaintexts remains completely hidden. However, in many applications, the privacy-utility tradeoff requires answering some specific queries about this association, without revealing any information beyond the query result. We consider queries of the following types: \(a)\) given a ciphertext in the mixnet input list, whether it encrypts one of a given subset of plaintexts in the output list, and \(b)\) given a plaintext in the mixnet output list, whether it is a decryption of one of a given subset of ciphertexts in the input list. Traceable mixnets allow the mix-servers to jointly prove answers to the above queries to a querier such that neither the querier nor a threshold number of mix-servers learn any information beyond the query result. If the querier is not corrupted, the corrupted mix-servers do not even learn the query result. We propose a construction of a traceable mixnet using novel distributed zero-knowledge proofs of _set membership_ and a related primitive we introduce called _reverse set membership_. Although the set membership problem has been studied in the single-prover setting, the main challenge in our distributed setting lies in making sure that none of the mix-servers learn the association between ciphertexts and plaintexts during the proof. Our construction is faster than existing techniques by at least one order of magnitude.

Keywords: verifiable mixnets; traceability; distributed zero-knowledge proofs; set membership; reverse set membership

## 1 Introduction

A mixnet is a cryptographic primitive used for anonymous messaging. At a high level, it takes as input a list of ciphertexts, each encrypting some sensitive personal data. The mixnet, which consists of a series of _mix-servers_, processes these ciphertexts and outputs a list of decryptions of the ciphertexts in a randomly permuted order [21]. The secret permutation that links the ciphertexts with their corresponding plaintexts is shared across the mix-servers so that unless a threshold number of them are compromised, this permutation remains completely hidden. Using _verifiable mixnets_ [44], it is also possible to publicly _prove_ to a verifier that the output list is obtained correctly by permuting and decrypting each element of the input list, such that the linkages between the ciphertexts and the plaintexts remain completely hidden.

In general, a mixnet can be used to hide correlations between different attributes of personal data while maintaining their integrity. Assume that the \(i^{\text{th}}\) individual's data is split into two sets of attributes \((\mathbf{u}_{i},\mathbf{v}_{i})\), where \(\mathbf{u}_{i}\) and \(\mathbf{v}_{i}\) cannot be made public together for privacy reasons. For example, \(\mathbf{u}_{i}\) may contain - or may be easily linked to - a personal id, an ethnic attribute or demographic/geographic information, and \(\mathbf{v}_{i}\) may contain a sensitive outcome like a vote in an election or an infectious disease test report.
The correlation between \(\mathbf{u}_{i}\) and \(\mathbf{v}_{i}\) can be hidden by encrypting the attributes \(\mathbf{v}_{i}\) to obtain ciphertexts \(\mathbf{c}_{i}\), uploading \(\mathbf{u}_{i}\) along with \(\mathbf{c}_{i}\) to an input list, and feeding the list of ciphertexts \(\mathbf{c}\) as the mixnet input. The mixnet output \(\mathbf{v}^{\prime}\) is thus a list containing the sensitive outcomes \(\mathbf{v}_{i}\)s in a randomly permuted order, completely hiding which \(\mathbf{u}_{i}\) or \(\mathbf{c}_{i}\) correspond to which \(\mathbf{v}_{i}\) (see Figure 1(a)). However, applications that require anonymity often also require the ability to reveal - and prove correctness of - some partial information about the linkages between \(\mathbf{u}_{i}\)s and \(\mathbf{v}_{i}\)s at levels more granular than that of entire sets.

For example, consider electronic health records in a medical context. Although individuals' complete medical records may not be publicly or even internally disclosed, many subset queries between \(\mathbf{u}_{i}\)s and \(\mathbf{v}_{i}\)s are crucial for public health analytics, epidemiology, and resource allocation. In the forward direction, an insurance company may want to verify in zero-knowledge that an individual with a given patient identifier in \(\mathbf{u}_{i}\) does not have threatening preconditions indicated by a given subset of the \(\mathbf{v}^{\prime}\) entries. Similarly, an airline may want to verify that every passenger has a negative report for an infectious disease before allowing them to fly. In the reverse direction, a community health administration may want to understand the prevalence of a particular disease in their jurisdiction, i.e., how many test reports of that disease, indicated in a subset of the \(\mathbf{v}^{\prime}_{i}\)s, map back to the subset of the \(\mathbf{u}\) entries that correspond to their jurisdiction. Similarly, individuals deciding to register with an organ exchange organisation may want to find how many organ donors matching their medical profile (indicated in a subset of the \(\mathbf{v}^{\prime}_{i}\)s) map back to their own locality (indicated in a subset of the \(\mathbf{u}_{i}\)s).

Various such queries can be answered if, for a given ciphertext in the input list, the mixnet can show that the corresponding plaintext belongs to a given subset of entries in the output list (as in Figure 1(b)), or, for a given plaintext in the output list, that the corresponding ciphertext belongs to a given subset of entries in the input list (as in Figure 1(c)). Although these queries necessarily break the perfect anonymity of mixnets, _no additional information_ beyond what is requested by an admissible query should be leaked.
[Figure 1, schematic panels:]
(a) A traditional verifiable mixnet.
(b) Query whether \(\mathbf{c}_{2}\) encrypted one of \(\{\mathbf{v}_{5},\mathbf{v}_{2},\mathbf{v}_{1}\}\): a TraceIn\((\mathbf{c}_{2},\{\mathbf{v}_{5},\mathbf{v}_{2},\mathbf{v}_{1}\})\) query in a traceable mixnet.
(c) Query whether \(\mathbf{v}_{1}\) was encrypted in one of \(\{\mathbf{c}_{1},\mathbf{c}_{2},\mathbf{c}_{3}\}\): a TraceOut\((\{\mathbf{c}_{1},\mathbf{c}_{2},\mathbf{c}_{3}\},\mathbf{v}_{1})\) query in a traceable mixnet.

Fig. 1: Mixnets in privacy-preserving applications: the \(i^{\text{th}}\) individual's data is divided into attributes \((\mathbf{u}_{i},\mathbf{v}_{i})\); \(\mathbf{c}_{i}\)s encrypt \(\mathbf{v}_{i}\)s and are passed as input to the mixnet; the mixnet consists of two mix-servers \(\mathcal{M}_{1}\) and \(\mathcal{M}_{2}\) that jointly decrypt and permute the input list \(\mathbf{c}\) to output a plaintext list \(\mathbf{v}^{\prime}:=(\mathbf{v}_{\pi(i)})_{i=1}^{5}\), such that \(\pi\) is composed of secret permutations \(\pi^{(1)}\) and \(\pi^{(2)}\) of \(\mathcal{M}_{1}\) and \(\mathcal{M}_{2}\) respectively.

Of course, for purpose limitation, it is crucial to have appropriate regulatory policies in place, and to ensure that the queries allowed to an agent align with their legitimate purpose in the healthcare infrastructure. If required, section-level access control may also be applied to both the input and output lists to ensure that only authorised personnel can view the lists and query the mixnet.

### Traceable mixnets

We extend traditional mixnets to _traceable mixnets_. While a traditional mixnet is completely untraceable, a traceable mixnet enables limited traceability. Given a mixnet input list \(\mathbf{c}\) and output list \(\mathbf{v}^{\prime}\), let \(\mathbf{c}_{I}\) denote ciphertexts \(\{\mathbf{c}_{i}\mid i\in I\}\) for some index set \(I\) and \(\mathbf{v}^{\prime}_{J}\) denote plaintexts \(\{\mathbf{v}^{\prime}_{j}\mid j\in J\}\) for some index set \(J\). A traceable mixnet allows the mix-servers to jointly prove to a querier answers to queries of the following form (see Figures 1(b) and 1(c)):

* \(\mathbf{TraceIn}(\mathbf{c}_{i},\mathbf{v}^{\prime}_{J})\): Whether a ciphertext \(\mathbf{c}_{i}\) in the mixnet input list encrypted a plaintext in set \(\mathbf{v}^{\prime}_{J}\)?
* \(\mathbf{TraceOut}(\mathbf{c}_{I},\mathbf{v}^{\prime}_{j})\): Whether a plaintext \(\mathbf{v}^{\prime}_{j}\) in the mixnet output list was encrypted in a ciphertext in set \(\mathbf{c}_{I}\)?

The secrecy requirement demands that an adversary controlling the querier and less than a threshold number of mix-servers should not be able to learn any information beyond the output of the query. Also, it should not learn any information about the output of a query that the honest mix-servers refuse to answer. Finally, if the adversary does not control the querier, it should not even learn the output of the query, to prevent mix-servers from accumulating query responses issued to different queriers over time. We provide a formal threat model in Section 2.

Our secrecy requirements are more stringent than what existing proof-of-shuffle and verifiable decryption techniques [44] in the area of verifiable mixnets can provide. Given a subset of input ciphertexts and its corresponding equal-sized subset of output plaintexts, these techniques can prove in zero-knowledge that the two sets correspond to each other. However, to use these techniques, one must reveal the corresponding equal-sized set, leaking more information than the TraceIn/TraceOut output (see Figure 2).

Further, notice that the example applications we have shown actually require multiple TraceIn/TraceOut queries against a given set. Thus, let us define the following batched queries:

* \(\mathbf{BTraceIn}(\mathbf{c}_{I},\mathbf{v}^{\prime}_{J})\): Which ciphertexts in set \(\mathbf{c}_{I}\) encrypted a plaintext in set \(\mathbf{v}^{\prime}_{J}\)?
* \(\mathbf{BTraceOut}(\mathbf{c}_{I},\mathbf{v}^{\prime}_{J})\): Which plaintexts in set \(\mathbf{v}^{\prime}_{J}\) were encrypted in a ciphertext in set \(\mathbf{c}_{I}\)?

A \(\mathbf{BTraceIn}(\mathbf{c}_{I},\mathbf{v}^{\prime}_{J})\) query can be answered by repeatedly executing a \(\mathbf{TraceIn}(\mathbf{c}_{i},\mathbf{v}^{\prime}_{J})\) query for each \(i\in I\). A \(\mathbf{BTraceOut}(\mathbf{c}_{I},\mathbf{v}^{\prime}_{J})\) query can be answered by repeatedly executing a \(\mathbf{TraceOut}(\mathbf{c}_{I},\mathbf{v}^{\prime}_{j})\) query for each \(j\in J\). The BTraceIn/BTraceOut queries should be answered efficiently, in time linear or log-linear in the size of the mixnet input list, to support typical applications containing millions of entries. Note that when \(I\) and \(J\) refer to all the indices in the input/output lists, the BTraceIn and BTraceOut queries together provide the same guarantees as a traditional verifiable mixnet.

### Our contributions

1. We introduce and formalise the notion of _traceable mixnets_ and provide completeness, soundness and secrecy definitions for them (Section 2).
2. We propose a construction of traceable mixnets (Section 4) in terms of distributed zero-knowledge proofs of _set membership_ [15] and a related primitive we introduce called _reverse set membership_ (see Definition 2). Informally, a ZKP of set membership proves that a given cryptographic commitment [61] commits one of a given set of values, whereas a ZKP of reverse set membership proves that a given value _is committed by_ one of a given set of commitments. Prior work has focused on the set membership problem and the single-prover case, whereas our ZKPs work in the distributed mixnet setting where none of the provers (the mix-servers) learn any information about either the commitment openings or the permutation between the input and output lists. Our ZKPs are interactive, but this is appropriate for the inherently interactive nature of our use-case.
3. We provide a comprehensive implementation and benchmarking of our proposal (Section 6). Our construction has linear time complexity in the size of the mixnet input list for batched queries and greatly outperforms even single-prover existing techniques. Specifically, our distributed ZKPs of set membership and reverse set membership (for BTraceIn and BTraceOut queries respectively) have per-prover proving times that are respectively 43x and 9x faster than single-prover proofs using zkSNARKs and Merkle trees. By conservative estimates, this would make them at least 86x and 18x faster than the state-of-the-art collaborative zkSNARKs [59] in the distributed setting. Our implementation is available at [https://github.com/agrawalprash/traceable-mixnets](https://github.com/agrawalprash/traceable-mixnets).

Figure 2: Additional information leakage when using proof-of-shuffle and verifiable decryption techniques. Subfigures \((a)\) and \((b)\) are for a TraceIn query: the goal is to identify whether ciphertext \(\mathbf{c}_{4}\) encrypted a plaintext in the set \(\{\mathbf{v}_{2}^{\prime},\mathbf{v}_{3}^{\prime},\mathbf{v}_{4}^{\prime}\}\). With these techniques, one needs to reveal either \((a)\) plaintext \(\mathbf{v}_{3}^{\prime}(=\mathbf{v}_{4})\) corresponding to \(\mathbf{c}_{4}\), or \((b)\) set \(\{\mathbf{c}_{1},\mathbf{c}_{2},\mathbf{c}_{4}\}\) corresponding to set \(\{\mathbf{v}_{2}^{\prime},\mathbf{v}_{3}^{\prime},\mathbf{v}_{4}^{\prime}\}\). Subfigures \((c)\) and \((d)\) are for a TraceOut query: the goal is to identify whether plaintext \(\mathbf{v}_{2}^{\prime}(=\mathbf{v}_{3})\) was encrypted in one of the ciphertexts in the set \(\{\mathbf{c}_{3},\mathbf{c}_{4},\mathbf{c}_{5}\}\). For this, one needs to reveal either \((c)\) ciphertext \(\mathbf{c}_{3}\) corresponding to \(\mathbf{v}_{2}^{\prime}\), or \((d)\) set \(\{\mathbf{v}_{1}^{\prime},\mathbf{v}_{2}^{\prime},\mathbf{v}_{4}^{\prime}\}\) corresponding to set \(\{\mathbf{c}_{3},\mathbf{c}_{4},\mathbf{c}_{5}\}\). A traceable mixnet can achieve these goals without revealing any intermediate information.
### Related work

#### 1.3.1 Existing privacy enhancing technologies

A common technique for privacy-preserving statistical data disclosure is _anonymisation_: sharing noisy or partially hidden versions \((\vec{\mathbf{u}},\vec{\mathbf{v}})\) of datasets \((\mathbf{u},\mathbf{v})\). However, it is well-known that anonymisation is a poor safeguard and does not protect against subsequent re-identification of individuals [56, 27]. One can also anonymise by only releasing the output \(\mathbf{v}^{\prime}\) of a traditional mixnet, but this would not allow analytics queries against the \(\mathbf{u}\) attributes.

Another approach is to use _differentially private_ mechanisms [32] to interactively provide noisy answers to queries about attributes \((\mathbf{u},\mathbf{v})\) such that the distribution of the answers changes negligibly on changing the values of any given individual's data items. Even here, because of correlations in different individuals' data items, an adversary allowed to run arbitrary queries can infer arbitrary information about the joint distribution \((\mathbf{u}_{i},\mathbf{v}_{i})\) [68]. Hence, providing even noisy answers to arbitrary queries about \((\mathbf{u},\mathbf{v})\) is risky. Besides, this approach is only applicable to statistical queries. In comparison, the traceable mixnet approach keeps datasets \(\mathbf{u}\) and \(\mathbf{v}\) unlinkable, except when providing answers to specific pre-approved queries to specific agents. The answers are exact, so they are more useful than noisy ones when the agents satisfy the regulatory policy. Also, the distributed mixnet setting naturally fits into privacy-preserving applications, whereas most anonymisation/differential privacy solutions assume a trusted data curator.

Group/ring signatures [20, 62] allow a verifier to verify that a message was sent by one of a group of senders, without learning which one. This has parallels with our TraceOut query, if we map the group of senders to the set of ciphertexts in the TraceOut query. However, it requires active involvement of the senders when the TraceOut query is made and is not well-suited for running varied analytics queries at the backend. Similarly, anonymous credentials [17] let individuals prove in zero-knowledge that they satisfy some eligibility criteria, which has parallels with our TraceIn queries, but this approach also requires active involvement of the individuals.

#### 1.3.2 Existing verifiable mixnets

We review existing verifiable mixnets by following Haines and Muller's comprehensive review on the topic [44].
All the reviewed techniques -- _message tracing_ [66, 50], _verification codes_ [63, 50], _trip wires_ [48, 13], _message replication_ [48], _randomised partial checking (RPC)_ [45, 52, 51, 49] and _proofs of correct shuffle_ [57, 64, 67] -- only verify whether the mixnet output list was a decryption and permutation of its input list of ciphertexts, and do not support the fine-grained TraceIn/TraceOut queries. _Message tracing_ and _verification codes_ provide a limited form of traceability by allowing senders of input ciphertexts to verify that their _own_ ciphertexts were processed correctly. However, this approach is only sender-verifiable: someone who does not hold the ciphertext secrets cannot perform the verification.

_Proofs of correct shuffle_ represent the state-of-the-art approach in the area of verifiable mixnets. A proof-of-shuffle proves in zero-knowledge that a list of ciphertexts is a permutation and re-encryption of another list of ciphertexts. This, combined with verifiable decryption techniques [37], provides a zero-knowledge proof that a list of plaintexts is a decryption and permutation of a list of ciphertexts. Such an approach can also prove that a _sublist_ of the mixnet output list is a decryption and permutation of its corresponding sublist of the mixnet input list. However, as shown in Figure 2, this does not help answer the TraceIn/TraceOut queries _in zero knowledge_.

#### 1.3.3 Set membership proofs

The TraceIn and TraceOut queries of a traceable mixnet are very closely related to zero-knowledge proofs (ZKPs) of set membership and a related statement that we call reverse set membership. We define them for the single-prover case below (we use the popular Camenisch-Stadler notation [18] to write ZKPs of knowledge):

Definition 1 (ZKP of set membership): Given a commitment scheme \(\mathsf{comm}\) with commitment space \(\Gamma\), message space \(V\) and randomness space \(R\), a commitment \(\gamma\in\Gamma\) and a set of values \(\phi\in 2^{V}\), a _ZKP of set membership_, denoted as \(\rho_{\mathsf{SM}}(\gamma,\phi):=\mathsf{PK}\{(v,r):\gamma=\mathsf{comm}(v;r)\wedge v\in\phi\}\), is a ZKP of knowledge of an opening \((v,r)\) of \(\gamma\) such that the committed value \(v\) is in set \(\phi\) and \(r\in R\).

Definition 2 (ZKP of reverse set membership): Given a commitment scheme \(\mathsf{comm}\) as above, a value \(v\in V\) and a set of commitments \(\Phi\in 2^{\Gamma}\), a _ZKP of reverse set membership_, denoted as \(\rho_{\mathsf{RSM}}(\Phi,v):=\mathsf{PK}\{(r):\gamma=\mathsf{comm}(v;r)\wedge\gamma\in\Phi\}\), is a ZKP of knowledge of a randomness \(r\in R\) such that some commitment \(\gamma\in\Phi\) commits \(v\) using randomness \(r\).

_A. Techniques with quadratic complexity:_ Both ZKPs of set membership and reverse set membership can be constructed using a generic OR composition of \(\Sigma\)-protocols [24] as \(\rho_{\mathsf{SM}}(\gamma,\phi):=\mathsf{PK}\{(r):\bigvee_{v\in\phi}\gamma=\mathsf{comm}(v;r)\}\) and \(\rho_{\mathsf{RSM}}(\Phi,v):=\mathsf{PK}\{(r):\bigvee_{\gamma\in\Phi}\gamma=g^{v}h^{r}\}\) respectively. However, for proving \(\rho_{\mathsf{SM}}(\gamma,\phi)\) for each \(\gamma\in\Phi\) against the same set \(\phi\), or \(\rho_{\mathsf{RSM}}(\Phi,v)\) for each \(v\in\phi\) against the same set \(\Phi\) (the "batched" queries), such an approach results in an overall \(O(n^{2})\) complexity if \(|\Phi|,|\phi|\) are \(O(n)\).
Groth and Kohlweiss [42] propose a ZKP of knowledge of the form \(\rho_{\mathsf{RSM}\text{-}0}(\Phi):=\mathsf{PK}\{(r):\bigvee_{\gamma\in\Phi}\gamma=h^{r}\}\), i.e., a one-out-of-many proof that one of the commitments in \(\Phi\) commits the value \(0\). Although its communication complexity is only logarithmic in \(|\Phi|\), the _computational_ complexity (for both the prover and the verifier) remains \(O(n^{2})\).

_Accumulator-based techniques:_ Cryptographic accumulators [55, 6, 58] allow creating efficient ZKPs of set membership. An accumulator scheme (\(\mathsf{Acc}\), \(\mathsf{GenWitness}\), \(\mathsf{AccVer}\)) allows computing a short digest \(A_{\phi}\) of a large set \(\phi\) as \(A_{\phi}\leftarrow\mathsf{Acc}(\phi)\) and a short membership witness \(w_{v}\) for a member \(v\in\phi\) as \(w_{v}\leftarrow\mathsf{GenWitness}(A_{\phi},v)\) such that \(\mathsf{AccVer}(A_{\phi},v,w_{v})=1\) is a proof that \(v\in\phi\). A ZKP of set membership can thus be constructed by requiring both the prover and the verifier to first compute \(A_{\phi}\), the prover to compute \(w_{v}\), and both to then engage in a ZKP of knowledge \(\rho_{\mathsf{SM}\text{-}\mathsf{Acc}}(\gamma,A_{\phi}):=\mathsf{PK}\{(v,r,w_{v}):\gamma=\mathsf{comm}(v;r)\land\mathsf{AccVer}(A_{\phi},v,w_{v})=1\}\). If the accumulator scheme allows set members to come from the commitment space, a ZKP of reverse set membership can be similarly constructed using a ZKP of knowledge \(\rho_{\mathsf{RSM}\text{-}\mathsf{Acc}}(v,A_{\Phi}):=\mathsf{PK}\{(v,r,w_{\gamma}):\gamma=\mathsf{comm}(v;r)\land\mathsf{AccVer}(A_{\Phi},\gamma,w_{\gamma})=1\}\).

The most popular approach for constructing ZKPs of set membership using accumulators involves Merkle accumulators [55] as the accumulator scheme and zkSNARKs [41, 38, 22] as the ZK proof system. With Merkle accumulators, both \(\mathsf{GenWitness}\) and \(\mathsf{AccVer}\) take \(O(\log n)\) time, where \(n=|\phi|\), allowing \(n\) set membership and reverse set membership statements to be proved in \(O(n\log(n))\) time. However, computation of hashes inside the underlying zkSNARK circuit is the main efficiency bottleneck of this approach. Note that this approach can generically support \(\rho_{\mathsf{RSM}\text{-}\mathsf{Acc}}\) by computing Merkle accumulators over hashes of commitments. Benarroch et al. [9] present a non-generic ZKP of set membership using RSA accumulators, which avoids the expensive hash computations. However, the technique does not support ZKPs of reverse set membership. Additionally, computing the witness \(w_{v}\) for RSA accumulators takes \(O(n)\) time, making this technique \(O(n^{2})\) for batched queries. This issue may be somewhat mitigated using dynamic accumulators [16] that allow efficient witness updates with updates to the set.
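To make the accumulator interface (\(\mathsf{Acc}\), \(\mathsf{GenWitness}\), \(\mathsf{AccVer}\)) concrete, here is a minimal (non-zero-knowledge) Merkle accumulator sketch in Python. It is illustrative only: in the ZKP constructions above, the \(\mathsf{AccVer}\) check is proven inside a zkSNARK circuit rather than run in the clear.

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def acc(leaves):
    """Acc: build a Merkle tree over hashed set members; returns all levels.
    The digest A_phi is levels[-1][0]."""
    level = [H(x) for x in leaves]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:                  # duplicate last node on odd levels
            level = level + [level[-1]]
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def gen_witness(levels, index):
    """GenWitness: O(log n) sibling path for the leaf at `index`."""
    path = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        path.append((level[index ^ 1], index % 2))  # (sibling, is_right_child)
        index //= 2
    return path

def acc_ver(digest, member, path):
    """AccVer: recompute the root from the member and its witness."""
    node = H(member)
    for sibling, is_right in path:
        node = H(sibling + node) if is_right else H(node + sibling)
    return node == digest

# Usage: accumulate a set and verify membership of one element.
levels = acc([b"v1", b"v2", b"v3"])
assert acc_ver(levels[-1][0], b"v3", gen_witness(levels, 2))
```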
There have also been attempts to efficiently batch multiple membership proofs together [12, 19].

_Extending to the distributed setting:_ All the above techniques are focused on the single-prover setting. Extending them to our distributed mixnet setting, where none of the provers (the mix-servers) have complete information about either the commitment openings or the permutation between the commitments and the plaintext values, is non-trivial. Also, the batching techniques [12, 19] presuppose that the prover knows which entries pass the membership proof; this assumption breaks down in traceable mixnets. Collaborative zkSNARKs [59] have recently been proposed to allow a distributed set of provers holding secret shares of a SNARK witness to prove joint knowledge of it. They add roughly 2x overhead in per-prover proving time over standard zkSNARKs [59]. DPZKs [28] provide similar guarantees. However, even to securely obtain shares of the SNARK witness (the commitment openings and the permutation in our setting), an additional MPC protocol is likely required.

_D. Signature-based set membership:_ In this paper, we build upon a different approach towards ZKPs of set membership, initiated by Camenisch et al. [15] in the single-prover setting. The main idea is that the verifier provides to the prover signatures on all members of the set under a fresh signing key generated by the verifier; the prover proves knowledge of a signature on the committed value in ZK. The protocol is a proof of set membership because the prover does not obtain signatures on non-members of the set and cannot forge them. Importantly, batched set membership queries are \(O(n)\) because the verifier's signatures can be reused. We extend this technique in two important ways. First, we provide a novel protocol to perform _batched reverse set membership proofs_ with \(O(n)\) complexity. Second, we provide distributed versions of both the set membership and reverse set membership proofs and construct a traceable mixnet using these distributed ZKPs.

## 2 Formal definitions

We now formalise the notion of a traceable mixnet. We directly present the batched BTraceIn/BTraceOut protocols as that is our main focus (the TraceIn/TraceOut protocols are trivial special cases). Also, for simplicity, we present our mixnet for the special case when all \(m\) mix-servers in the mixnet are required to decrypt the ciphertexts but any set of fewer than \(m\) mix-servers colluding together cannot break secrecy. Extension to the general case, when only some \(t\) out of \(m\) mix-servers are required to decrypt the ciphertexts but any set of fewer than \(t\) mix-servers cannot break secrecy, is possible with standard threshold cryptography techniques [30]. Finally, we assume the existence of an authenticated broadcast channel for publishing authenticated messages among the parties.

**Notation.** Given a positive integer \(n\), we denote the set \(\{1,\ldots,n\}\) by \([n]\). We let (boldface) \(\mathbf{x}\in X^{n}\) denote an \(n\)-length vector of values drawn from a set \(X\), \(\mathbf{x}_{i}\) denote the \(i^{\text{th}}\) component of \(\mathbf{x}\), and, given an index set \(I\subseteq[n]\), \(\mathbf{x}_{I}\) denote the set \(\{\mathbf{x}_{i}\mid i\in I\}\).
For any scalar binary operation \(\odot:X\times Y\to Z\) and vectors \(\mathbf{x}\in X^{n}\), \(\mathbf{y}\in Y^{n}\), we abuse notation to let \(\mathbf{x}\odot\mathbf{y}\) denote the vector \((\mathbf{x}_{i}\odot\mathbf{y}_{i})_{i\in[n]}\), i.e., the vector obtained by component-wise application of \(\odot\). We carry the scalar representation of the binary operation when lifted to the vector version, e.g., \(\mathbf{u}^{\mathbf{v}}\) denotes the vector \((\mathbf{u}_{i}^{\mathbf{v}_{i}})_{i\in[n]}\). Similarly, we write \(\mathbf{f}(\mathbf{v})\) to denote the vector \((f(\mathbf{v}_{i}))_{i\in[n]}\), \(\mathbf{f}(x,\mathbf{v})\) to denote the vector \((f(x,\mathbf{v}_{i}))_{i\in[n]}\), and so on.

We denote a multiparty computation protocol \(P\) between parties \(\mathcal{P}_{1},\ldots,\mathcal{P}_{m}\), where the common input of each party is \(ci\), \(\mathcal{P}_{k}\)'s secret input is \(si_{k}\), the common output is \(co\) and \(\mathcal{P}_{k}\)'s secret output is \(so_{k}\), as follows: \(co,(\mathcal{P}_{k}[\![so_{k}]\!])_{k\in[m]}\gets P(ci,(\mathcal{P}_{k}[\![si_{k}]\!])_{k\in[m]})\). When a party does not have a secret output, we drop it from the left-hand side. In security experiments where the experimenter plays the role of honest parties \((\mathcal{P}_{k})_{k\in H}\) for some \(H\subset[m]\) and an adversary \(\mathcal{A}\) plays the role of \((\mathcal{P}_{k})_{k\in[m]\setminus H}\), we indicate it as \(co\), \((\mathcal{P}_{k}[\![so_{k}]\!])_{k\in H}\gets P(ci,\,(\mathcal{P}_{k}[\![si_{k}]\!])_{k\in H}\), \((\mathcal{P}_{k}^{\mathcal{A}})_{k\in[m]\setminus H})\). We call a function \(f\) _negligible_ if for any polynomial \(p\), there exists an \(N\in\mathbb{N}\) such that \(f(x)<1/p(x)\) for all \(x>N\).

Definition 3 (Traceable mixnets): A _traceable mixnet_ is a tuple of protocols/algorithms (\(\mathsf{Keygen}\), \(\mathsf{Enc}\), \(\mathsf{Mix}\), \(\mathsf{BTraceIn}\), \(\mathsf{BTraceOut}\)) between \(n\) senders \((S_{i})_{i\in[n]}\), \(m\) mix-servers \((\mathcal{M}_{k})_{k\in[m]}\) and a querier or verifier \(\mathcal{Q}\) such that:

* \(\mathsf{mpk},(\mathcal{M}_{k}[\![\mathsf{msk}^{(k)}]\!])_{k\in[m]}\leftarrow\mathsf{Keygen}(1^{\lambda},(\mathcal{M}_{k}[\![]\!])_{k\in[m]})\) is a key generation protocol between \((\mathcal{M}_{k})_{k\in[m]}\), where \(\lambda\) is a security parameter (given in unary), individual mix-servers do not have any secret input, the common output is a mixnet public key \(\mathsf{mpk}\) and each \(\mathcal{M}_{k}\)'s secret output is a secret key \(\mathsf{msk}^{(k)}\).
* \(\mathbf{c}_{i}\leftarrow\mathsf{Enc}(\mathsf{mpk},\mathbf{v}_{i})\) is an algorithm run by sender \(S_{i}\), where \(\mathsf{mpk}\) is the mixnet public key and \(\mathbf{v}_{i}\) is \(S_{i}\)'s sensitive input drawn from some plaintext space \(\mathbb{V}\). Output \(\mathbf{c}_{i}\) is a ciphertext "encrypting" \(\mathbf{v}_{i}\).
* \(\mathbf{v}^{\prime},(\mathcal{M}_{k}[\![\omega^{(k)}]\!])_{k\in[m]}\leftarrow\mathsf{Mix}(\mathsf{mpk},\mathbf{c},\,(\mathcal{M}_{k}[\![\mathsf{msk}^{(k)}]\!])_{k\in[m]})\) is a mixing protocol between \((\mathcal{M}_{k})_{k\in[m]}\), where \(\mathsf{mpk}\) is the mixnet public key and \(\mathbf{c}\leftarrow(\mathsf{Enc}(\mathsf{mpk},\,\mathbf{v}_{i}))_{i\in[n]}\) is a vector of ciphertexts encrypting the senders' plaintexts \((\mathbf{v}_{i})_{i\in[n]}\). Each \(\mathcal{M}_{k}\)'s secret input is its secret key \(\mathsf{msk}^{(k)}\).
The common output is a vector \(\mathbf{v}^{\prime}\) of plaintext values obtained after permuting and decrypting \(\mathbf{c}\) (thus \(\mathbf{v}^{\prime}=(\mathbf{v}_{\pi(i)})_{i\in[n]}\) for some permutation \(\pi\)). Each \(\mathcal{M}_{k}\)'s secret output is a witness \(\omega^{(k)}\) to be used in proving correctness of the BTraceIn/BTraceOut outputs (see below).

* \(\mathcal{Q}[\![\mathbf{c}_{I^{*}}]\!]\leftarrow\mathsf{BTraceIn}(\mathsf{mpk},\,\mathbf{c},\,\mathbf{v}^{\prime},\,I,\,J,\,(\mathcal{M}_{k}[\![\mathsf{msk}^{(k)},\,\omega^{(k)}]\!])_{k\in[m]},\,\mathcal{Q}[\![]\!])\) is a protocol between \((\mathcal{M}_{k})_{k\in[m]}\) and querier \(\mathcal{Q}\), where \(\mathbf{v}^{\prime},\mathcal{M}_{k}[\![\omega^{(k)}]\!]\leftarrow\mathsf{Mix}(\mathsf{mpk},\,\mathbf{c},\,\mathcal{M}_{k}[\![\mathsf{msk}^{(k)}]\!])\), \(\mathbf{c}\leftarrow(\mathsf{Enc}(\mathsf{mpk},\,\mathbf{v}_{i}))_{i\in[n]}\) and \(I,J\subseteq[n]\). \(\mathcal{Q}\)'s secret output is a set of ciphertexts \(\mathbf{c}_{I^{*}}=\{\mathbf{c}_{i}\in\mathbf{c}_{I}\mid\mathbf{v}_{i}\in\mathbf{v}^{\prime}_{J}\}\); \(\mathcal{Q}\) may abort if it is not convinced about the correctness of \(\mathbf{c}_{I^{*}}\).
* \(\mathcal{Q}[\![\mathbf{v}^{\prime}_{J^{*}}]\!]\leftarrow\mathsf{BTraceOut}(\mathsf{mpk},\,\mathbf{c},\,\mathbf{v}^{\prime},\,I,\,J,\,(\mathcal{M}_{k}[\![\mathsf{msk}^{(k)},\,\omega^{(k)}]\!])_{k\in[m]},\,\mathcal{Q}[\![]\!])\) is a protocol between \((\mathcal{M}_{k})_{k\in[m]}\) and \(\mathcal{Q}\), where all inputs are exactly the same as in BTraceIn and \(\mathcal{Q}\)'s secret output is a set of plaintexts \(\mathbf{v}^{\prime}_{J^{*}}=\{\mathbf{v}^{\prime}_{j}\in\mathbf{v}^{\prime}_{J}\mid\mathbf{v}^{\prime}_{j}\in\mathbf{v}_{I}\}\); \(\mathcal{Q}\) may abort if it is not convinced about the correctness of \(\mathbf{v}^{\prime}_{J^{*}}\).

### Completeness

The completeness definition for traceable mixnets (see Definition 4 and Figure 3) models that when all the parties are honest, then \(a)\) \(\mathcal{Q}\)'s output \(\mathbf{c}_{I^{*}}\) on a BTraceIn query on \((\mathbf{c},\mathbf{v}^{\prime},I,J)\) is exactly the set of ciphertexts in \(\mathbf{c}_{I}\) that encrypted some plaintext in \(\mathbf{v}^{\prime}_{J}\) (line 7), and \(b)\) its output \(\mathbf{v}^{\prime}_{J^{*}}\) on a BTraceOut query on \((\mathbf{c},\mathbf{v}^{\prime},I,J)\) is exactly the set of plaintexts in \(\mathbf{v}^{\prime}_{J}\) that were encrypted by some ciphertext in \(\mathbf{c}_{I}\) (line 8). Note that we only consider the case of distinct values. The case of repeated values is trivially reducible to this case if \(S_{i}\) prefixes \(\mathbf{v}_{i}\) with a nonce drawn uniformly from a large set.
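As a plain reference for what the querier's outputs must equal (before any cryptography), the following illustrative Python computes \(\mathbf{c}_{I^{*}}\) and \(\mathbf{v}^{\prime}_{J^{*}}\) directly from the hidden association; real mix-servers must of course prove these answers without any single party knowing the permutation:

```python
def btrace_in(c, v, I, J, v_out):
    """Ideal BTraceIn: the ciphertexts c_i (i in I) whose plaintext v_i lies
    in v'_J. Here `c` and `v` are the senders' ciphertexts and plaintexts
    (the hidden association), and `v_out` is the mixed output list."""
    vJ = {v_out[j] for j in J}
    return {c[i] for i in I if v[i] in vJ}

def btrace_out(c, v, I, J, v_out):
    """Ideal BTraceOut: the plaintexts v'_j (j in J) that were encrypted
    by some ciphertext c_i with i in I."""
    vI = {v[i] for i in I}
    return {v_out[j] for j in J if v_out[j] in vI}
```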
**Definition 4** (Completeness). _A traceable mixnet satisfies completeness if for each security parameter \(\lambda\in\mathbb{N}\), number of ciphertexts \(n\in\mathbb{N}\), vector \(\mathbf{v}\in\mathbb{V}^{n}\) of distinct plaintext values and index sets \(I,J\subseteq[n]\), there exists a negligible function \(\mathsf{negl}\) such that_

\[\Pr[\mathsf{Exp}_{\mathsf{completeness}}(1^{\lambda},n,\mathbf{v},I,J)=1]\geq 1-\mathsf{negl}(\lambda), \tag{1}\]

_where \(\mathsf{Exp}_{\mathsf{completeness}}\) is as defined in Figure 3._

Figure 3: Completeness experiment

### Soundness

The soundness definition (see Definition 5) models that as long as the input ciphertexts are well-formed, even when all the mix-servers are dishonest, the sets \(\mathbf{c}_{I^{*}}\) and \(\mathbf{v}^{\prime}_{J^{*}}\) output by \(\mathcal{Q}\) in the BTraceIn and BTraceOut queries respectively must be "correct," where the correctness of \(\mathbf{c}_{I^{*}}\) and \(\mathbf{v}^{\prime}_{J^{*}}\) is exactly as defined in the completeness experiment. We do allow the cheating mix-servers to force \(\mathcal{Q}\) to abort, but they should not be able to force an incorrect output. We consider only the case of well-formed input ciphertexts because the aim of BTraceIn/BTraceOut queries is to establish whether the mix-servers acted dishonestly, and soundness with respect to ill-formed inputs is not well-defined. Nevertheless, proofs of well-formedness of inputs are generally required for application-level correctness, and these can be constructed by the senders at the time of uploading their ciphertexts.

In more detail, the experiment (Figure 4) begins with the key generation protocol, where an adversary \(\mathcal{A}\) controlling all the mix-servers \((\mathcal{M}_{k})_{k\in[m]}\) provides the mixnet public key \(\mathsf{mpk}\) (line 1). We let \(\mathcal{A}\) supply the plaintexts but create ciphertexts from them honestly (lines 2-3), modelling that \(\mathcal{A}\) can supply plaintexts of its choice to cheat but the ciphertexts must be well-formed. We then allow \(\mathcal{A}\) to run the \(\mathsf{Mix}\) protocol and produce the output list \(\mathbf{v}^{\prime}\) (line 4). \(\mathcal{A}\) then outputs index sets \(I\) and \(J\) on which it wants to break the subsequent BTraceIn/BTraceOut queries (line 3). During the BTraceIn/BTraceOut protocols, \(\mathcal{Q}\), interacting with \((\mathcal{M}_{k})_{k\in[m]}\) controlled by \(\mathcal{A}\), outputs \(\mathbf{c}_{I^{*}}\) and \(\mathbf{v}^{\prime}_{J^{*}}\) respectively (lines 6-7). \(\mathcal{A}\) wins if it supplies distinct plaintexts to each sender and outputs distinct entries after mixing, and \(\mathcal{Q}\) produces valid outputs \(\mathbf{c}_{I^{*}},\mathbf{v}^{\prime}_{J^{*}}\) (does not abort) but at least one of them is incorrect.

Definition 5 (Soundness): A traceable mixnet satisfies _soundness_ if for each PPT adversary \(\mathcal{A}\) and security parameter \(\lambda\in\mathbb{N}\), there exists a negligible function \(\mathsf{negl}\) such that

\[\Pr[\mathsf{Exp}^{\mathcal{A}}_{\mathsf{soundness}}(1^{\lambda})=1]\leq\mathsf{negl}(\lambda), \tag{2}\]

where \(\mathsf{Exp}^{\mathcal{A}}_{\mathsf{soundness}}\) is as defined in Figure 4.

### Secrecy

The secrecy definition (see Definition 6) extends a standard anonymity property [8, 7, 29, 10] to the case when the BTraceIn/BTraceOut queries are also available.
The standard anonymity property can be stated for our setting as follows: an adversary controlling all-but-two senders, the querier and any set of less than \(m\) mix-servers should not be able to distinguish between a world where ciphertexts \((c_{0},c_{1})\) sent by the two honest senders encrypt values \((v_{0},v_{1})\) (world 0) and the world where they encrypt \((v_{1},v_{0})\) (world 1). In the presence of BTraceIn/BTraceOut query protocols, it is trivial to distinguish between the two worlds because of the query output itself (e.g., if \(I,J\) given to a BTraceIn query are such that \(c_{0},c_{1}\in\mathbf{c}_{I}\) and \(v_{0}\in\mathbf{v}^{\prime}_{J}\) but \(v_{1}\not\in\mathbf{v}^{\prime}_{J}\), then \(\mathcal{A}\) immediately knows it is in world 0 if output \(\mathbf{c}_{I^{*}}\) includes \(c_{0}\)). Therefore, we require that \(a)\) in all the BTraceIn queries either both \(v_{0},v_{1}\in\mathbf{v}^{\prime}_{J}\) or both \(v_{0},v_{1}\not\in\mathbf{v}^{\prime}_{J}\), and \(b)\) in all the BTraceOut queries either both \(c_{0},c_{1}\in\mathbf{c}_{I}\) or both \(c_{0},c_{1}\not\in\mathbf{c}_{I}\).

In more detail, in this experiment (Figure 5), adversary \(\mathcal{A}\) engages in the key generation protocol where it controls all the mix-servers except one, i.e., \(\mathcal{M}_{k^{*}}\) (line 1). It then supplies input ciphertexts for all the senders except the two that it does not control, say \(S_{i_{0}}\) and \(S_{i_{1}}\). For these senders, it supplies the values \(v_{0},v_{1}\) (line 2). In world 0 (\(b=0\)), \(S_{i_{0}}\)'s ciphertext \(\mathbf{c}_{i_{0}}\) encrypts \(v_{0}\) and \(S_{i_{1}}\)'s ciphertext \(\mathbf{c}_{i_{1}}\) encrypts \(v_{1}\); in world 1 (\(b=1\)), this order is reversed (lines 3-4).

Figure 4: Soundness experiment

The ciphertext list thus formed is processed through the \(\mathsf{Mix}\) protocol, where \(\mathcal{A}\) controls all mix-servers except \(\mathcal{M}_{k^{*}}\) and produces an output plaintext list \(\mathbf{v}^{\prime}\) (line 5). Then, \(\mathcal{A}\) obtains access to oracles \(\mathsf{OTraceIn}\), \(\mathsf{OTraceOut}\) that let it choose \(I\), \(J\), control \(\mathcal{Q}\) and all mix-servers except \(\mathcal{M}_{k^{*}}\), and interact with \(\mathcal{M}_{k^{*}}\) in the BTraceIn, BTraceOut protocols (lines 6-14). \(\mathcal{A}\) is required to respect the condition of including either both or none of the honest senders' ciphertexts/plaintexts in its oracle calls (lines 12 and 18). Finally, \(\mathcal{A}\) outputs a bit \(b^{\prime}\) as its guess of the bit \(b\) (line 5) and wins if its advantage in making the correct guess is non-negligible.

Definition 6 (Secrecy): A traceable mixnet satisfies _secrecy_ if for each PPT adversary \(\mathcal{A}\), security parameter \(\lambda\in\mathbb{N}\), \(k^{*}\in[m]\), and \(i_{0},i_{1}\in[n]\), there exists a negligible function \(\mathsf{negl}\) such that

\[\begin{split}|\Pr[\mathsf{Exp}^{\mathcal{A}}_{\mathsf{secrecy}}(1^{\lambda},k^{*},i_{0},i_{1},0)=1]-\\ \Pr[\mathsf{Exp}^{\mathcal{A}}_{\mathsf{secrecy}}(1^{\lambda},k^{*},i_{0},i_{1},1)=1]|\leq\mathsf{negl}(\lambda),\end{split} \tag{3}\]

where \(\mathsf{Exp}^{\mathcal{A}}_{\mathsf{secrecy}}\) is as defined in Figure 5.

Note that the case of the honest mix-server refusing to answer a query is already modelled by Definition 6 if we restrict \(\mathcal{A}\) to not make the corresponding \(\mathsf{OTraceIn}/\mathsf{OTraceOut}\) call.
Definition 7 below models that when \(\mathcal{A}\) does not control \(\mathcal{Q}\), it should not even learn the output of the BTraceIn/BTraceOut queries.

Definition 7 (Output secrecy): A traceable mixnet satisfies _output secrecy_ if it satisfies secrecy as per Definition 6 except that in experiment \(\mathsf{Exp}^{\mathcal{A}}_{\mathsf{secrecy}}\) (Figure 5), \(\mathcal{A}\) does not control \(\mathcal{Q}\) during the BTraceIn/BTraceOut calls (lines 10 and 14) and the constraints on lines 9 and 13 are removed.

Figure 5: Secrecy experiment

## 3 Preliminaries

### Setup

In our traceable mixnet construction, we assume that the output of the following Setup algorithm is implicitly available to all the parties: (\(q\), \(\mathbb{G}_{1}\), \(\mathbb{G}_{2}\), \(\mathbb{G}_{T}\), \(e\), \(f_{1}\), \(g_{1}\), \(h_{1}\), \(f_{2}\), \(g_{2}\), \(f_{T}\)) \(\leftarrow\mathsf{Setup}(1^{\lambda},m,n)\). Setup takes as input a security parameter \(\lambda\in\mathbb{N}\) and integers \(m\) and \(n\) (\(m\) represents the number of mix-servers and \(n\) represents the number of input ciphertexts), and outputs the following setup parameters: a large prime number \(q\) (\(q\gg m,n\)), cyclic groups \(\mathbb{G}_{1},\mathbb{G}_{2},\mathbb{G}_{T}\) of order \(q\), generators (\(f_{1},g_{1},h_{1}\)), (\(f_{2},g_{2}\)) and \(f_{T}\) of groups \(\mathbb{G}_{1}\), \(\mathbb{G}_{2}\) and \(\mathbb{G}_{T}\) respectively, and an efficiently computable bilinear map \(e:\mathbb{G}_{1}\times\mathbb{G}_{2}\rightarrow\mathbb{G}_{T}\) (for all \(a,b\in\mathbb{Z}_{q}\) and generators \(g_{1},g_{2}\) of \(\mathbb{G}_{1}\) and \(\mathbb{G}_{2}\) respectively, \(e(g_{1}^{a},g_{2}^{b})=e(g_{1},g_{2})^{ab}\) and \(e(g_{1},g_{2})\neq 1_{\mathbb{G}_{T}}\), where \(1_{\mathbb{G}_{T}}\) denotes the identity element of \(\mathbb{G}_{T}\)). We assume that the \(n\)-Strong Diffie-Hellman (SDH) assumption [11] holds in groups (\(\mathbb{G}_{1},\mathbb{G}_{2}\)) and that the decisional Diffie-Hellman (DDH) and discrete logarithm (DL) problems are hard in \(\mathbb{G}_{1}\). We assume that all generators are randomly generated, e.g., as the output of a hash function modelled as a random oracle.

### Key cryptographic primitives

Our construction uses the following cryptographic primitives:

#### 3.2.1 Pedersen commitments on \(\mathbb{G}_{1}\)

Given \(g_{1},h_{1}\in\mathbb{G}_{1}\), a Pedersen commitment to a \(v\in\mathbb{Z}_{q}\) is computed by choosing \(r\stackrel{{\$}}{{\leftarrow}}\mathbb{Z}_{q}\) and setting \(\mathsf{comm}(v;r):=g_{1}^{v}h_{1}^{r}\). Pedersen commitments are \(a)\) _perfectly hiding_: \(g_{1}^{v}h_{1}^{r}\) information-theoretically hides \(v\); \(b)\) _computationally binding_: computing \(v_{1},r_{1},v_{2},r_{2}\in\mathbb{Z}_{q}\) such that \(g_{1}^{v_{1}}h_{1}^{r_{1}}=g_{1}^{v_{2}}h_{1}^{r_{2}}\) and \((v_{1},r_{1})\neq(v_{2},r_{2})\) breaks the DL assumption in \(\mathbb{G}_{1}\); and \(c)\) _additively homomorphic_: given \(\gamma_{1}=g_{1}^{v_{1}}h_{1}^{r_{1}}\) and \(\gamma_{2}=g_{1}^{v_{2}}h_{1}^{r_{2}}\), \(\gamma_{1}\gamma_{2}=g_{1}^{v_{1}+v_{2}}h_{1}^{r_{1}+r_{2}}=\mathsf{comm}(v_{1}+v_{2};r_{1}+r_{2})\).

#### 3.2.2 Boneh-Boyen signatures

In a (basic) Boneh-Boyen signature scheme (Section 3.1; [11]), the signer chooses its secret key as \(x\stackrel{{\$}}{{\leftarrow}}\mathbb{Z}_{q}\) and verification key as \(y\gets g_{2}^{x}\). To sign a message \(m\in\mathbb{Z}_{q}\), the signer computes \(\sigma\gets g_{1}^{\frac{1}{m+x}}\). The signature is verified by checking \(e(\sigma,yg_{2}^{m})\stackrel{{?}}{{=}}e(g_{1},g_{2})\). This scheme is unforgeable against weak chosen message attacks under the \(n\)-SDH assumption [11].
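A minimal sketch of the Pedersen commitment and basic Boneh-Boyen signature operations above, assuming the `py_ecc` library's BN254 ("bn128") pairing groups as a stand-in for \((\mathbb{G}_{1},\mathbb{G}_{2},\mathbb{G}_{T},e)\). This is illustrative only, not the paper's implementation; in particular, the fixed scalar used to derive `h1` is a placeholder, whereas in practice \(h_{1}\) must be a nothing-up-my-sleeve generator with unknown discrete log relative to \(g_{1}\).

```python
# pip install py_ecc
from py_ecc.bn128 import G1, G2, add, multiply, pairing, curve_order as q
import secrets

# Illustrative generators g1, h1 of G1 (see caveat above).
g1, h1 = G1, multiply(G1, 7919)

def pedersen_commit(v, r):
    # comm(v; r) = g1^v * h1^r (py_ecc uses additive notation)
    return add(multiply(g1, v % q), multiply(h1, r % q))

def bb_keygen():
    x = secrets.randbelow(q - 1) + 1
    return x, multiply(G2, x)                  # sk = x, vk = g2^x

def bb_sign(x, m):
    return multiply(G1, pow(m + x, -1, q))     # sigma = g1^(1/(m+x))

def bb_verify(vk, m, sigma):
    # Check e(sigma, vk * g2^m) == e(g1, g2); py_ecc's pairing takes (G2, G1).
    return pairing(add(vk, multiply(G2, m % q)), sigma) == pairing(G2, G1)

# Additive homomorphism: comm(v1; r1) * comm(v2; r2) == comm(v1+v2; r1+r2)
assert add(pedersen_commit(3, 5), pedersen_commit(4, 6)) == pedersen_commit(7, 11)

sk, vk = bb_keygen()
assert bb_verify(vk, 42, bb_sign(sk, 42))
```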
The signature is verified by checking \(e(\sigma,yg_{2}^{m})\stackrel{{?}}{{=}}e(g_{1},g_{2})\). This scheme is unforgeable against weak chosen message attacks under the \(n\)-SDH assumption [11].

#### 3.2.3 BBS+ signatures

In a BBS+ signature scheme [2], the signer chooses its secret key as \(x\stackrel{{\$}}{{\leftarrow}}\mathbb{Z}_{q}^{*}\) and verification key as \(y\gets f_{2}^{x}\). To sign a message \(m\in\mathbb{Z}_{q}\), the signer chooses \(c,r\stackrel{{\$}}{{\leftarrow}}\mathbb{Z}_{q}\), computes \(S\leftarrow(f_{1}g_{1}^{m}h_{1}^{r})^{\frac{1}{c+x}}\) and outputs its signature as \(\sigma:=(S,c,r)\). The signature is verified by checking \(e(S,yf_{2}^{c})\stackrel{{?}}{{=}}e(f_{1}g_{1}^{m}h_{1}^{r},f_{2})\). The scheme is unforgeable against adaptively chosen message attacks under the \(n\)-SDH assumption [2].

#### 3.2.4 Obtaining signatures on committed values

An interesting property of BBS+ signatures is that a committer knowing the opening \(v,r\) of a Pedersen commitment \(\gamma=g_{1}^{v}h_{1}^{r}\) can obtain a _signature on the committed value_ \(v\) from a signer holding secret key \(\mathsf{sk}\) by revealing to it only \(\gamma\) and a ZKP of knowledge of \((v,r)\):

* The committer sends \(\gamma\) and \(\rho_{\gamma}\leftarrow\mathsf{PK}\{(v,r):\gamma=g_{1}^{v}h_{1}^{r}\}\) to the signer. The signer verifies \(\rho_{\gamma}\).
* The signer computes a _quasi BBS+ signature_ by choosing \(c,\hat{r}\stackrel{{\$}}{{\leftarrow}}\mathbb{Z}_{q}\) and computing \(S\leftarrow(f_{1}h_{1}^{\hat{r}}\gamma)^{\frac{1}{c+\mathsf{sk}}}\). It sends \(\hat{\sigma}:=(S,c,\hat{r})\) to the committer.
* The committer obtains \(\sigma\leftarrow(S,c,\hat{r}+r)\).

Thus, \(\sigma=((f_{1}h_{1}^{\hat{r}+r}g_{1}^{v})^{\frac{1}{c+\mathsf{sk}}},c,\hat{r}+r)\) is a valid BBS+ signature on message \(v\) under verification key \(\mathsf{vk}=f_{2}^{\mathsf{sk}}\). Given a _quasi_-BBS+ signature \(\hat{\sigma}\), it is possible to verify that \(\hat{\sigma}\) was correctly generated using commitment \(\gamma\), using the following verification algorithm:

* \(\mathsf{VerQ}(\hat{\sigma}=(S,c,\hat{r}),\gamma,\mathsf{vk})\): \(e(S,\mathsf{vk}\ f_{2}^{c})\stackrel{{?}}{{=}}e(f_{1}h_{1}^{\hat{r}}\gamma,f_{2})\).

#### 3.2.5 \((m,m)\)-threshold secret sharing

An \((m,m)\)-threshold secret sharing scheme consists of the following algorithms to share a secret \(x\) among \(m\) parties:

* \((x^{(k)})_{k\in[m]}\leftarrow\mathsf{Share}_{m,m}(x\in\mathbb{Z}_{q})\): \((x^{(k)})_{k\in[m-1]}\stackrel{{\$}}{{\leftarrow}}\mathbb{Z}_{q}^{m-1}\); \(x^{(m)}\leftarrow(x-\sum_{k\in[m-1]}x^{(k)})\pmod{q}\)
* \(x\leftarrow\mathsf{Recons}((x^{(k)})_{k\in[m]})\): \(x\leftarrow\sum\limits_{k\in[m]}x^{(k)}\pmod{q}\)

With algorithm \(\mathsf{Recons}\), all \(m\) parties coming together can reconstruct the secret \(x\), but any fewer than \(m\) of them learn no information about \(x\). A secret sharing scheme is called _additive_ (resp. _multiplicative_) if \(\mathcal{P}_{k}\), on input its shares \(x^{(k)}\), \(y^{(k)}\) of secrets \(x\) and \(y\) respectively, can obtain its share of \(x+y\) (resp. \(xy\)) without any additional interaction with the other parties. The above scheme is clearly additive. It can also be made multiplicative using Beaver's trick [23], which employs an input-independent precomputation step and an algorithm \(\mathsf{Mult}\) such that \(\mathcal{P}_{k}\) can obtain its share of \(xy\) as \(\mathsf{Mult}(x^{(k)},y^{(k)})\).
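For concreteness, the additive \((m,m)\)-sharing above is a few lines of code. The following is a minimal sketch in Python, not part of the protocol specification: the modulus is toy-sized (a real deployment uses the \(\approx\)254-bit group order \(q\)), and the `share`/`recons` helper names are ours.

```python
# Minimal sketch of (m,m)-additive secret sharing over Z_q (Share/Recons above).
import secrets

q = 2**61 - 1   # toy prime standing in for the group order q
m = 3           # number of parties (mix-servers)

def share(x, m, q):
    """Share_{m,m}: m-1 uniform shares; the last one fixes the sum to x mod q."""
    shares = [secrets.randbelow(q) for _ in range(m - 1)]
    shares.append((x - sum(shares)) % q)
    return shares

def recons(shares, q):
    """Recons: all m shares together reconstruct the secret."""
    return sum(shares) % q

# Sharing the opening (v, r) of a Pedersen commitment, as the senders do later.
v, r = 42, secrets.randbelow(q)
v_shares, r_shares = share(v, m, q), share(r, m, q)
assert recons(v_shares, q) == v and recons(r_shares, q) == r

# Additive homomorphism: each party adds its own shares locally, no interaction.
x_shares, y_shares = share(10, m, q), share(20, m, q)
sum_shares = [(a + b) % q for a, b in zip(x_shares, y_shares)]
assert recons(sum_shares, q) == 30
# Multiplicative sharing additionally needs Beaver's precomputed triples (not shown).
```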
#### 3.2.6 \((m,m)\)-threshold proofs of knowledge

An \((m,m)\)-threshold proof of knowledge (also called a _distributed proof of knowledge_ or a DPK) is a multiparty protocol between a set of provers \((\mathcal{P}_{k})_{k\in[m]}\) and a verifier \(\mathcal{V}\) that convinces \(\mathcal{V}\) that for a given common input \(x\), the provers know secret shares \(\omega^{(k)}\) of a secret \(\omega\) such that a given predicate \(p(x,\omega)\) is true. We denote these DPKs as \(\mathcal{V}[\![res]\!]\leftarrow\mathsf{DPK}(x,p,(\mathcal{P}_{k}[\![\omega^{(k)}]\!])_{k\in[m]},\mathcal{V}[\![\,]\!])\), where \(res=1\) means that \(\mathcal{V}\) accepted the proof. The secrecy guarantee is that an adversary \(\mathcal{A}\) controlling \(\mathcal{V}\) and all \((\mathcal{P}_{k})_{k\neq k^{*}}\) for some \(k^{*}\) cannot learn anything about \(\omega^{(k^{*})}\) (and thus \(\omega\)) [47].

We use DPKs where the predicate \(p\) is of the form \(\bigwedge_{i\in[\ell]}y_{i}=\prod_{j\in[\ell^{\prime}]}g_{ij}^{\omega_{j}}\) for \(\ell,\ell^{\prime}\in\mathbb{N}\), public values \(y_{i},g_{ij}\in\mathbb{G}_{1}\), \(\mathbb{G}_{2}\) or \(\mathbb{G}_{T}\) and \(\omega_{j}\in\mathbb{Z}_{q}\). These proofs can be constructed using standard \(\Sigma\)-protocol techniques [31, 47, 14] if each prover \(\mathcal{P}_{k}\) knows a share \(\omega_{j}^{(k)}\) of each \(\omega_{j}\). We use the following NIZK variant obtained using the Fiat-Shamir heuristic [36]:

* \((\mathcal{P}_{k})_{k\in[m]}\): For each \(i\in[\ell]\), publish \(a_{i}^{(k)}\leftarrow\prod_{j\in[\ell^{\prime}]}g_{ij}^{r_{j}^{(k)}}\), where \(r_{j}^{(k)}\stackrel{{\$}}{{\leftarrow}}\mathbb{Z}_{q}\).
* \((\mathcal{P}_{k})_{k\in[m]}\): Compute \(a_{i}\leftarrow\prod_{k\in[m]}a_{i}^{(k)}\); \(c\gets H(p\|(a_{i})_{i\in[\ell]})\), where \(H\) is a cryptographic hash function modelled as a random oracle; \(z_{j}^{(k)}\gets r_{j}^{(k)}-c\omega_{j}^{(k)}\pmod{q}\). Send \(z_{j}^{(k)}\) to \(\mathcal{V}\).
* \(\mathcal{V}\): Obtain \(((a_{i}^{(k)})_{i\in[\ell],k\in[m]}\), \(c\), \((z_{j}^{(k)})_{j\in[\ell^{\prime}],k\in[m]})\) from \((\mathcal{P}_{k})_{k\in[m]}\). Compute \(a_{i}\leftarrow\prod_{k\in[m]}a_{i}^{(k)}\); \(z_{j}\leftarrow\sum_{k\in[m]}z_{j}^{(k)}\pmod{q}\). Check \(c\stackrel{{?}}{{=}}H(p\|(a_{i})_{i\in[\ell]})\,\&\,\bigwedge_{i\in[\ell]}a_{i}\stackrel{{?}}{{=}}y_{i}^{c}\prod_{j\in[\ell^{\prime}]}g_{ij}^{z_{j}}\).

In this variant, if \(\mathcal{A}\) controls \((\mathcal{P}_{k})_{k\neq k^{*}}\) but not \(\mathcal{V}\), it does not even learn whether the statement was proved successfully or not, because it only sees \(a_{i}^{(k^{*})}\leftarrow\prod_{j\in[\ell^{\prime}]}g_{ij}^{r_{j}^{(k^{*})}}\) and not \(z_{j}^{(k^{*})}\).
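As an illustration of the Fiat-Shamir variant above, the following Python sketch runs the DPK for the single predicate \(\gamma=g_{1}^{v}h_{1}^{r}\) (\(\ell=1\), \(\ell^{\prime}=2\)) over a toy order-\(q\) subgroup of \(\mathbb{Z}_{p}^{*}\). All parameters and generator choices are illustrative assumptions, not the paper's setup (which works over pairing-friendly groups).

```python
# Minimal sketch of the distributed Fiat-Shamir DPK for gamma = g^v * h^r.
import hashlib, secrets

p, q = 2039, 1019          # p = 2q + 1; order-q subgroup of Z_p^* (toy sizes)
g, h = 4, 9                # two order-q elements (demo only; h must be independent)
m = 3                      # number of provers

def H(*vals):
    data = b"|".join(str(x).encode() for x in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

# Each prover holds additive shares of the witness (v, r).
v, r = 123, 456
v_sh = [secrets.randbelow(q) for _ in range(m - 1)]; v_sh.append((v - sum(v_sh)) % q)
r_sh = [secrets.randbelow(q) for _ in range(m - 1)]; r_sh.append((r - sum(r_sh)) % q)
gamma = pow(g, v, p) * pow(h, r, p) % p

# Round 1: each P_k publishes a^(k) = g^{rv_k} * h^{rr_k}.
rv = [secrets.randbelow(q) for _ in range(m)]
rr = [secrets.randbelow(q) for _ in range(m)]
a_k = [pow(g, rv[k], p) * pow(h, rr[k], p) % p for k in range(m)]

# Round 2: common challenge on the combined commitment, then response shares.
a = 1
for x in a_k:
    a = a * x % p
c = H("gamma = g^v h^r", gamma, a)
zv = [(rv[k] - c * v_sh[k]) % q for k in range(m)]
zr = [(rr[k] - c * r_sh[k]) % q for k in range(m)]

# Verifier: recombine responses and check a == gamma^c * g^{Zv} * h^{Zr}.
Zv, Zr = sum(zv) % q, sum(zr) % q
assert a == (pow(gamma, c, p) * pow(g, Zv, p) * pow(h, Zr, p)) % p
```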
#### 3.2.7 \((m,m)\)-threshold homomorphic encryption

An \((m,m)\)-threshold encryption scheme \(\mathsf{E}^{\mathsf{th}}\) with plaintext space \(\mathbb{M}(\mathsf{E}^{\mathsf{th}})\) and ciphertext space \(\mathbb{C}(\mathsf{E}^{\mathsf{th}})\) is a tuple of algorithms/protocols (\(\mathsf{Keygen}\), \(\mathsf{Enc}\), \(\mathsf{TDec}\)) between parties \((\mathcal{P}_{k})_{k\in[m]}\), where \(\mathsf{Keygen}\) denotes a key generation protocol, \(\mathsf{Enc}\) denotes an encryption algorithm and \(\mathsf{TDec}\) denotes a threshold decryption protocol, such that for all \(x\in\mathbb{M}(\mathsf{E}^{\mathsf{th}})\), security parameters \(\lambda\in\mathbb{N}\), and \((\mathsf{pk},(\mathcal{P}_{k}[\![\mathsf{sk}^{(k)}]\!])_{k\in[m]})\leftarrow\mathsf{Keygen}(1^{\lambda},(\mathcal{P}_{k}[\![\,]\!])_{k\in[m]})\), it holds that \(\mathsf{TDec}(\mathsf{E}^{\mathsf{th}}.\mathsf{Enc}(\mathsf{pk},x),(\mathcal{P}_{k}[\![\mathsf{sk}^{(k)}]\!])_{k\in[m]})=x\). _IND-CPA security_ of \((m,m)\)-threshold encryption schemes is defined analogously to the IND-CPA security of vanilla public-key encryption schemes [39], under the condition that the adversary controls fewer than \(m\) parties.

We use the following threshold encryption schemes: \(a)\) \(\mathsf{E}^{\mathsf{th}}_{\mathsf{EG}}\): the threshold El Gamal encryption scheme [30], where \(\mathbb{M}(\mathsf{E}^{\mathsf{th}}_{\mathsf{EG}})=\mathbb{G}_{1}\) and \(\mathbb{C}(\mathsf{E}^{\mathsf{th}}_{\mathsf{EG}})=\mathbb{G}_{1}\times\mathbb{G}_{1}\), and \(b)\) \(\mathsf{E}^{\mathsf{th}}_{\mathsf{Pa}}\): an optimised threshold Paillier encryption scheme suggested by Damgard et al. [25], where \(\mathbb{M}(\mathsf{E}^{\mathsf{th}}_{\mathsf{Pa}})=\mathbb{Z}_{N}\) for an RSA modulus \(N\) and \(\mathbb{C}(\mathsf{E}^{\mathsf{th}}_{\mathsf{Pa}})=\mathbb{Z}_{N^{2}}^{*}\).

Note that \(\mathsf{E}^{\mathsf{th}}_{\mathsf{EG}}\) is multiplicatively homomorphic in \(\mathbb{G}_{1}\): for any two ciphertexts \(c_{1},c_{2}\in\mathbb{G}_{1}\times\mathbb{G}_{1}\) encrypting messages \(m_{1},m_{2}\in\mathbb{G}_{1}\), \(c_{1}c_{2}\) (their component-wise multiplication in \(\mathbb{G}_{1}\)) decrypts to the message \(m_{1}m_{2}\) (group multiplication in \(\mathbb{G}_{1}\)). Further, \(\mathsf{E}^{\mathsf{th}}_{\mathsf{Pa}}\) is additively homomorphic in \(\mathbb{Z}_{N}\): for any two ciphertexts \(c_{1},c_{2}\in\mathbb{Z}_{N^{2}}^{*}\) encrypting messages \(m_{1},m_{2}\in\mathbb{Z}_{N}\), \(c_{1}c_{2}\bmod N^{2}\) decrypts to the message \(m_{1}+m_{2}\bmod N\). However, we require additive homomorphism in \(\mathbb{Z}_{q}\) (a prime order group). Therefore, we let \(N>q\), interpret messages in \(\mathbb{Z}_{q}\) as messages in \(\mathbb{Z}_{N}\) and carefully use \(\mathsf{E}^{\mathsf{th}}_{\mathsf{Pa}}\)'s homomorphic addition modulo \(N\) to obtain homomorphic addition modulo \(q\) (see Section 4.2). \(\mathsf{E}^{\mathsf{th}}_{\mathsf{EG}}\) is IND-CPA secure under the DDH assumption in \(\mathbb{G}_{1}\) [34]. \(\mathsf{E}^{\mathsf{th}}_{\mathsf{Pa}}\) is IND-CPA secure under the decisional composite residuosity (DCR) assumption [60]. Also, both \(\mathsf{E}^{\mathsf{th}}_{\mathsf{EG}}\) and \(\mathsf{E}^{\mathsf{th}}_{\mathsf{Pa}}\) support distributed key generation protocols.
For \(\mathsf{E}^{\mathsf{th}}_{\mathsf{EG}}\), each party simply generates its key share independently; for \(\mathsf{E}^{\mathsf{th}}_{\mathsf{Pa}}\), a secure key generation protocol can be designed using techniques such as [26]. Further, the \(\mathsf{TDec}\) protocols of both \(\mathsf{E}^{\mathsf{th}}_{\mathsf{EG}}\) and \(\mathsf{E}^{\mathsf{th}}_{\mathsf{Pa}}\) provide _simulation security_, i.e., the adversary's view in the \(\mathsf{TDec}\) protocol can be simulated given access to a decryption oracle. For \(\mathsf{E}^{\mathsf{th}}_{\mathsf{EG}}\), this follows straightforwardly; for \(\mathsf{E}^{\mathsf{th}}_{\mathsf{Pa}}\), [25] provide a proof (see [25]; Theorem 4). We also use a standard (non-threshold) IND-CPA secure public-key encryption scheme \(\mathsf{E}\) on message space \(\mathbb{Z}_{q}\).

#### 3.2.8 Shuffles and proofs of shuffle

Let \(\mathsf{E}^{\mathsf{th}}\) be an \((m,m)\)-threshold homomorphic encryption scheme, \(\mathsf{pk}\) be a public key under \(\mathsf{E}^{\mathsf{th}}\), \(\boldsymbol{\epsilon}\) be an \(n\)-length vector of ciphertexts against \(\mathsf{pk}\), \(\pi^{(1)},\ldots,\pi^{(m)}\in\mathsf{Perm}(n)\) be secret permutations of parties \(\mathcal{P}_{1},\ldots,\mathcal{P}_{m}\), where \(\mathsf{Perm}(n)\) denotes the space of permutation functions, and \(\mathsf{E}^{\mathsf{th}}.\mathsf{REnc}(\mathsf{pk},c)\) denote re-encryption under \(\mathsf{E}^{\mathsf{th}}\) of ciphertext \(c\) with fresh randomness. We let \(\boldsymbol{\epsilon}^{\prime}\leftarrow\mathsf{Shuffle}(\mathsf{E}^{\mathsf{th}},\mathsf{pk},\boldsymbol{\epsilon},\mathcal{P}_{1}[\![\pi^{(1)}]\!],\ldots,\mathcal{P}_{m}[\![\pi^{(m)}]\!])\) denote repeated shuffling of \(\boldsymbol{\epsilon}\) by each of \((\mathcal{P}_{k})_{k\in[m]}\) in sequence, such that \(\forall j\in[n]:\boldsymbol{\epsilon}^{\prime}_{j}=\mathsf{E}^{\mathsf{th}}.\mathsf{REnc}(\mathsf{pk},\boldsymbol{\epsilon}_{\pi(j)})\), where \(\pi=\pi^{(m)}\circ\cdots\circ\pi^{(1)}\). Note that the order of parties in the \(\mathsf{Shuffle}\) protocol is important, as it denotes that first \(\mathcal{P}_{1}\) re-encrypts and permutes, then \(\mathcal{P}_{2}\), and so on. See Figure 6. It can be proved efficiently in zero-knowledge that \(\boldsymbol{\epsilon}^{\prime}\) is a shuffle of \(\boldsymbol{\epsilon}\) if each \(\mathcal{P}_{k}\) proves that its output list \(\boldsymbol{\epsilon}^{(k)}\) was a shuffle of its input list \(\boldsymbol{\epsilon}^{(k-1)}\) [67, 64].

## 4 Our construction

We now describe our traceable mixnet construction (see Figure 7). During key generation, mix-servers \((\mathcal{M}_{k})_{k\in[m]}\) generate public keys \((\mathsf{pk}^{(k)})_{k\in[m]}\), \(\mathsf{pk}_{\mathsf{EG}}\), \(\mathsf{pk}_{\mathsf{Pa}}\) for encryption schemes \(\mathsf{E}\), \(\mathsf{E}^{\mathsf{th}}_{\mathsf{EG}}\), and \(\mathsf{E}^{\mathsf{th}}_{\mathsf{Pa}}\) respectively, such that \(\mathcal{M}_{k}\) owns the \(\mathsf{sk}^{(k)}\) corresponding to \(\mathsf{pk}^{(k)}\), and the secret keys corresponding to \(\mathsf{pk}_{\mathsf{EG}},\mathsf{pk}_{\mathsf{Pa}}\) are shared among \((\mathcal{M}_{k})_{k\in[m]}\).
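Before continuing, the \(\mathsf{E}^{\mathsf{th}}_{\mathsf{EG}}\) building blocks just introduced -- distributed key generation, the \(\mathsf{Shuffle}\) protocol of Section 3.2 and threshold decryption -- can be sketched concretely. The Python below is a toy honest-party simulation over a subgroup of \(\mathbb{Z}_{p}^{*}\) standing in for \(\mathbb{G}_{1}\); it produces no proofs of shuffle and is an illustration only.

```python
# Toy simulation of threshold El Gamal: distributed keygen, Shuffle, TDec.
import secrets

p, q, g = 2039, 1019, 4      # order-q subgroup of Z_p^* (demo parameters)
m, n = 3, 4                  # mix-servers and ciphertexts
rng = secrets.SystemRandom()

# Distributed key generation: pk = prod_k g^{sk^(k)}; no trusted dealer.
sks = [secrets.randbelow(q) for _ in range(m)]
pk = 1
for sk in sks:
    pk = pk * pow(g, sk, p) % p

def enc(msg):                 # message is a group element
    t = secrets.randbelow(q)
    return (pow(g, t, p), msg * pow(pk, t, p) % p)

def renc(ct):                 # re-encryption = multiply by a fresh Enc(pk, 1)
    s = secrets.randbelow(q)
    return (ct[0] * pow(g, s, p) % p, ct[1] * pow(pk, s, p) % p)

msgs = [pow(g, v, p) for v in (11, 22, 33, 44)]   # encode values as g^v
cts = [enc(M) for M in msgs]

# Shuffle: M_1, ..., M_m each re-encrypt and permute with a secret pi^(k).
for _ in range(m):
    pi = list(range(n))
    rng.shuffle(pi)
    cts = [renc(cts[pi[j]]) for j in range(n)]

# Threshold decryption: each M_k contributes the share c0^{sk^(k)}.
def tdec(ct):
    d = 1
    for sk in sks:
        d = d * pow(ct[0], sk, p) % p
    return ct[1] * pow(d, -1, p) % p

out = [tdec(ct) for ct in cts]
assert sorted(out) == sorted(msgs)   # same multiset, permuted by the composed pi
```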
The sender for the \(i^{\mathsf{th}}\) message \(\mathbf{v}_{i}\in\mathbb{Z}_{q}\) creates its ciphertext as \((\mathbf{\epsilon}_{i}\), \(\mathbf{\gamma}_{i}\), \((\mathsf{ev}^{(k)}_{i}\), \(\mathsf{er}^{(k)}_{i})_{k\in[m]}\), \(\mathbf{\rho}_{\mathbf{\gamma}_{i}}\), \(\mathbf{\epsilon}_{\mathbf{r}_{i}}\)), where \(\mathbf{\epsilon}_{i}\) is an encryption of \(\mathbf{v}_{i}\) under \(\mathsf{E}^{\mathsf{th}}_{\mathsf{Pa}}\) (\(\mathbf{v}_{i}\) is interpreted as an element of \(\mathsf{E}^{\mathsf{th}}_{\mathsf{Pa}}\)'s message space \(\mathbb{Z}_{N}\)); \(\mathbf{\gamma}_{i}\) is a Pedersen commitment to \(\mathbf{v}_{i}\) under some randomness \(\mathbf{r}_{i}\); \(\mathsf{ev}^{(k)}_{i},\mathsf{er}^{(k)}_{i}\) encrypt shares \(\mathbf{v}^{(k)}_{i},\mathbf{r}^{(k)}_{i}\) of \(\mathbf{v}_{i},\mathbf{r}_{i}\) under \(\mathcal{M}_{k}\)'s public key \(\mathsf{pk}^{(k)}\); \(\mathbf{\rho}_{\mathbf{\gamma}_{i}}\) is a proof of knowledge of the opening of \(\mathbf{\gamma}_{i}\); and \(\mathbf{\epsilon}_{\mathbf{r}_{i}}\) is an encryption of \(\mathbf{r}_{i}\) under \(\mathsf{E}^{\mathsf{th}}_{\mathsf{Pa}}\).

Figure 6: The Shuffle protocol

The list of senders' input ciphertexts \(\mathbf{\epsilon}\) is mixed (shuffled and decrypted) by the mix-servers exactly as in a standard re-encryption mixnet [44] to produce the permuted list of plaintexts \(\mathbf{v}^{\prime}=(\mathbf{v}_{\pi(j)})_{j\in[n]}\), where \(\pi\) is composed of the secret permutations \(\pi^{(k)}\) applied by each \(\mathcal{M}_{k}\). Each \(\mathcal{M}_{k}\) also decrypts its shares \(\mathbf{v}^{(k)},\mathbf{r}^{(k)}\) from \(\mathsf{ev}^{(k)},\mathsf{er}^{(k)}\). Given the knowledge of the secret keys, the permutation \(\pi^{(k)}\) and the shares \(\mathbf{v}^{(k)},\mathbf{r}^{(k)}\), \((\mathcal{M}_{k})_{k\in[m]}\) can provably answer the \(\mathsf{BTraceIn}\), \(\mathsf{BTraceOut}\) queries using our distributed (and batched) ZKPs of set membership and reverse set membership, called \(\mathsf{DB\text{-}SM}\) and \(\mathsf{DB\text{-}RSM}\) respectively (see Sections 4.1 and 4.2). \(\mathsf{DB\text{-}SM}\) allows \((\mathcal{M}_{k})_{k\in[m]}\) to answer a \(\mathsf{BTraceIn}\) query with index sets \(I,J\) by engaging in a distributed proof for each \(\gamma\in\mathbf{\gamma}_{I}\) that \(\gamma\) commits a value in the set \(\mathbf{v}^{\prime}_{J}\); \(\mathcal{Q}\)'s output set \(\mathbf{c}_{I^{*}}\) corresponds to the commitments \(\mathbf{\gamma}_{I^{*}}\) for which the proof passed. Similarly, \(\mathsf{DB\text{-}RSM}\) allows \((\mathcal{M}_{k})_{k\in[m]}\) to answer a \(\mathsf{BTraceOut}\) query by engaging in a distributed proof for each \(v^{\prime}\in\mathbf{v}^{\prime}_{J}\) that \(v^{\prime}\) was committed in one of the commitments in the set \(\mathbf{\gamma}_{I}\). \(\mathsf{DB\text{-}RSM}\) also utilises the values \(\mathbf{\rho_{\gamma}}\) and \(\mathbf{\epsilon_{r}}\) uploaded in the senders' ciphertexts. Note that the soundness of our ZKPs ensures that even when all \((\mathcal{M}_{k})_{k\in[m]}\) are cheating, \(\mathcal{Q}\) cannot output an entry that did not map to its requested set, i.e., a commitment in \(\mathbf{\gamma}_{I^{*}}\) that did not commit a value in \(\mathbf{v}^{\prime}_{J}\) (for \(\mathsf{BTraceIn}\)) or a value in \(\mathbf{v}^{\prime}_{J^{*}}\) that was not committed by any commitment in \(\mathbf{\gamma}_{I}\) (for \(\mathsf{BTraceOut}\)).
However, this does not prevent \((\mathcal{M}_{k})_{k\in[m]}\) from deliberately failing proofs for entries that actually mapped to the requested set, forcing \(\mathcal{Q}\) to output a smaller set \(\mathbf{c}_{I^{*}}/\mathbf{v}^{\prime}_{J^{*}}\) than the correct one (see Definition 5). We solve this issue by simply re-running our ZKPs against the complement of the requested set and making \(\mathcal{Q}\) abort if, for some entry, the proofs against both runs failed. Also note that the construction in Figure 7 protects secrecy only in the honest-but-curious setting, i.e., when all parties follow the protocol. In Appendix 0.B, we discuss the steps required to make it secure against arbitrary malicious adversaries. We now describe our ZKPs of set membership and reverse set membership.

### 4.1 ZKP of set membership

#### 4.1.1 Single prover case

We first recall the ZKP of set membership due to Camenisch et al. [15], where given a Pedersen commitment \(\gamma\) and a set of values \(\phi\), a _single_ prover proves knowledge of \(v,r\) such that \(\gamma=g_{1}^{v}h_{1}^{r}\) and \(v\in\phi\). The main idea is that the verifier generates a _fresh_ Boneh-Boyen signature key pair \(x\stackrel{{\$}}{{\leftarrow}}\mathbb{Z}_{q},y\gets g_{2}^{x}\) and sends to the prover the verification key \(y\) and signatures \(\sigma_{v^{\prime}}\gets g_{1}^{\frac{1}{x+v^{\prime}}}\) on each \(v^{\prime}\in\phi\). The prover chooses a blinding factor \(b\stackrel{{\$}}{{\leftarrow}}\mathbb{Z}_{q}\) and sends to the verifier a blinded version \(\tilde{\sigma}_{v}\) of the signature on the value \(v\) committed by \(\gamma\), as \(\tilde{\sigma}_{v}\leftarrow\sigma_{v}^{b}=g_{1}^{\frac{b}{x+v}}\). Both then engage in a ZKP of knowledge \(\mathsf{PK}\{(v,r,b):\gamma=g_{1}^{v}h_{1}^{r}\wedge e((\tilde{\sigma}_{v})^{1/b},yg_{2}^{v})=e(g_{1},g_{2})\}\), which proves knowledge of a valid signature \((\tilde{\sigma}_{v})^{1/b}\) on the value committed by \(\gamma\) (see the Boneh-Boyen signature verification equation in Section 3.2). This is a proof of set membership because if \(\gamma\) does not commit a member of \(\phi\), then the proof fails, since the prover does not obtain signatures on non-members of the set and cannot forge them. The scheme is an honest-verifier ZKP of set membership if the \(|\phi|\)-Strong Diffie Hellman assumption holds in \((\mathbb{G}_{1},\mathbb{G}_{2})\) [15].

Figure 7: Our construction of a traceable mixnet

A nice property of the scheme is that multiple proofs for a set \(\Phi\) of commitments against the same set \(\phi\) of values can be given efficiently by reusing verifier signatures. After obtaining signatures \((\sigma_{v})_{v\in\phi}\), the prover can precompute \((\tilde{\sigma}_{v})_{v\in\phi}\), so that for each commitment \(\gamma\in\Phi\) committing a value \(v\in\phi\), the corresponding \(\tilde{\sigma}_{v}\) can be looked up and the ZKP of knowledge can be constructed in \(O(1)\) time. This results in an \(O(|\phi|+|\Phi|)\) amortised complexity for proving set membership for \(|\Phi|\) commitments.

#### 4.1.2 Distributed case (protocol DB-SM)

In the single prover case, the prover knows the \(v\) committed by \(\gamma\), which allows it to look up \(\tilde{\sigma}_{v}\) in \(O(1)\) time. In our distributed mixnet setting, no prover (mix-server) knows \(v\), the commitment randomness \(r\), or the permutation \(\pi\) between the lists \(\boldsymbol{\gamma}\) and \(\mathbf{v}^{\prime}\).
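Before tackling the distributed case, the algebra of the single-prover building block can be exercised directly. The sketch below assumes the third-party `py_ecc` library (its bn128 curve order plays the role of \(q\)); it only checks the Boneh-Boyen issue/blind/verify relation, omitting the \(\Sigma\)-protocol that actually hides \(v\), so it is an illustration rather than the protocol itself.

```python
# Boneh-Boyen signatures on a set, blinding, and the pairing check from [15].
from py_ecc.bn128 import G1, G2, add, multiply, pairing, curve_order as q
import secrets

# Verifier: fresh key pair and signatures sigma_v = g1^{1/(x+v)} on each v in phi.
x = secrets.randbelow(q - 1) + 1
y = multiply(G2, x)                                    # y = g2^x
phi = [5, 17, 23]
sigs = {w: multiply(G1, pow((x + w) % q, -1, q)) for w in phi}

# Prover: blinds the signature on its committed value v.
v = 17
b = secrets.randbelow(q - 1) + 1
sigma_tilde = multiply(sigs[v], b)                     # sigma^b = g1^{b/(x+v)}

# Relation proven in zero knowledge: e(sigma_tilde^{1/b}, y * g2^v) = e(g1, g2).
unblinded = multiply(sigma_tilde, pow(b, -1, q))
assert pairing(add(y, multiply(G2, v)), unblinded) == pairing(G2, G1)

# A non-member of phi has no signature, so no such proof can be constructed.
assert 6 not in sigs
```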
The challenge is to efficiently obtain blinded signatures on values committed by the commitments without letting \(\mathcal{Q}\) or any set of less than \(m\) mix-servers learn these secrets. This is achieved as follows, for a DB-SM call on index sets \(I,J\) (see Figure 8). In stage 1, \(\mathcal{Q}\) publishes Boneh-Boyen signatures \(\boldsymbol{\sigma}^{\prime}_{j}\gets g_{1}^{\frac{1}{x+\boldsymbol{v}^{ \prime}_{j}}}\) for each \(j\in J\). The goal of stage 1 is to obtain \(\boldsymbol{\tilde{\sigma}}_{i}\) corresponding to \(\boldsymbol{\gamma}_{i}\)s such that \(\boldsymbol{\tilde{\sigma}}_{i}\) is a blinded signature on the value \(\boldsymbol{v}_{i}\) committed by \(\boldsymbol{\gamma}_{i}\), while protecting all the secrets. Towards this end, we require \(\mathcal{Q}\) to also publish encryptions \(\boldsymbol{\epsilon}^{\prime}_{\boldsymbol{\sigma}_{j}}\) of its signatures \(\boldsymbol{\sigma}^{\prime}_{j}\) on \(\boldsymbol{v}^{\prime}_{j}\) for \(j\in J\); for \(j\not\in J\), we require it to publish "fake" signatures (i.e., a fixed group element) and their encryptions (see remark about fake signatures below). The encryptions are under \(\mathsf{E}^{\text{th}}_{\mathsf{EG}}\) against the public key \(\mathsf{pk}_{\mathsf{EG}}\) generated by the mix-servers earlier. The mix-servers shuffle these (real and fake) encrypted signatures _in the reverse direction_, going from \(\mathcal{M}_{m}\) to \(\mathcal{M}_{1}\), with each \(\mathcal{M}_{k}\) using permutation \((\pi^{(k)})^{-1}\), where \(\pi^{(k)}\) is the permutation it applied during the initial Mix protocol. The encrypted signatures \(\boldsymbol{\epsilon}_{\boldsymbol{\sigma}}\) thus obtained at the end of the shuffle match the order of the commitment list \(\boldsymbol{\gamma}\), i.e., \(\boldsymbol{\epsilon}_{\boldsymbol{\sigma}_{i}}\) is an encryption of a signature on the value \(\boldsymbol{v}_{i}\) committed by \(\boldsymbol{\gamma}_{i}\) (the encrypted signature being real only if \(\boldsymbol{v}_{i}\in\boldsymbol{v}^{\prime}_{J}\)). \((\mathcal{M}_{k})_{k\in[m]}\) then use the multiplicative homomorphism of \(\mathsf{E}^{\text{th}}_{\mathsf{EG}}\) to jointly obtain encryptions \(\boldsymbol{\tilde{\epsilon}}_{\boldsymbol{\sigma}_{i}}\leftarrow\boldsymbol{ \epsilon}_{\boldsymbol{\sigma}_{i}}^{\sum_{k\in[m]}\boldsymbol{b}_{i}^{(k)}}\) of blinded signatures \(\boldsymbol{\tilde{\sigma}}_{i}\), where each \(\mathcal{M}_{k}\) contributes blinding factors \(\boldsymbol{b}^{(k)}\stackrel{{\$}}{{\leftarrow}}\mathbb{Z}_{q}^{n}\). The plaintext blinded signatures \(\boldsymbol{\tilde{\sigma}}_{i}\) are finally obtained by threshold decryption of \(\boldsymbol{\tilde{\epsilon}}_{\boldsymbol{\sigma}_{i}}\) and published alongside \(\boldsymbol{\gamma}_{i}\). _Remark about fake signatures._ Without the fake signatures, \(\boldsymbol{\tilde{\sigma}}_{i}\) would be published only against \(\boldsymbol{\gamma}_{i}\) that committed a value in \(\boldsymbol{v}^{\prime}_{J}\). This would reveal even for \((\boldsymbol{\gamma}_{i})_{i\not\in I}\) whether they committed a value in \(\boldsymbol{v}^{\prime}_{J}\) or not, violating our secrecy definition. In stage 2, \(\boldsymbol{\tilde{\sigma}}_{i}\) corresponding to each \((\boldsymbol{\gamma}_{i})_{i\in I}\) are looked up in \(O(1)\) time and a DPK of a Boneh-Boyen signature on the value committed by \(\boldsymbol{\gamma}_{i}\) is jointly carried out. 
Each \(\mathcal{M}_{k}\) knows additive shares \(\boldsymbol{v}_{i}^{(k)},\boldsymbol{r}_{i}^{(k)}\) for the commitment openings and \(\boldsymbol{b}_{i}^{(k)}\) for the blinding factors, which enables efficient DPKs (see Section 3.2). All indices for which the DPK passed (i.e., where \(\boldsymbol{\tilde{\sigma}}_{i}\) was not the blinded version of a fake signature) are included in \(\mathcal{Q}\)'s output \(I^{*}\). The amortised complexity of the entire protocol is \(O(n)\). Further, as shown in Section 3.2, if \(\mathcal{Q}\) is not corrupted, any set of fewer than \(m\) mix-servers does not learn whether a DPK passed or not, and thus does not even learn \(I^{*}\).

### 4.2 ZKP of reverse set membership

#### 4.2.1 Single prover case

Now we show how to extend the idea of signature-based set membership from Section 4.1 to prove reverse set membership (first in the single prover case). First note that the Boneh-Boyen signatures used in the ZKP of set membership require messages to be in \(\mathbb{Z}_{q}\). Therefore, an approach that attempts to use Boneh-Boyen signatures on elements of the set of commitments for the reverse set membership proof would not work, because commitments are members of \(\mathbb{G}_{1}\). Recall, however, from Section 3.2 that the BBS+ signature scheme [2] allows one to present a commitment \(\gamma=g_{1}^{v}h_{1}^{r}\) along with a NIZK \(\rho_{\gamma}:=\mathsf{NIZKPK}\{(v,r):\gamma=g_{1}^{v}h_{1}^{r}\}\) to the signer and obtain a BBS+ signature on the value \(v\), without leaking \(v\) to the signer. We exploit this property for the reverse set membership proof.

Our reverse set membership verifier generates a BBS+ signature key pair \(x\xleftarrow{\$}\mathbb{Z}_{q}\), \(y\gets f_{2}^{x}\) and sends quasi-BBS+ signatures \(\hat{\sigma}_{\gamma}:=(S,c,\hat{r})\leftarrow((f_{1}h_{1}^{\hat{r}}\gamma)^{\frac{1}{c+x}},c,\hat{r})\) for each \(\gamma=g_{1}^{v}h_{1}^{r}\in\varPhi\), after verifying \(\rho_{\gamma}\). If the prover knows the commitment randomness \(r\) for each \(\gamma\in\varPhi\), it can use \(\hat{\sigma}_{\gamma}\) to derive a valid BBS+ signature \(\sigma_{v}\leftarrow(S,c,\hat{r}+r)=((f_{1}g_{1}^{v}h_{1}^{\hat{r}+r})^{\frac{1}{c+x}},c,\hat{r}+r)\) on the message \(v\) committed by \(\gamma\) and store \(\sigma_{v}\) indexed by \(v\). To prove that a given \(v\) is committed by some commitment in \(\varPhi\), the prover looks up \(\sigma_{v}\) in \(O(1)\) time, blinds each component of \(\sigma_{v}\) to obtain a blinded signature \(\tilde{\sigma}_{v}\) and proves knowledge of a BBS+ signature on \(v\) by revealing only \(\tilde{\sigma}_{v}\) to the verifier. This is a proof of reverse set membership because the prover can obtain valid BBS+ signatures only on values committed by some \(\gamma\in\varPhi\). Thus, the proof of knowledge fails for \(v\) if no \(\gamma\in\varPhi\) committed \(v\). This protocol also enjoys an \(O(|\phi|+|\varPhi|)\) amortised complexity for proving reverse set membership for each \(v\in\phi\) against the same set of commitments \(\varPhi\). See Appendix A for the detailed protocol.

#### 4.2.2 Distributed case (protocol DB-RSM)

To extend the above idea to the distributed case, we encrypt \(\mathcal{Q}\)'s signatures and shuffle them as in Section 4.1, but with some additional caveats (see Figure 9). First, to obtain BBS+ quasi-signatures, knowledge of commitment openings must be shown, for which we use the \(\boldsymbol{\rho}_{\boldsymbol{\gamma}_{i}}\) supplied by the senders (see Figure 7).
\(\mathcal{Q}\) checks \(\boldsymbol{\rho}_{\boldsymbol{\gamma}_{i}}\) before sending its quasi-signatures \(\boldsymbol{\hat{\sigma}}_{i}\) on \(\boldsymbol{\gamma}_{i}\), providing real quasi-signatures for \((\boldsymbol{\gamma}_{i})_{i\in I}\) and fake ones for \((\boldsymbol{\gamma}_{i})_{i\not\in I}\). \(\mathcal{Q}\) also encrypts each component \((\boldsymbol{S}_{i},\boldsymbol{c}_{i},\boldsymbol{\hat{r}}_{i})\) of \(\boldsymbol{\hat{\sigma}}_{i}\) independently to create encrypted quasi-signatures \(\boldsymbol{\epsilon}_{\boldsymbol{\hat{\sigma}}_{i}}:=(\boldsymbol{\epsilon}_{\boldsymbol{S}_{i}},\boldsymbol{\epsilon}_{\boldsymbol{c}_{i}},\boldsymbol{\epsilon}_{\boldsymbol{\hat{r}}_{i}})\) using the corresponding homomorphic encryption schemes, i.e., \(\mathsf{E}^{\mathsf{th}}_{\mathsf{EG}}\) for \(\boldsymbol{S}_{i}\) and \(\mathsf{E}^{\mathsf{th}}_{\mathsf{Pa}}\) for \(\boldsymbol{c}_{i},\boldsymbol{\hat{r}}_{i}\). Next, using \(\boldsymbol{\epsilon}_{\boldsymbol{\hat{\sigma}}_{i}}\), encrypted valid BBS+ signatures on the committed values are derived. Since this simply requires adding the commitment randomness \(\boldsymbol{r}_{i}\) to \(\boldsymbol{\hat{r}}_{i}\), we can do it homomorphically, for which we use \(\boldsymbol{\epsilon}_{\boldsymbol{r}_{i}}\leftarrow\mathsf{E}^{\mathsf{th}}_{\mathsf{Pa}}.\mathsf{Enc}(\mathsf{pk}_{\mathsf{Pa}},\boldsymbol{r}_{i})\) uploaded by the senders (see Figure 7). Thus, encrypted (real and fake) BBS+ signatures \(\boldsymbol{\epsilon}_{\boldsymbol{\sigma}_{i}}:=(\boldsymbol{\epsilon}_{\boldsymbol{S}_{i}},\boldsymbol{\epsilon}_{\boldsymbol{c}_{i}},\boldsymbol{\epsilon}_{\boldsymbol{\mathsf{r}}_{i}}:=\boldsymbol{\epsilon}_{\boldsymbol{\hat{r}}_{i}}\boldsymbol{\epsilon}_{\boldsymbol{r}_{i}})_{i\in[n]}\) are obtained next to each commitment \(\boldsymbol{\gamma}_{i}\) in the input list.

Now we need to obtain blinded BBS+ signatures on the plaintext values in the permuted list \(\boldsymbol{v}^{\prime}\). This is done by letting \((\mathcal{M}_{k})_{k\in[m]}\) engage in another mixing protocol, this time for the encrypted BBS+ signatures \(\boldsymbol{\epsilon}_{\boldsymbol{\sigma}_{i}}\) and in the forward direction from \(\mathcal{M}_{1}\) to \(\mathcal{M}_{m}\), where each \(\mathcal{M}_{k}\) uses the same permutation \(\pi^{(k)}\) it used during mixing. This time, each component \((\boldsymbol{\epsilon}_{\boldsymbol{S}},\boldsymbol{\epsilon}_{\boldsymbol{c}},\boldsymbol{\epsilon}_{\boldsymbol{\mathsf{r}}})\) is re-encrypted individually using the encryption schemes \((\mathsf{E}^{\mathsf{th}}_{\mathsf{EG}},\mathsf{E}^{\mathsf{th}}_{\mathsf{Pa}},\mathsf{E}^{\mathsf{th}}_{\mathsf{Pa}})\) respectively. The permuted and re-encrypted signatures \((\boldsymbol{\epsilon}_{\boldsymbol{S}}^{\prime},\boldsymbol{\epsilon}_{\boldsymbol{c}}^{\prime},\boldsymbol{\epsilon}_{\boldsymbol{\mathsf{r}}}^{\prime})\) thus obtained at the end of the shuffle are then blinded homomorphically to obtain \((\boldsymbol{\tilde{\epsilon}}_{\boldsymbol{S}}^{\prime},\boldsymbol{\tilde{\epsilon}}_{\boldsymbol{c}}^{\prime},\boldsymbol{\tilde{\epsilon}}_{\boldsymbol{\mathsf{r}}}^{\prime})\), which are then individually threshold-decrypted. The blinded BBS+ signatures \(\boldsymbol{\tilde{\sigma}}_{j}^{\prime}:=(\boldsymbol{\tilde{S}}_{j}^{\prime},\boldsymbol{\tilde{c}}_{j}^{\prime},\boldsymbol{\tilde{r}}_{j}^{\prime})\) thus obtained are published alongside the corresponding \(\boldsymbol{v}_{j}^{\prime}\).

Figure 9: The DB-RSM protocol (participants: mix-servers \((\mathcal{M}_{k})_{k\in[m]}\) and querier \(\mathcal{Q}\); \(\mathcal{Q}\)'s output: \(J^{*}:=\{j\in J\mid\boldsymbol{v}^{\prime}_{j}=\boldsymbol{v}_{i}\text{ for some }i\in I\}\))
_Remark about homomorphic blinding._ Note that the encryption scheme \(\mathsf{E}^{\mathsf{th}}_{\mathsf{Pa}}\) of the ciphertexts \(\boldsymbol{\epsilon}_{\boldsymbol{c}}^{\prime},\boldsymbol{\epsilon}_{\boldsymbol{\mathsf{r}}}^{\prime}\) is homomorphic in \(\mathbb{Z}_{N}\), which induces addition modulo \(N\) in the plaintext space, not modulo \(q\). Thus, blinding by blinding factors drawn from \(\mathbb{Z}_{q}\) would not be perfectly hiding. To circumvent this issue, we first ensure that \(N\) is much larger than \(q\), so that all additions remain integer additions and do not wrap around \(N\) (this is anyway the case, since \(N\) is usually a 2048-bit modulus, \(q\) is of 254 bits and we perform a small number of homomorphic additions per ciphertext). Second, we pad the blinding factors \(\boldsymbol{b}_{\boldsymbol{c}_{j}},\boldsymbol{b}_{\boldsymbol{r}_{j}}\in\mathbb{Z}_{q}\) for \(\boldsymbol{\epsilon}_{\boldsymbol{c}_{j}}^{\prime},\boldsymbol{\epsilon}_{\boldsymbol{\mathsf{r}}_{j}}^{\prime}\) by much larger offsets \(q\boldsymbol{\chi}_{\boldsymbol{c}_{j}},q\boldsymbol{\chi}_{\boldsymbol{r}_{j}}\), where \(\boldsymbol{\chi}_{\boldsymbol{c}_{j}},\boldsymbol{\chi}_{\boldsymbol{r}_{j}}\stackrel{{\$}}{{\leftarrow}}\mathbb{Z}_{q}\), so that the padded blinding factors \(\boldsymbol{b}_{\boldsymbol{c}_{j}}+q\boldsymbol{\chi}_{\boldsymbol{c}_{j}},\boldsymbol{b}_{\boldsymbol{r}_{j}}+q\boldsymbol{\chi}_{\boldsymbol{r}_{j}}\) are identically distributed to uniform samples from \(\mathbb{Z}_{q^{2}}\) and provide an almost perfect integer blinding for the messages \(\boldsymbol{c}_{\pi(j)}\in\mathbb{Z}_{q}\) and \(\boldsymbol{\hat{r}}_{\pi(j)}+\boldsymbol{r}_{\pi(j)}\in\mathbb{Z}_{2q}\). The offsets are removed by reducing the decryptions \(\boldsymbol{\tilde{c}}_{j}^{\prime\prime},\boldsymbol{\tilde{r}}_{j}^{\prime\prime}\) of \(\boldsymbol{\tilde{\epsilon}}_{\boldsymbol{c}_{j}}^{\prime},\boldsymbol{\tilde{\epsilon}}_{\boldsymbol{\mathsf{r}}_{j}}^{\prime}\) modulo \(q\).

In stage 2, for each \(j\in J\), a DPK is given that effectively proves that the provers know shares \(\boldsymbol{b}_{\boldsymbol{S}_{j}}^{(k)},\boldsymbol{b}_{\boldsymbol{c}_{j}}^{(k)},\boldsymbol{b}_{\boldsymbol{r}_{j}}^{(k)}\) of \(\boldsymbol{b}_{\boldsymbol{S}_{j}},\boldsymbol{b}_{\boldsymbol{c}_{j}},\boldsymbol{b}_{\boldsymbol{r}_{j}}\) such that \((\boldsymbol{\tilde{S}}_{j}^{\prime}g_{1}^{-\boldsymbol{b}_{\boldsymbol{S}_{j}}},\ \boldsymbol{\tilde{c}}_{j}^{\prime}-\boldsymbol{b}_{\boldsymbol{c}_{j}}\bmod q,\ \boldsymbol{\tilde{r}}_{j}^{\prime}-\boldsymbol{b}_{\boldsymbol{r}_{j}}\bmod q)\) is a valid BBS+ signature on the message \(\boldsymbol{v}_{j}^{\prime}\) under \(\mathcal{Q}\)'s verification key \(y\), i.e., that it satisfies the BBS+ signature verification equation \(e(\boldsymbol{\tilde{S}}_{j}^{\prime}g_{1}^{-\boldsymbol{b}_{\boldsymbol{S}_{j}}},\ yf_{2}^{\boldsymbol{\tilde{c}}_{j}^{\prime}-\boldsymbol{b}_{\boldsymbol{c}_{j}}})=e(f_{1}g_{1}^{\boldsymbol{v}_{j}^{\prime}}h_{1}^{\boldsymbol{\tilde{r}}_{j}^{\prime}-\boldsymbol{b}_{\boldsymbol{r}_{j}}},f_{2})\) (see Equation 4; Lemma 2).
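The modular bookkeeping in the blinding remark above can be sanity-checked numerically before moving on. The toy Python below (illustrative parameters; plain integer addition modulo \(N\) stands in for \(\mathsf{E}^{\mathsf{th}}_{\mathsf{Pa}}\)'s homomorphic addition) confirms that the \(q\boldsymbol{\chi}\) offset vanishes after reduction modulo \(q\):

```python
# Toy check of blinding mod N followed by unblinding mod q.
import secrets

q = 1019
N = q ** 4                       # stands in for a much larger Paillier modulus

msg = secrets.randbelow(2 * q)   # e.g. r_hat + r lies in Z_{2q}
b = secrets.randbelow(q)         # the blinding factor actually used mod q
chi = secrets.randbelow(q)       # offset: b + q*chi is uniform over Z_{q^2}

blinded = (msg + b + q * chi) % N    # homomorphic addition; no wrap-around mod N
decrypted = blinded % q              # reduction mod q removes the q*chi offset
assert (decrypted - b) % q == msg % q   # unblinding recovers the message mod q
```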
The actual proof of knowledge we use in Figure 9 is designed so that it follows the format of Section 3.2, for which DPKs can be constructed efficiently if each prover has the requisite shares of the witness. This is indeed the case: shares for \(\boldsymbol{b}_{\boldsymbol{S}_{j}},\boldsymbol{b}_{\boldsymbol{c}_{j}},\boldsymbol{b}_{\boldsymbol{r}_{j}},\delta_{0}\) are directly available to each mix-server; shares of \(\delta_{1}\) and \(\delta_{2}\) are obtained using Beaver's \(\mathsf{Mult}\) algorithm for multiplicative secret sharing (see Section 3.2).

## 5 Security analysis

Let \(\Pi_{\mathsf{TM}}\) be the construction presented in Figure 7.

Theorem 1 (Completeness): _\(\Pi_{\mathsf{TM}}\) satisfies completeness (Definition 4)._

Proof: It can be inspected that when all the parties are honest, the inputs to protocols \(\mathsf{DB\text{-}SM}\) and \(\mathsf{DB\text{-}RSM}\) satisfy the preconditions mentioned in Figures 8 and 9, respectively. This is also true for the reruns of \(\mathsf{DB\text{-}SM}\) and \(\mathsf{DB\text{-}RSM}\) in the \(\mathsf{BTraceIn}/\mathsf{BTraceOut}\) calls against the complement sets. Thus, by Lemma 1, \(I^{*}\) and \(I^{*}_{\mathsf{c}}\) obtained by \(\mathcal{Q}\) in a \(\mathsf{BTraceIn}\) call satisfy \(I^{*}=\{i\in I\mid\boldsymbol{v}_{i}\in\boldsymbol{v}_{J}^{\prime}\}\) and \(I^{*}_{\mathsf{c}}=\{i\in I\mid\boldsymbol{v}_{i}\in\boldsymbol{v}_{[n]\setminus J}^{\prime}\}\). Also, the correctness of \(\mathsf{Mix}\) implies that for each \(i\in I\subseteq[n]\), \(\boldsymbol{v}_{i}\in\{\boldsymbol{v}_{j}^{\prime}\mid j\in[n]\}\), i.e., \(\boldsymbol{v}_{i}\in\boldsymbol{v}_{J}^{\prime}\cup\boldsymbol{v}_{[n]\setminus J}^{\prime}\) for any \(J\subseteq[n]\). Thus, \(I^{*}\cup I^{*}_{\mathsf{c}}=I\), which implies that \(\mathcal{Q}\) does not abort and outputs a \(\boldsymbol{c}_{I^{*}}\) that satisfies the first condition of \(\mathsf{Exp}_{\mathsf{completeness}}\) (Figure 3). By a similar argument using Lemma 2, it follows that \(\mathcal{Q}\) does not abort in a \(\mathsf{BTraceOut}\) call and outputs a \(\boldsymbol{v}_{J^{*}}^{\prime}\) that satisfies the second condition of \(\mathsf{Exp}_{\mathsf{completeness}}\).

Lemma 1: _If the inputs \((\mathsf{pk_{EG}},\boldsymbol{\gamma},\boldsymbol{v}^{\prime},I,J,(\mathcal{M}_{k}[\![\mathsf{sk}_{\mathsf{EG}}^{(k)},\pi^{(k)},\boldsymbol{v}^{(k)},\boldsymbol{r}^{(k)}]\!])_{k\in[m]})\) of a \(\mathsf{DB\text{-}SM}\) invocation satisfy the preconditions mentioned in Figure 8 and all the parties are honest, then \(\mathcal{Q}\) outputs \(I^{*}=\{i\in I\mid\exists j\in J:\sum_{k\in[m]}\boldsymbol{v}_{i}^{(k)}=\boldsymbol{v}_{j}^{\prime}\}\)._

Proof: Note that for honest mix-servers, the \(i^{\text{th}}\) DPK passes iff each \(\mathcal{M}_{k}\) uses \((\boldsymbol{v}_{i}^{(k)},\boldsymbol{r}_{i}^{(k)},\boldsymbol{b}_{i}^{(k)})\) such that \((\boldsymbol{v}_{i},\boldsymbol{r}_{i},\boldsymbol{b}_{i}):=(\sum_{k\in[m]}\boldsymbol{v}_{i}^{(k)},\sum_{k\in[m]}\boldsymbol{r}_{i}^{(k)},\sum_{k\in[m]}\boldsymbol{b}_{i}^{(k)})\) satisfy the predicate \(p_{\texttt{BB}_{i}}\). The equation \(\boldsymbol{\gamma}_{i}=g_{1}^{\boldsymbol{v}_{i}}h_{1}^{\boldsymbol{r}_{i}}\) of \(p_{\texttt{BB}_{i}}\) is trivially satisfied by the correctness of \((\boldsymbol{v}_{i}^{(k)},\boldsymbol{r}_{i}^{(k)})\). Next, note that \(\boldsymbol{\tilde{\sigma}}_{i}=(\boldsymbol{\sigma}^{\prime}_{\pi^{-1}(i)})^{\sum_{k\in[m]}\boldsymbol{b}_{i}^{(k)}}\), by the homomorphism of \(\mathsf{E}^{\mathsf{th}}_{\mathsf{EG}}\).
Let \(j\in[n]\) be the index to which index \(i\) is mapped after the permutation, i.e., \(i=\pi(j)\) or equivalently \(\pi^{-1}(i)=j\). Correctness of the input conditions implies \(\boldsymbol{v}_{j}^{\prime}=\sum_{k\in[m]}\boldsymbol{v}_{\pi(j)}^{(k)}=\sum_{k\in[m]}\boldsymbol{v}_{i}^{(k)}\). Thus, \(\boldsymbol{\tilde{\sigma}}_{i}=(\boldsymbol{\sigma}_{j}^{\prime})^{\sum_{k\in[m]}\boldsymbol{b}_{i}^{(k)}}\), which equals \(g_{1}^{\frac{\sum_{k\in[m]}\boldsymbol{b}_{i}^{(k)}}{x+\boldsymbol{v}_{j}^{\prime}}}=g_{1}^{\frac{\sum_{k\in[m]}\boldsymbol{b}_{i}^{(k)}}{x+\sum_{k\in[m]}\boldsymbol{v}_{i}^{(k)}}}\) if \(j\in J\) and \(g_{1}^{\sum_{k\in[m]}\boldsymbol{b}_{i}^{(k)}}\) if \(j\not\in J\). In the first case, the second equation of \(p_{\texttt{BB}_{i}}\) passes; in the second case, it fails. Since the DPK is run only for \(i\in I\), \(I^{*}\) is exactly as claimed.

Lemma 2: _If the inputs \((\mathsf{pk}_{\texttt{EG}},\mathsf{pk}_{\texttt{Pa}},\boldsymbol{\gamma},\boldsymbol{\rho}_{\boldsymbol{\gamma}},\boldsymbol{\epsilon}_{\boldsymbol{r}},\boldsymbol{v}^{\prime},I,J,(\mathcal{M}_{k}[\![\mathsf{sk}_{\texttt{EG}}^{(k)},\mathsf{sk}_{\texttt{Pa}}^{(k)},\pi^{(k)},\boldsymbol{v}^{(k)},\boldsymbol{r}^{(k)}]\!])_{k\in[m]})\) of a \(\mathsf{DB\text{-}RSM}\) invocation satisfy the preconditions mentioned in Figure 9 and all the parties are honest, then \(\mathcal{Q}\) outputs \(J^{*}=\{j\in J\mid\exists i\in I:\sum_{k\in[m]}\boldsymbol{v}_{i}^{(k)}=\boldsymbol{v}_{j}^{\prime}\}\)._

Proof: As in Lemma 1, let \(i\) be such that \(i=\pi(j)\). By the correctness of the inputs, \(\boldsymbol{v}_{j}^{\prime}=\sum_{k\in[m]}\boldsymbol{v}_{i}^{(k)}\) and \(\boldsymbol{\epsilon}_{\boldsymbol{r}_{i}}\leftarrow\mathsf{E}_{\texttt{Pa}}^{\text{th}}.\mathsf{Enc}(\mathsf{pk}_{\texttt{Pa}},\sum_{k\in[m]}\boldsymbol{r}_{i}^{(k)})\). Then, it can be inspected that the following equalities hold:

* \(\boldsymbol{\tilde{S}}_{j}^{\prime}=(f_{1}g_{1}^{\sum_{k\in[m]}\boldsymbol{v}_{i}^{(k)}}h_{1}^{\boldsymbol{\hat{r}}_{i}+\sum_{k\in[m]}\boldsymbol{r}_{i}^{(k)}})^{\frac{1}{x+\boldsymbol{c}_{i}}}\ g_{1}^{\sum_{k\in[m]}\boldsymbol{b}_{\boldsymbol{S}_{j}}^{(k)}}\) if \(i\in I\), else \(f_{1}^{0}\,g_{1}^{\sum_{k\in[m]}\boldsymbol{b}_{\boldsymbol{S}_{j}}^{(k)}}\),
* \(\boldsymbol{\tilde{c}}_{j}^{\prime}=\boldsymbol{c}_{i}+\sum_{k\in[m]}\boldsymbol{b}_{\boldsymbol{c}_{j}}^{(k)}\),
* \(\boldsymbol{\tilde{r}}_{j}^{\prime}=\boldsymbol{\hat{r}}_{i}+\sum_{k\in[m]}\boldsymbol{r}_{i}^{(k)}+\sum_{k\in[m]}\boldsymbol{b}_{\boldsymbol{r}_{j}}^{(k)}\).

Therefore, \((\boldsymbol{\tilde{S}}_{j}^{\prime}g_{1}^{-\boldsymbol{b}_{\boldsymbol{S}_{j}}},\boldsymbol{\tilde{c}}_{j}^{\prime}-\boldsymbol{b}_{\boldsymbol{c}_{j}},\boldsymbol{\tilde{r}}_{j}^{\prime}-\boldsymbol{b}_{\boldsymbol{r}_{j}})\) satisfies the BBS+ verification equation \(e(\boldsymbol{\tilde{S}}_{j}^{\prime}g_{1}^{-\boldsymbol{b}_{\boldsymbol{S}_{j}}},yf_{2}^{\boldsymbol{\tilde{c}}_{j}^{\prime}-\boldsymbol{b}_{\boldsymbol{c}_{j}}})=e(f_{1}g_{1}^{\boldsymbol{v}_{j}^{\prime}}h_{1}^{\boldsymbol{\tilde{r}}_{j}^{\prime}-\boldsymbol{b}_{\boldsymbol{r}_{j}}},f_{2})\) if \(i\in I\), where \((\boldsymbol{b}_{\boldsymbol{S}_{j}},\boldsymbol{b}_{\boldsymbol{c}_{j}},\boldsymbol{b}_{\boldsymbol{r}_{j}}):=(\sum_{k\in[m]}\boldsymbol{b}_{\boldsymbol{S}_{j}}^{(k)},\sum_{k\in[m]}\boldsymbol{b}_{\boldsymbol{c}_{j}}^{(k)},\sum_{k\in[m]}\boldsymbol{b}_{\boldsymbol{r}_{j}}^{(k)})\).
So:

\[e(\boldsymbol{\tilde{S}}_{j}^{\prime}g_{1}^{-\boldsymbol{b}_{\boldsymbol{S}_{j}}},yf_{2}^{\boldsymbol{\tilde{c}}_{j}^{\prime}-\boldsymbol{b}_{\boldsymbol{c}_{j}}})=e(f_{1}g_{1}^{\boldsymbol{v}_{j}^{\prime}}h_{1}^{\boldsymbol{\tilde{r}}_{j}^{\prime}-\boldsymbol{b}_{\boldsymbol{r}_{j}}},f_{2}) \tag{4}\]
\[\Leftrightarrow\frac{e(\boldsymbol{\tilde{S}}_{j}^{\prime},yf_{2}^{\boldsymbol{\tilde{c}}_{j}^{\prime}-\boldsymbol{b}_{\boldsymbol{c}_{j}}})}{e(g_{1}^{\boldsymbol{b}_{\boldsymbol{S}_{j}}},yf_{2}^{\boldsymbol{\tilde{c}}_{j}^{\prime}-\boldsymbol{b}_{\boldsymbol{c}_{j}}})}=e(f_{1}g_{1}^{\boldsymbol{v}_{j}^{\prime}}h_{1}^{\boldsymbol{\tilde{r}}_{j}^{\prime}-\boldsymbol{b}_{\boldsymbol{r}_{j}}},f_{2})\]
\[\Leftrightarrow\frac{e(\boldsymbol{\tilde{S}}_{j}^{\prime},yf_{2}^{\boldsymbol{\tilde{c}}_{j}^{\prime}})}{e(f_{1}g_{1}^{\boldsymbol{v}_{j}^{\prime}}h_{1}^{\boldsymbol{\tilde{r}}_{j}^{\prime}},f_{2})}=e(\boldsymbol{\tilde{S}}_{j}^{\prime},f_{2})^{\boldsymbol{b}_{\boldsymbol{c}_{j}}}e(g_{1},yf_{2}^{\boldsymbol{\tilde{c}}_{j}^{\prime}})^{\boldsymbol{b}_{\boldsymbol{S}_{j}}}e(h_{1},f_{2})^{-\boldsymbol{b}_{\boldsymbol{r}_{j}}}e(g_{1},f_{2})^{-\boldsymbol{b}_{\boldsymbol{S}_{j}}\boldsymbol{b}_{\boldsymbol{c}_{j}}}\]
\[\Leftrightarrow\mathfrak{z}_{2}=\mathfrak{g}_{1}^{\boldsymbol{b}_{\boldsymbol{c}_{j}}}\mathfrak{g}_{2}^{\boldsymbol{b}_{\boldsymbol{S}_{j}}}\mathfrak{h}_{1}^{\boldsymbol{b}_{\boldsymbol{r}_{j}}}\mathfrak{h}_{2}^{\boldsymbol{b}_{\boldsymbol{S}_{j}}\boldsymbol{b}_{\boldsymbol{c}_{j}}},\]

where \(\mathfrak{z}_{2},\mathfrak{g}_{1},\mathfrak{g}_{2},\mathfrak{h}_{1},\mathfrak{h}_{2}\) abbreviate the corresponding pairing values in the predicate \(p_{\texttt{BBS+}_{j}}\). Thus, with the mix-servers holding shares of \((\boldsymbol{b}_{\boldsymbol{S}_{j}},\boldsymbol{b}_{\boldsymbol{c}_{j}},\boldsymbol{b}_{\boldsymbol{r}_{j}})\), \(\delta_{1}:=\boldsymbol{b}_{\boldsymbol{S}_{j}}\boldsymbol{b}_{\boldsymbol{c}_{j}}\) and \(\delta_{2}:=\delta_{0}\boldsymbol{b}_{\boldsymbol{c}_{j}}\), all equations of the predicate \(p_{\texttt{BBS+}_{j}}\) of the DPK are satisfied if \(i\in I\). If \(i\not\in I\), the last equation of \(p_{\texttt{BBS+}_{j}}\) is not satisfied. The claim follows.

Theorem 2 (Soundness): _Under the discrete logarithm assumption in \(\mathbb{G}_{1}\) and the \(n\)-SDH assumption in \((\mathbb{G}_{1},\mathbb{G}_{2})\) [11], \(\Pi_{\mathsf{TM}}\) satisfies soundness (Definition 5)._

Proof: Suppose for contradiction that there is a PPT adversary \(\mathcal{A}\) such that \(\mathsf{Exp}_{\mathsf{soundness}}\) (Figure 4) outputs \(1\) with non-negligible probability. Note first that \(\mathcal{Q}\) always outputs \(I^{*}\subseteq I\) in \(\mathsf{DB\text{-}SM}\) and \(J^{*}\subseteq J\) in \(\mathsf{DB\text{-}RSM}\). Thus, \(\boldsymbol{c}_{I^{*}}\subseteq\boldsymbol{c}_{I}\) and \(\boldsymbol{v}^{\prime}_{J^{*}}\subseteq\boldsymbol{v}^{\prime}_{J}\). We now consider the following cases:

_Case 1: \(\boldsymbol{c}_{I^{*}}\neq\{\boldsymbol{c}_{i}\in\boldsymbol{c}_{I}\mid\boldsymbol{v}_{i}\in\boldsymbol{v}^{\prime}_{J}\}\)._ This leads to the following sub-cases:

* _Case 1.1:_ \(\exists\boldsymbol{c}_{i}\in\boldsymbol{c}_{I^{*}}:\boldsymbol{v}_{i}\not\in\boldsymbol{v}^{\prime}_{J}\). As per Figure 7, \(\boldsymbol{c}_{i}\) contains \(\boldsymbol{\gamma}_{i}=g_{1}^{\boldsymbol{v}_{i}}h_{1}^{\boldsymbol{r}_{i}}\) for some \(\boldsymbol{r}_{i}\in\mathbb{Z}_{q}\). Since \(\boldsymbol{c}_{i}\in\boldsymbol{c}_{I^{*}}\), \(\boldsymbol{\gamma}_{i}\in\boldsymbol{\gamma}_{I^{*}}\).
Thus, by Lemma 3, a PPT extractor can extract a tuple \((j^{*},r^{*})\) such that \(\boldsymbol{\gamma}_{i}=g_{1}^{\boldsymbol{v}^{\prime}_{j^{*}}}h_{1}^{r^{*}}\) and \(\boldsymbol{v}^{\prime}_{j^{*}}\in\boldsymbol{v}^{\prime}_{J}\). The requirement \(\boldsymbol{v}_{i}\not\in\boldsymbol{v}^{\prime}_{J}\) thus implies that \(\boldsymbol{v}_{i}\neq\boldsymbol{v}^{\prime}_{j^{*}}\). This allows producing two different openings \((\boldsymbol{v}^{\prime}_{j^{*}},r^{*})\) and \((\boldsymbol{v}_{i},\boldsymbol{r}_{i})\) for the Pedersen commitment \(\boldsymbol{\gamma}_{i}\), which is a contradiction under the discrete logarithm assumption in \(\mathbb{G}_{1}\).

* _Case 1.2:_ \(\exists\boldsymbol{c}_{i}\in\boldsymbol{c}_{I\setminus I^{*}}:\boldsymbol{v}_{i}\in\boldsymbol{v}^{\prime}_{J}\). Note that since \(\mathcal{Q}\) produces a \(\boldsymbol{c}_{I^{*}}\) and does not abort, it must be that \(I^{*}\cup I^{*}_{\mathsf{c}}=I\) during the \(\mathsf{BTraceIn}\) call. Thus, \(I^{*}_{\mathsf{c}}=I\setminus I^{*}\). Further, since all the \(\boldsymbol{v}^{\prime}_{j}\)s are distinct, \(\boldsymbol{v}_{i}\in\boldsymbol{v}^{\prime}_{J}\implies\boldsymbol{v}_{i}\not\in\boldsymbol{v}^{\prime}_{[n]\setminus J}\). Thus, this case can be restated as follows: \(\exists\boldsymbol{c}_{i}\in\boldsymbol{c}_{I^{*}_{\mathsf{c}}}:\boldsymbol{v}_{i}\not\in\boldsymbol{v}^{\prime}_{[n]\setminus J}\). Thus, by applying Lemma 3 for the second \(\mathsf{DB\text{-}SM}\) call in \(\mathsf{BTraceIn}\) and proceeding as in the previous case, we conclude that this case is not possible.

_Case 2: \(\boldsymbol{v}^{\prime}_{J^{*}}\neq\{\boldsymbol{v}^{\prime}_{j}\in\boldsymbol{v}^{\prime}_{J}\mid\boldsymbol{v}^{\prime}_{j}\in\boldsymbol{v}_{I}\}\)._ This leads to the following sub-cases:

* _Case 2.1:_ \(\exists\boldsymbol{v}^{\prime}_{j}\in\boldsymbol{v}^{\prime}_{J^{*}}:\boldsymbol{v}^{\prime}_{j}\not\in\boldsymbol{v}_{I}\). Since \(\boldsymbol{v}^{\prime}_{j}\in\boldsymbol{v}^{\prime}_{J^{*}}\), by Lemma 4, a PPT extractor can extract a tuple \((i^{*},r^{*})\) such that \(\boldsymbol{\gamma}_{i^{*}}=g_{1}^{\boldsymbol{v}^{\prime}_{j}}h_{1}^{r^{*}}\) and \(i^{*}\in I\). Since \(i^{*}\in I\), the requirement \(\boldsymbol{v}^{\prime}_{j}\not\in\boldsymbol{v}_{I}\) implies \(\boldsymbol{v}^{\prime}_{j}\neq\boldsymbol{v}_{i^{*}}\). As per Figure 7, \(\boldsymbol{\gamma}_{i^{*}}=g_{1}^{\boldsymbol{v}_{i^{*}}}h_{1}^{\boldsymbol{r}_{i^{*}}}\) for some \(\boldsymbol{r}_{i^{*}}\in\mathbb{Z}_{q}\). This allows producing two different openings \((\boldsymbol{v}^{\prime}_{j},r^{*})\) and \((\boldsymbol{v}_{i^{*}},\boldsymbol{r}_{i^{*}})\) for \(\boldsymbol{\gamma}_{i^{*}}\), which leads to a contradiction.

* _Case 2.2:_ \(\exists\boldsymbol{v}^{\prime}_{j}\in\boldsymbol{v}^{\prime}_{J\setminus J^{*}}:\boldsymbol{v}^{\prime}_{j}\in\boldsymbol{v}_{I}\). Note that since \(\mathcal{Q}\) produces a \(\boldsymbol{v}^{\prime}_{J^{*}}\) and does not abort, it must be that \(J^{*}\cup J^{*}_{\mathsf{c}}=J\) during the \(\mathsf{BTraceOut}\) call. Thus, \(J^{*}_{\mathsf{c}}=J\setminus J^{*}\). Further, since all the \(\boldsymbol{v}_{i}\)s are distinct, \(\boldsymbol{v}^{\prime}_{j}\in\boldsymbol{v}_{I}\implies\boldsymbol{v}^{\prime}_{j}\not\in\boldsymbol{v}_{[n]\setminus I}\). Thus, this case can be restated as follows: \(\exists\boldsymbol{v}^{\prime}_{j}\in\boldsymbol{v}^{\prime}_{J^{*}_{\mathsf{c}}}:\boldsymbol{v}^{\prime}_{j}\not\in\boldsymbol{v}_{[n]\setminus I}\).
Thus, by applying Lemma 4 for the second \(\mathsf{DB\text{-}RSM}\) call in \(\mathsf{BTraceOut}\) and proceeding as in the previous case, we conclude that this case is not possible.

Lemma 3: _If \(\mathcal{Q}\) participates in the \(\mathsf{DB\text{-}SM}\) protocol with common input \((\mathsf{pk}_{\mathsf{EG}},\boldsymbol{\gamma},\boldsymbol{v}^{\prime},I,J)\) and outputs \(I^{*}\), then for all PPT adversaries \(\mathcal{A}\) controlling \((\mathcal{M}_{k})_{k\in[m]}\) and for all \(\boldsymbol{\gamma}_{i}\in\boldsymbol{\gamma}_{I^{*}}\), there exists a PPT extractor \(\mathcal{E}\) that outputs a \((j^{*},r^{*})\) such that \(\boldsymbol{\gamma}_{i}=g_{1}^{\boldsymbol{v}^{\prime}_{j^{*}}}h_{1}^{r^{*}}\wedge\boldsymbol{v}^{\prime}_{j^{*}}\in\boldsymbol{v}^{\prime}_{J}\)._

Proof: Note that the verification steps of \(\mathcal{Q}\) in \(\mathsf{DB\text{-}SM}\) for a given \(\boldsymbol{\gamma}_{i}\in\boldsymbol{\gamma}_{I}\) are exactly those of the verifier of the single-prover ZKP of set membership [15] for commitment \(\boldsymbol{\gamma}_{i}\) against the set \(\phi:=\boldsymbol{v}^{\prime}_{J}\), when extended to our asymmetric pairing setting. Thus, for each \(\boldsymbol{\gamma}_{i}\in\boldsymbol{\gamma}_{I^{*}}\), by the special soundness of the single-prover ZKP [15] under the \(n\)-Strong Diffie-Hellman assumption in \((\mathbb{G}_{1},\mathbb{G}_{2})\), a PPT extractor \(\mathcal{E}^{\prime}\) can extract \(v^{*},r^{*}\) such that \(\boldsymbol{\gamma}_{i}=g_{1}^{v^{*}}h_{1}^{r^{*}}\wedge v^{*}\in\phi\). \(\mathcal{E}\) simply runs \(\mathcal{E}^{\prime}\), finds \(j^{*}\) such that \(v^{*}=\boldsymbol{v}^{\prime}_{j^{*}}\) and outputs \((j^{*},r^{*})\).

Lemma 4: _If \(\mathcal{Q}\) participates in the \(\mathsf{DB\text{-}RSM}\) protocol with common input \((\mathsf{pk}_{\mathsf{EG}},\mathsf{pk}_{\mathsf{Pa}},\boldsymbol{\gamma},\boldsymbol{\rho}_{\boldsymbol{\gamma}},\boldsymbol{\epsilon}_{\boldsymbol{r}},\boldsymbol{v}^{\prime},I,J)\) and outputs \(J^{*}\), then for all PPT adversaries \(\mathcal{A}\) controlling \((\mathcal{M}_{k})_{k\in[m]}\) and for all \(\boldsymbol{v}^{\prime}_{j}\in\boldsymbol{v}^{\prime}_{J^{*}}\), there exists a PPT extractor \(\mathcal{E}\) that outputs an \((i^{*},r^{*})\) such that \(\boldsymbol{\gamma}_{i^{*}}=g_{1}^{\boldsymbol{v}^{\prime}_{j}}h_{1}^{r^{*}}\wedge i^{*}\in I\)._

Proof: We construct an algorithm \(\mathcal{E}\) that for each \(\boldsymbol{v}^{\prime}_{j}\in\boldsymbol{v}^{\prime}_{J^{*}}\) either outputs a desired tuple \((i^{*},r^{*})\) or forges a BBS+ signature, using extractors \(\mathcal{E}_{1}\) for the NIZK proofs of knowledge of commitment openings \(\boldsymbol{\rho}_{\boldsymbol{\gamma}_{i}}\) and \(\mathcal{E}_{2}\) for the DPKs for \(p_{\mathsf{BBS+}_{j}}\). On one end, \(\mathcal{E}\) interacts with the adversary \(\mathcal{A}\) controlling \((\mathcal{M}_{k})_{k\in[m]}\) in the \(\mathsf{DB\text{-}RSM}\) protocol; on the other, with the challenger \(\mathcal{C}\) of the BBS+ signature unforgeability game. See Figure 10. For each \(i\in I\), \(\mathcal{E}\) first extracts the opening \((\boldsymbol{v}_{i},\boldsymbol{r}_{i})\) of commitment \(\boldsymbol{\gamma}_{i}\) from the NIZK proof \(\boldsymbol{\rho}_{\boldsymbol{\gamma}_{i}}\) using extractor \(\mathcal{E}_{1}\) (line 2).
It then obtains the BBS+ public key \(y\) and signatures for each \((\boldsymbol{v}_{i})_{i\in I}\) from the BBS+ challenger \(\mathcal{C}\) (lines 3-4), derives quasi-signatures from them using \(\boldsymbol{r}_{i}\) (line 6), and forwards the quasi-signatures and their encrypted versions to \(\mathcal{A}\), along with fake signatures for \(i\not\in I\) -- similar to \(\mathcal{Q}\) (lines 5-10). \(\mathcal{A}\) responds with blinded permuted signatures \(\boldsymbol{\tilde{\sigma}}^{\prime}\) (line 11), as \((\mathcal{M}_{k})_{k\in[m]}\) do in the real protocol at the end of stage 1. In stage 2, for each \(j\in J\), \(\mathcal{E}\) extracts the blinding factors for the blinded signature \(\boldsymbol{\tilde{\sigma}}^{\prime}_{j}\) using extractor \(\mathcal{E}_{2}\) (lines 13-14), from which it obtains an unblinded signature \(\boldsymbol{\sigma}^{\prime}_{j}\) (line 15). \(\mathcal{E}\) then attempts to find an opening \(\boldsymbol{v}_{i}\) extracted by \(\mathcal{E}_{1}\) for some commitment \(\boldsymbol{\gamma}_{i}\in\boldsymbol{\gamma}_{I}\) such that \(\boldsymbol{v}_{i}=\boldsymbol{v}^{\prime}_{j}\), and returns the corresponding tuple \((i,\boldsymbol{r}_{i})\) (lines 16-18). If no such \(\boldsymbol{v}_{i}\) exists, it outputs the message-signature tuple \((\boldsymbol{v}^{\prime}_{j},\boldsymbol{\sigma}^{\prime}_{j})\) (line 19).

Note that \(\mathcal{Q}\) produced some \(J^{*}\) and did not abort. In this case, the view produced by \(\mathcal{E}\) to \(\mathcal{A}\) is identical to that produced by \(\mathcal{Q}\) to \((\mathcal{M}_{k})_{k\in[m]}\). Further, for some \(\boldsymbol{v}^{\prime}_{j}\in\boldsymbol{v}^{\prime}_{J^{*}}\), if \(\boldsymbol{v}_{i}=\boldsymbol{v}^{\prime}_{j}\) for some \(i\in I\), the \((i,\boldsymbol{r}_{i})\) output in line 17 is a desired tuple, since \(\boldsymbol{\gamma}_{i}=g_{1}^{\boldsymbol{v}_{i}}h_{1}^{\boldsymbol{r}_{i}}=g_{1}^{\boldsymbol{v}^{\prime}_{j}}h_{1}^{\boldsymbol{r}_{i}}\) and \(i\in I\) (the first equality follows by the soundness of the proof of knowledge of commitment openings; the second because \(\boldsymbol{v}_{i}=\boldsymbol{v}^{\prime}_{j}\)). If no such \(\boldsymbol{v}_{i}\) exists, we show that the tuple \((\boldsymbol{v}^{\prime}_{j},\boldsymbol{\sigma}^{\prime}_{j})\) output in line 19 is a valid BBS+ signature forgery:

* Since for all \(i\in I\), \(\boldsymbol{v}_{i}\neq\boldsymbol{v}^{\prime}_{j}\), a BBS+ signature for \(\boldsymbol{v}^{\prime}_{j}\) was not queried from \(\mathcal{C}\) in line 3.
* Since \(\boldsymbol{v}^{\prime}_{j}\in\boldsymbol{v}^{\prime}_{J^{*}}\), the DPK for \(p_{\texttt{BBS+}_{j}}\) must have passed. Thus, by the soundness of DPKs:

\[\mathfrak{z}_{1}=\mathfrak{h}_{2}^{\boldsymbol{b}_{\boldsymbol{S}_{j}}}\mathfrak{h}_{3}^{\delta_{0}} \tag{5}\]
\[\mathfrak{z}_{1}^{\boldsymbol{b}_{\boldsymbol{c}_{j}}}=\mathfrak{h}_{2}^{\delta_{1}}\mathfrak{h}_{3}^{\delta_{2}} \tag{6}\]
\[\mathfrak{z}_{2}=\mathfrak{g}_{1}^{\boldsymbol{b}_{\boldsymbol{c}_{j}}}\mathfrak{g}_{2}^{\boldsymbol{b}_{\boldsymbol{S}_{j}}}\mathfrak{h}_{1}^{\boldsymbol{b}_{\boldsymbol{r}_{j}}}\mathfrak{h}_{2}^{\delta_{1}} \tag{7}\]

From Equations 5 and 6, \(\mathfrak{h}_{2}^{\boldsymbol{b}_{\boldsymbol{S}_{j}}\boldsymbol{b}_{\boldsymbol{c}_{j}}}\mathfrak{h}_{3}^{\delta_{0}\boldsymbol{b}_{\boldsymbol{c}_{j}}}=\mathfrak{h}_{2}^{\delta_{1}}\mathfrak{h}_{3}^{\delta_{2}}\). It must be that \(\delta_{1}=\boldsymbol{b}_{\boldsymbol{S}_{j}}\boldsymbol{b}_{\boldsymbol{c}_{j}}\), otherwise two different openings \((\delta_{1},\delta_{2})\) and \((\boldsymbol{b}_{\boldsymbol{S}_{j}}\boldsymbol{b}_{\boldsymbol{c}_{j}},\delta_{0}\boldsymbol{b}_{\boldsymbol{c}_{j}})\) for the Pedersen commitment \(\mathfrak{z}_{1}^{\boldsymbol{b}_{\boldsymbol{c}_{j}}}\) can be produced.
Thus, \(\mathfrak{z}_{2}=\mathfrak{g}_{1}^{\mathbf{b}_{ej}}\mathfrak{g}_{2}^{\mathbf{b}_{Sj}}\mathfrak{h}_{1}^{\mathbf{b}_{rj}}\mathfrak{h}_{2}^{\mathbf{b}_{Sj}\mathbf{b}_{ej}}\), which implies that \(\mathbf{\sigma}^{\prime}_{j}:=(\vec{\mathbf{S}}^{\prime}_{j}g_{1}^{-\mathbf{b}_{Sj}},\vec{\mathbf{c}}^{\prime}_{j}-\mathbf{b}_{\mathbf{c}j},\vec{\mathbf{r}}^{\prime}_{j}-\mathbf{b}_{\mathbf{r}j})\) satisfies the BBS+ signature verification equation on message \(\mathbf{v}^{\prime}_{j}\) under public key \(y\): \(e(\vec{\mathbf{S}}^{\prime}_{j}g_{1}^{-\mathbf{b}_{Sj}},yf_{2}^{\vec{\mathbf{c}}^{\prime}_{j}-\mathbf{b}_{ej}})=e(f_{1}g_{1}^{\mathbf{v}^{\prime}_{j}}h_{1}^{\vec{\mathbf{r}}^{\prime}_{j}-\mathbf{b}_{\mathbf{r}j}},f_{2})\) (see Equation 4; Lemma 2). Since at most \(n\) signature queries could have been made, forging a BBS+ signature is not possible under the \(n\)-Strong Diffie-Hellman assumption in \((\mathbb{G}_{1},\mathbb{G}_{2})\) [2]. Thus, for each \(\mathbf{v}^{\prime}_{j}\in\mathbf{v}^{\prime}_{J^{*}}\), some \(i\in I\) such that \(\mathbf{v}_{i}=\mathbf{v}^{\prime}_{j}\) must exist and a desired tuple \((i,\mathbf{r}_{i})\) must have been produced.

Theorem 3 (Secrecy): _Under the IND-CPA security of \(\mathsf{E}\), the DDH assumption in \(\mathbb{G}_{1}\) and the DCR assumption [60], \(\Pi_{\mathsf{TM}}\) satisfies secrecy (Definition 6) against honest-but-curious adversaries in the random oracle model._

Proof: We prove the theorem by considering the following sequence of hybrid experiments (the complete hybrid experiments are shown in Figures 14 to 33 in Appendix C):

* \(E_{1}\) (Figure 14): \(E_{1}\) is the original secrecy game \(\mathsf{Exp}_{\mathsf{secrecy}}\) instantiated for our protocol.
* \(E_{2}\) (Figure 15): In \(E_{2}\), the NIZK proofs of knowledge \(\mathbf{\rho}_{\mathbf{\gamma}_{i_{0}}}\) and \(\mathbf{\rho}_{\mathbf{\gamma}_{i_{1}}}\) are simulated. \(E_{2}\) is indistinguishable from \(E_{1}\) in the random oracle model because of the ZK property of the NIZK proof.
* \(E_{3}\) (Figure 16): In \(E_{3}\), shares of \(\mathbf{v}_{i},\mathbf{r}_{i}\) for \(i\in\{i_{0},i_{1}\}\) are drawn as \((\mathbf{v}_{i}^{(k)},\mathbf{r}_{i}^{(k)})_{k\neq k^{*}}\stackrel{{\$}}{{\leftarrow}}\mathbb{Z}_{q}\) and \(\mathbf{v}_{i}^{(k^{*})}\leftarrow\mathbf{v}_{i}-\sum_{k\neq k^{*}}\mathbf{v}_{i}^{(k)}\), \(\mathbf{r}_{i}^{(k^{*})}\leftarrow\mathbf{r}_{i}-\sum_{k\neq k^{*}}\mathbf{r}_{i}^{(k)}\). \(E_{3}\) is indistinguishable from \(E_{2}\) because the additive secret sharing is information-theoretically secure.
* \(E_{4}\) (Figure 17): In \(E_{4}\), instead of decrypting \(\textbf{ev}_{i}^{(k^{*})}\), \(\textbf{er}_{i}^{(k^{*})}\) for \(i\in\{i_{0},i_{1}\}\) during Mix, \(\textbf{v}_{i}^{(k^{*})}\), \(\textbf{r}_{i}^{(k^{*})}\) are obtained by directly using their corresponding values in the Enc call for \(i_{0}/i_{1}\). \(E_{4}\) is indistinguishable from \(E_{3}\) because of the correctness of decryption.
* \(E_{5}\) (Figure 18): In \(E_{5}\), instead of decrypting \(\textbf{ev}_{i}^{(k^{*})}\), \(\textbf{er}_{i}^{(k^{*})}\) for \(i\in[n]\setminus\{i_{0},i_{1}\}\) during Mix, \(\textbf{v}_{i}^{(k^{*})}\), \(\textbf{r}_{i}^{(k^{*})}\) are obtained by rerunning the Enc algorithm on input \(\textbf{v}_{i}\) for sender \(S_{i}\) using the randomness from the random tape issued to \(\mathcal{A}\). \(E_{5}\) is indistinguishable from \(E_{4}\) because of the correctness of decryption.
* \(E_{6}\) (Figure 19): In \(E_{6}\), encrypted shares \(\textbf{ev}_{i_{0}}^{(k^{*})},\textbf{er}_{i_{0}}^{(k^{*})},\textbf{ev}_{i_{1}}^{(k^{*})},\textbf{er}_{i_{1}}^{(k^{*})}\) in the Enc calls for \(i_{0},i_{1}\) are replaced by encryptions of \(0\). \(E_{6}\) is indistinguishable from \(E_{5}\) by the IND-CPA security of \(\mathsf{E}\).
* \(E_{7}\) (Figure 20): In \(E_{7}\), responses of \(\mathcal{M}_{k^{*}}\) in the \(\mathsf{E}^{\mathsf{th}}_{\mathsf{EG}}.\mathsf{TDec}\) and \(\mathsf{E}^{\mathsf{th}}_{\mathsf{Pa}}.\mathsf{TDec}\) protocols are simulated by first obtaining the correct decryption \(m\) of the given ciphertext \(c\) using the ideal decryption oracles \(\mathcal{F}_{\mathsf{E}^{\mathsf{th}}_{\mathsf{EG}}.\mathsf{Dec}}\), \(\mathcal{F}_{\mathsf{E}^{\mathsf{th}}_{\mathsf{Pa}}.\mathsf{Dec}}\) and then simulating \(\mathcal{M}_{k^{*}}\)'s responses using \(c,m\) and secret keys for \((\mathcal{M}_{k})_{k\neq k^{*}}\). \(E_{7}\) is indistinguishable from \(E_{6}\) by the security of the threshold decryption protocols of \(\mathsf{E}^{\mathsf{th}}_{\mathsf{EG}}\) and \(\mathsf{E}^{\mathsf{th}}_{\mathsf{Pa}}\). For \(\mathsf{E}^{\mathsf{th}}_{\mathsf{EG}}\), it follows straightforwardly because given \(c=(c_{0},c_{1})\in\mathbb{C}(\mathsf{E}^{\mathsf{th}}_{\mathsf{EG}})\) and \(m=c_{1}/\prod_{k\in[m]}c_{0}^{\mathsf{sk}_{\mathsf{EG}}^{(k)}}\) obtained from \(\mathcal{F}_{\mathsf{E}^{\mathsf{th}}_{\mathsf{EG}}.\mathsf{Dec}}\), the simulator can output \(\frac{c_{1}}{m\prod\limits_{k\neq k^{*}}c_{0}^{\mathsf{sk}_{\mathsf{EG}}^{(k)}}}=\frac{c_{1}\prod\limits_{k\in[m]}c_{0}^{\mathsf{sk}_{\mathsf{EG}}^{(k)}}}{c_{1}\prod\limits_{k\neq k^{*}}c_{0}^{\mathsf{sk}_{\mathsf{EG}}^{(k)}}}=c_{0}^{\mathsf{sk}_{\mathsf{EG}}^{(k^{*})}}\), which is exactly the decryption share output by the real \(\mathcal{M}_{k^{*}}\). For \(\mathsf{E}^{\mathsf{th}}_{\mathsf{Pa}}\), Damgard et al. provide a proof (see Theorem 4; [25]).
* \(E_{8}\) (Figure 21): In \(E_{8}\), instead of using \(\textbf{v}^{\prime}\) given by \(\mathcal{F}_{\mathsf{E}^{\mathsf{th}}_{\mathsf{Pa}}.\mathsf{Dec}}\) during Mix, it is set as \(\textbf{v}^{\prime}\leftarrow(\textbf{v}_{\pi(j)})_{j\in[n]}\), where \(\pi\leftarrow\pi^{(m)}\circ\cdots\circ\pi^{(1)}\) is obtained using the experimenter-selected \(\pi^{(k^{*})}\) and \((\pi^{(k)})_{k\neq k^{*}}\) obtained from the random tape issued to \(\mathcal{A}\), \((\textbf{v}_{i})_{i\in\{i_{0},i_{1}\}}\) are obtained from their values in Enc calls for \(i_{0},i_{1}\), and \((\textbf{v}_{i})_{i\in[n]\setminus\{i_{0},i_{1}\}}\) are obtained from inputs given to \(\mathcal{A}\) for senders \((S_{i})_{i\in[n]\setminus\{i_{0},i_{1}\}}\). \(E_{8}\) is indistinguishable from \(E_{7}\) by the correctness of the Shuffle and \(\mathsf{E}^{\mathsf{th}}_{\mathsf{Pa}}.\mathsf{TDec}\) protocols.
* \(E_{9}\) (Figure 22): In \(E_{9}\), instead of using \(\boldsymbol{\tilde{\sigma}}\) given by \(\mathcal{F}_{\mathsf{E}^{\mathsf{th}}_{\mathsf{EG}}.\mathsf{TDec}}\) during \(\mathsf{DB}\)-\(\mathsf{SM}\), it is set as \(\boldsymbol{\tilde{\sigma}}\leftarrow((\boldsymbol{\sigma}^{\prime}_{\pi^{-1}(i)})^{\mathbf{b}_{i}})_{i\in[n]}\), where \(\boldsymbol{\sigma}^{\prime}\) denotes the Boneh-Boyen signatures sent by \(\mathcal{A}\) at the beginning of \(\mathsf{DB}\)-\(\mathsf{SM}\), \(\pi\) is as obtained in \(E_{8}\) and \(\textbf{b}_{i}:=\sum_{k\in[m]}\textbf{b}_{i}^{(k)}\) is obtained using the experimenter-selected \(\textbf{b}_{i}^{(k^{*})}\) and \((\textbf{b}_{i}^{(k)})_{k\neq k^{*}}\) obtained from the random tape given to \(\mathcal{A}\). \(E_{9}\) is indistinguishable from \(E_{8}\) by the correctness of the Shuffle and \(\mathsf{E}^{\mathsf{th}}_{\mathsf{EG}}.\mathsf{TDec}\) protocols.
* \(E_{10}\) (Figure 23): In \(E_{10}\), instead of using \(\boldsymbol{\tilde{S}}^{\prime},\boldsymbol{\tilde{c}}^{\prime\prime},\boldsymbol{\tilde{r}}^{\prime\prime}\) given by \(\mathcal{F}_{\mathsf{E}^{\mathsf{th}}_{\mathsf{EG}}.\mathsf{TDec}}\) and \(\mathcal{F}_{\mathsf{E}^{\mathsf{th}}_{\mathsf{Pa}}.\mathsf{Dec}}\) during \(\mathsf{DB}\)-\(\mathsf{RSM}\), they are set as \(\boldsymbol{\tilde{S}}^{\prime}\leftarrow(\textbf{S}_{\pi(j)}g_{1}^{\mathbf{b_{S}}_{j}})_{j\in[n]}\), \(\boldsymbol{\tilde{c}}^{\prime\prime}\leftarrow(\textbf{c}_{\pi(j)}+\mathbf{b}^{\prime}_{\mathbf{c}j})_{j\in[n]}\), \(\boldsymbol{\tilde{r}}^{\prime\prime}\leftarrow(\boldsymbol{\hat{r}}_{\pi(j)}+\textbf{r}_{\pi(j)}+\mathbf{b}^{\prime}_{\mathbf{r}j})_{j\in[n]}\), where \((\textbf{S},\boldsymbol{c},\boldsymbol{\hat{r}})\) denote the BBS+ quasi-signatures sent by \(\mathcal{A}\) at the beginning of \(\mathsf{DB}\)-\(\mathsf{RSM}\), \(\pi\) is as obtained in \(E_{8}\), and \(\mathbf{b_{S}}_{j}\leftarrow\sum_{k\in[m]}\mathbf{b_{S}}_{j}^{(k)}\), \(\mathbf{b}^{\prime}_{\mathbf{c}j}\leftarrow\sum_{k\in[m]}\mathbf{b}^{\prime\,(k)}_{\mathbf{c}j}\bmod N\), \(\mathbf{b}^{\prime}_{\mathbf{r}j}\leftarrow\sum_{k\in[m]}\mathbf{b}^{\prime\,(k)}_{\mathbf{r}j}\bmod N\) are obtained using the experimenter-selected \(\mathbf{b_{S}}_{j}^{(k^{*})}\), \(\mathbf{b^{\prime}}_{\mathbf{c}j}^{(k^{*})}:=\mathbf{b_{c}}_{j}^{(k^{*})}+q\mathbf{\chi_{c}}_{j}^{(k^{*})}\), \(\mathbf{b^{\prime}}_{\mathbf{r}j}^{(k^{*})}:=\mathbf{b_{r}}_{j}^{(k^{*})}+q\mathbf{\chi_{r}}_{j}^{(k^{*})}\) and \((\mathbf{b_{S}}_{j}^{(k)},\mathbf{b^{\prime}}_{\mathbf{c}j}^{(k)}:=\mathbf{b_{c}}_{j}^{(k)}+q\mathbf{\chi_{c}}_{j}^{(k)},\mathbf{b^{\prime}}_{\mathbf{r}j}^{(k)}:=\mathbf{b_{r}}_{j}^{(k)}+q\mathbf{\chi_{r}}_{j}^{(k)})_{k\neq k^{*}}\) computed using the random tape given to \(\mathcal{A}\), \((\mathbf{r}_{i})_{i\in\{i_{0},i_{1}\}}\) are obtained from the \(\mathsf{Enc}\) calls for \(i_{0},i_{1}\) and \((\mathbf{r}_{i})_{i\in[n]\setminus\{i_{0},i_{1}\}}\) are obtained from the random tape given to \(\mathcal{A}\). \(E_{10}\) is indistinguishable from \(E_{9}\) by the correctness of the Shuffle, \(\mathsf{E}^{\mathsf{th}}_{\mathsf{EG}}.\mathsf{TDec}\) and \(\mathsf{E}^{\mathsf{th}}_{\mathsf{Pa}}.\mathsf{TDec}\) protocols.
* \(E_{11}\)–\(E_{13}\) (Figures 24 to 27): …
* \(E_{14}\) (Figure 28): In \(E_{14}\), the DPKs for \(p_{\mathsf{BBS+}_{j_{0}}}\) and \(p_{\mathsf{BBS+}_{j_{1}}}\) in \(\mathsf{DB}\)-\(\mathsf{RSM}\) are simulated as follows:
  * For \(j\in\{j_{0},j_{1}\}\), the DPK is simulated completely when it passes; when it does not pass, only its first two equations are simulated.
  * For \(j\in[n]\setminus\{j_{0},j_{1}\}\), the DPK is not simulated, i.e., values \((\mathbf{b_{S}}_{j}^{(k^{*})},\mathbf{b_{c}}_{j}^{(k^{*})},\mathbf{b_{r}}_{j}^{(k^{*})},\delta_{0}^{(k^{*})},\delta_{1}^{(k^{*})},\delta_{2}^{(k^{*})})\) are used exactly as \(\mathcal{M}_{k^{*}}\) does.

\(E_{14}\) is indistinguishable from \(E_{13}\) because \(a)\) by the correctness of the blinded signatures, the DPK for \(j_{0}\) passes iff \(i_{0}=\pi(j_{0})\in I\) (otherwise the blinded signature is fake), \(b)\) by the assert condition in the \(\mathsf{OTraceOut}\) call, the DPK for \(j_{1}\) passes iff the DPK for \(j_{0}\) passes, and \(c)\) even when these DPKs do not pass, their first two equations do pass.
* \(E_{15}\) (Figure 29): In \(E_{15}\), \(\mathfrak{z}_{1}^{(k^{*})}\) in \(\mathsf{DB}\)-\(\mathsf{RSM}\) is chosen uniformly at random from \(\mathbb{Z}_{q}\). \(E_{15}\) is indistinguishable from \(E_{14}\) because of perfect blinding using \(\delta_{0}^{(k^{*})}\).
* \(E_{16}\) (Figure 30): In \(E_{16}\), shares \(\mathbf{v}_{i_{0}}^{(k^{*})},\mathbf{r}_{i_{0}}^{(k^{*})},\mathbf{v}_{i_{1}}^{(k^{*})},\mathbf{r}_{i_{1}}^{(k^{*})}\) during the \(\mathsf{Enc}\) calls for \(i_{0},i_{1}\) are set as \(0\). \(E_{16}\) is indistinguishable from \(E_{15}\) because these shares are not used anymore.
* \(E_{17}\) (Figure 31): In \(E_{17}\), commitments \(\mathbf{\gamma}_{i_{0}},\mathbf{\gamma}_{i_{1}}\) in the above \(\mathsf{Enc}\) calls commit \(0\). \(E_{17}\) is indistinguishable from \(E_{16}\) because Pedersen commitments are perfectly hiding and the committed values are not used anywhere.
* \(E_{18}\) (Figure 32): In \(E_{18}\), \(\mathbf{\tilde{\sigma}}_{i_{0}}\), \(\mathbf{\tilde{\sigma}}_{i_{1}}\) during \(\mathsf{DB}\)-\(\mathsf{SM}\) are replaced by randomly drawn elements from \(\mathbb{G}_{1}\). \(E_{18}\) is indistinguishable from \(E_{17}\) because \(\mathbf{b}_{i_{0}}^{(k^{*})},\mathbf{b}_{i_{1}}^{(k^{*})}\) used in computing \(\mathbf{\tilde{\sigma}}_{i_{0}}\), \(\mathbf{\tilde{\sigma}}_{i_{1}}\) are chosen uniformly at random from \(\mathbb{Z}_{q}\).
* \(E_{19}\) (Figure 33): In \(E_{19}\), \(\mathbf{\tilde{S}}_{j_{0}}^{\prime}\), \(\mathbf{\tilde{S}}_{j_{1}}^{\prime}\) during \(\mathsf{DB}\)-\(\mathsf{RSM}\) are replaced by randomly drawn elements from \(\mathbb{G}_{1}\); \(\mathbf{\tilde{c}}_{j_{0}}^{\prime\prime}\), \(\mathbf{\tilde{c}}_{j_{1}}^{\prime\prime}\) are computed as \(\mathbf{\tilde{c}}_{j_{0}}^{\prime\prime\prime}+\sum_{k\neq k^{*}}\mathbf{b^{\prime} }_{\mathbf{c}j_{0}}^{(k)}\), \(\mathbf{\tilde{c}}_{j_{1}}^{\prime\prime\prime}+\sum_{k\neq k^{*}}\mathbf{b^{\prime} }_{\mathbf{c}j_{1}}^{(k)}\) where \(\mathbf{\tilde{c}}_{j_{0}}^{\prime\prime\prime},\mathbf{\tilde{c}}_{j_{1}}^{\prime \prime\prime}\xleftarrow{\$}\)\(\mathbb{Z}_{q+q^{2}}\) and \((\mathbf{b^{\prime}}_{\mathbf{c}j_{0}}^{(k)},\mathbf{b^{\prime}}_{\mathbf{c}j_{1}}^{(k)})_{k\neq k^ {*}}\) are the ones obtained in \(E_{10}\); and \(\mathbf{\tilde{r}}_{j_{0}}^{\prime\prime},\mathbf{\tilde{r}}_{j_{1}}^{\prime\prime}\) are computed as \(\mathbf{\tilde{r}}_{j_{0}}^{\prime\prime\prime}+\sum_{k\neq k^{*}}\mathbf{b^{\prime} }_{\mathbf{r}j_{0}}^{(k)}\), \(\mathbf{\tilde{r}}_{j_{1}}^{\prime\prime\prime}+\sum_{k\neq k^{*}}\mathbf{b^{\prime} }_{\mathbf{r}j_{1}}^{(k)}\) where \(\mathbf{\tilde{r}}_{j_{0}}^{\prime\prime\prime},\mathbf{\tilde{r}}_{j_{1}}^{\prime \prime\prime}\xleftarrow{\$}\)\(\mathbb{Z}_{2q+q^{2}}\) and \((\mathbf{b^{\prime}}_{\mathbf{r}j_{0}}^{(k)},\mathbf{b^{\prime}}_{\mathbf{r}j_{1}}^{(k)})_{k\neq k ^{*}}\) are the ones obtained in \(E_{10}\). \(E_{19}\) is indistinguishable from \(E_{18}\) because \(\mathbf{b_{S}}_{j_{0}}^{(k^{*})},\mathbf{b_{S}}_{j_{1}}^{(k^{*})}\) used in computing \(\mathbf{\tilde{S}}_{j_{0}}^{\prime}\), \(\mathbf{\tilde{S}}_{j_{1}}^{\prime}\) are chosen uniformly at random from \(\mathbb{Z}_{q}\) and \(\mathbf{b^{\prime}}_{\mathbf{c}j_{0}}^{(k^{*})},\mathbf{b^{\prime}}_{\mathbf{c}j_{1}}^{(k^{*})}\) (resp. \(\mathbf{b^{\prime}}_{\mathbf{r}j_{0}}^{(k^{*})},\mathbf{b^{\prime}}_{\mathbf{r}j_{1}}^{(k^{*})}\)) used in computing \(\mathbf{\tilde{c}}_{j_{0}}^{\prime\prime}\), \(\mathbf{\tilde{c}}_{j_{1}}^{\prime\prime}\) (resp. \(\mathbf{\tilde{r}}_{j_{0}}^{\prime\prime}\), \(\mathbf{\tilde{r}}_{j_{1}}^{\prime\prime\prime}\)) are chosen uniformly at random from an exponentially larger space than the corresponding messages \(\mathbf{c}_{\pi(j_{0})},\mathbf{c}_{\pi(j_{1})}\) (resp. \(\mathbf{r}_{\pi(j_{0})},\mathbf{r}_{\pi(j_{1})}\)). In \(E_{19}\), \(\mathcal{A}\)'s view is identical for both \(b=0\) and \(b=1\) because in this experiment only \(\mathbf{v}^{\prime}\) sent at the end of the \(\mathsf{Mix}\) protocol depends on \(v_{i_{0}},v_{i_{1}}\) and \(\mathbf{v}^{\prime}\) is identically distributed for any choice of \(b\) because \(\pi^{(k^{*})}\) is uniformly distributed over \(\mathsf{Perm}(n)\). Thus, by a standard hybrid argument, the theorem holds. Theorem 4 (Output secrecy): _Under the same assumptions as Theorem 3, \(\Pi_{\mathsf{TM}}\) satisfies output secrecy (Definition 7)._ Proof: In this case, the proof proceeds exactly as above except that in experiment \(E_{13}\), DPKs for both \(i_{0},i_{1}\) are always simulated completely and in experiment \(E_{14}\), DPKs for both \(j_{0},j_{1}\) are always simulated completely. 
\(E_{13}\) and \(E_{14}\) are indistinguishable from their previous experiments because in this case, even though \(\mathcal{A}\) is not restricted by the constraints at \(\mathsf{OTraceIn}\)/\(\mathsf{OTraceOut}\) calls, \(\mathcal{A}\) does not control \(\mathcal{Q}\) anymore and thus does not even learn whether a DPK passed or not, by the property of the DPKs (see Section 3.2). In Appendix B.1 we also sketch a proof that after applying the honest-but-curious to malicious model conversion steps of Appendix B, our construction satisfies secrecy as per Definition 6 against general malicious adversaries.

## 6 Implementation and benchmarks

### Techniques and optimisations

We implemented our DB-SM and DB-RSM protocols using the Charm cryptographic library [1] with the PBC backend [53] for pairing operations. Our comprehensive implementation is available at [https://github.com/agrawalprash/traceable-mixnets](https://github.com/agrawalprash/traceable-mixnets). We chose the BN254 curve [4, 46] to instantiate pairing groups \((\mathbb{G}_{1},\mathbb{G}_{2},\mathbb{G}_{T})\), which gives a group order \(q\) of 254 bits. Recall from Section 3.2 that we are working with the standard threshold ElGamal encryption scheme [30] for \(\mathsf{E}^{\mathsf{th}}_{\mathsf{EG}}\) and the optimised threshold Paillier scheme suggested by Damgard et al. in [25] for \(\mathsf{E}^{\mathsf{th}}_{\mathsf{Pa}}\). For \(\mathsf{E}\), we used an optimised non-threshold Paillier encryption scheme [54], but more efficient non-Paillier schemes could also be chosen as we use \(\mathsf{E}\) as a blackbox. We implemented a slight simplification of the Damgard et al. scheme [25] that works for the \(m\)-out-of-\(m\) threshold case. Also, we did not implement distributed key generation (the scheme in [25] does not directly support it), but since this is a one-time step, it could be done using MPC protocols such as [26]. We now describe some optimisations we implemented while converting our honest-but-curious protocol to the malicious setting (see Appendix B). First, we used standard \(\Sigma\)-protocol techniques to let senders prove knowledge of their uploaded ciphertexts. Second, we used batch-verification techniques of [5, 35] to let each mix-server efficiently verify the validity of the querier's signatures/quasi-signatures. Third, we used the _permutation commitment_-based techniques of [67, 64] to let each mix-server efficiently prove that they created their shuffles consistently. Fourth, we used standard \(\Sigma\)-protocols for proving knowledge of blinding factors during the homomorphic blinding steps. Fifth, during threshold decryption, we used the techniques of [5] to let each mix-server batch-verify the correctness of other mix-servers' decryption shares. We ran all our benchmarks on an Intel(R) Xeon(R) W-1270 CPU @ 3.40GHz with 64 GB RAM on a single core. Figure 11 shows the runtime of different components in DB-SM/DB-RSM calls for \(n=10000\) ciphertexts and \(m=4\) mix-servers, in the worst case when \(I=J=[n]\). The total time for BTraceIn/BTraceOut should thus be double the time taken by DB-SM/DB-RSM.
This can be optimised further by interleaving the steps of DB-SM/DB-RSM executions for the requested set and its complement into a single call: e.g., in DB-SM the querier could generate signature key pairs \(x,y\) for index set \(J\) and \(x_{\mathsf{c}},y_{\mathsf{c}}\) for index set \([n]\setminus J\) and send signatures using key \(x_{\mathsf{c}}\) for \(j\not\in J\) instead of sending fake signatures, avoiding extraneous shuffling/decryption of fake signatures in stage 1. We report per-mix-server times, which capture real-world latencies more accurately as multiple independent operations can be run in parallel at each mix-server and operations such as shuffling can be pipelined. We do not report preprocessing time. We also do not explicitly benchmark proofs of well-formedness of input ciphertexts (see Section 2) because these proofs are generated offline by individual senders and can be efficiently verified before mixing, e.g., using zkSNARKs. Finally, we do not model the authenticated broadcast channel, but since the uploaded datasets are moderate in size and the round complexity is small, this should not be a bottleneck.

### Comparison

To the best of our knowledge, the state-of-the-art technique for our distributed setting is collaborative zkSNARKs [59], which achieve a per-prover time roughly double the prover time of the corresponding single-prover zkSNARK, assuming each prover has a share of the SNARK witness. Thus, we conservatively estimate our performance with respect to collaborative zkSNARKs by comparing against single-prover zkSNARKs for set membership and reverse set membership, where we do not count the time to distribute shares of the SNARK witness in a collaborative zkSNARK approach. We thus implemented zkSNARKs for proving \(\rho_{\mathsf{SM-Acc}}\) and \(\rho_{\mathsf{RSM-Acc}}\) statements for Merkle accumulators (see Section 1.3) using the ZoKrates toolchain [33] and the Groth16 proof system [41]. We used the Baby Jubjub curve [65] here, which has a similar order to BN254 and allows efficient computation of Merkle hashes for commitments (this is required for proving \(\rho_{\mathsf{RSM-Acc}}\)). For creating Merkle hashes, we used the highly optimised Poseidon hash function [40].

Figure 11: Benchmarks for \(n=10000\) input ciphertexts and \(m=4\) mix-servers. Entries marked \(\mathcal{M}_{k}\) and \(\mathcal{Q}\) respectively denote the time taken by each mix-server and the querier.

Figure 12 compares per-mix-server and querier times of our DB-SM and DB-RSM protocols with the prover and verifier times in proving \(n\) \(\rho_{\textsf{SM-Acc}}\) and \(\rho_{\textsf{RSM-Acc}}\) statements using the above zkSNARKs. Our DB-SM and DB-RSM proving times are faster than even these single-prover zkSNARKs by about 43x and 9x respectively, from which we estimate that they will be faster than the corresponding collaborative zkSNARKs by at least 86x and 18x respectively, even discounting the time to distribute witness shares in a collaborative zkSNARK. We also ran the official implementation [43] of the Benarroch et al. scheme [9], which allows proving \(\rho_{\textsf{SM-Acc}}\) in the single-prover setting (but not \(\rho_{\textsf{RSM-Acc}}\)). This takes 2200 s for proving \(n=10000\) \(\rho_{\textsf{SM-Acc}}\) statements and 290 s for verifying them. However, this does not include the time to generate the corresponding RSA accumulator witnesses, which would actually make this scheme \(O(n^{2})\) for batched set membership queries (see Section 1.3).
[43] also provides a faster variant for when the set elements are prime numbers, but this restriction is not justified for our use-case. The batching techniques of [19] also show about 30x speed-up over Merkle-tree based zkSNARKs for proving \(\rho_{\textsf{SM-Acc}}\) using the Poseidon hash function. However, as mentioned in Section 1.3, it is not easy to extend these techniques to the distributed setting because the batch-prover needs to know upfront which commitments passed the set membership test. Finally, if each mix-server and the querier is equipped with multiple cores, they can enjoy the high degree of task parallelism in our technique since DPKs in stage 2 are independent of each other and stage 1 operations require only a fixed number of communications among the cores. ## 7 Conclusion We introduced and formalised the notion of traceable mixnets which extend traditional mixnets to provably answer many useful subset queries in privacy-preserving applications. We also proposed a traceable mixnet construction using novel primitives of distributed set membership and reverse set membership, which we believe will be useful in other settings as well. We implemented our technique and showed that it is significantly faster than the state-of-the-art. Figure 12: Comparison with single-prover zkSNARKs for \(n=10000\) (and \(m=4\) for DB-SM/DB-RSM). ## Acknowledgments We wish to thank Rohit Vaish, Kabir Tomer and Mahesh Sreekumar Rajasree for helpful discussions and comments. We also thank Aarav Varshney for helping set up the benchmarks. The first author is supported by the Pankaj Jalote Doctoral Grant.
2305.10777
Analytical models for pressure-driven Stokes flow through superhydrophobic and liquid infused tubes and annular pipes
Analytical expressions for the velocity field and the effective slip length of pressure-driven Stokes flow through slippery pipes and annuli with rotationally symmetrical longitudinal slits are derived. Specifically, the developed models incorporate a finite local slip length or shear stress along the slits and thus go beyond the assumption of perfect slip commonly employed for superhydrophobic surfaces. Thereby, they provide the possibility to assess the influence of both the viscosity of the air or other fluid that is modelled to fill the slits and the influence of the micro-geometry of these slits. Firstly, expressions for tubes and annular pipes with superhydrophobic or slippery walls are provided. Secondly, these solutions are combined into a tube-within-a-circular-pipe scenario, where one fluid domain provides a slip to the other. This scenario is interesting as an application to achieve stable fluid-fluid interfaces. With respect to modelling, it illustrates the specification of the local slip length depending on a linked flow field. The comparisons of the analytically calculated solutions with numerical simulations show excellent agreement. The results of this article thus represent an important instrument for the design and optimization of slippage along surfaces in circular geometries.
Sebastian Zimmermann, Clarissa Schönecker
2023-05-18T07:38:32Z
http://arxiv.org/abs/2305.10777v1
# Analytical models for pressure-driven Stokes flow through superhydrophobic and liquid infused tubes and annular pipes

###### Abstract

Analytical expressions for the velocity field and the effective slip length of pressure-driven Stokes flow through slippery pipes and annuli with rotationally symmetrical longitudinal slits are derived. Specifically, the developed models incorporate a finite local slip length or shear stress along the slits and thus go beyond the assumption of perfect slip commonly employed for superhydrophobic surfaces. Thereby, they provide the possibility to assess the influence of both the viscosity of the air or other fluid that is modelled to fill the slits and the influence of the micro-geometry of these slits. Firstly, expressions for tubes and annular pipes with superhydrophobic or slippery walls are provided. Secondly, these solutions are combined into a tube-within-a-circular-pipe scenario, where one fluid domain provides a slip to the other. This scenario is interesting as an application to achieve stable fluid-fluid interfaces. With respect to modelling, it illustrates the specification of the local slip length depending on a linked flow field. The comparisons of the analytically calculated solutions with numerical simulations show excellent agreement. The results of this article thus represent an important instrument for the design and optimization of slippage along surfaces in circular geometries.

## 1 Introduction

Flows over micro- or nanostructured surfaces are of great technical importance. For example, grooves, posts or holes are incorporated into no-slip walls whose structures contain a secondary immiscible fluid, as in the case of superhydrophobic surfaces (Shirtcliffe _et al._, 2010), where air is trapped between the primary water flow and the wall structuring. This wetting scenario is called the Cassie state, where the primary fluid wets only the upper surface of the structuring, forming a heterogeneous solid-liquid and solid-gas interface (Cassie & Baxter, 1944). Such configurations lead to a lower surface wettability and thus to a self-cleaning and water-repellent behavior, as for the Lotus leaf (Koch _et al._, 2009). Furthermore, the relative surface fraction of the no-slip wall is reduced and the primary fluid can slide over air cushions on the surface, reducing drag significantly (Rothstein, 2010; Karatay _et al._, 2013; Ou _et al._, 2004). This is of great importance since hydrodynamic drag is directly related to the energy required to transport a fluid through a domain bounded by walls, implying a potentially high economic incentive. Air is a suitable secondary fluid due to its low viscosity. However, air dissipation from the microstructures can occur. The interface then migrates into the microstructure (sagging) and drives the remaining air out. The result is the Wenzel state (Wenzel, 1936), i.e. the primary fluid has conquered and filled all microstructures. The slipping effect is then significantly reduced. Surfaces impregnated with a lubricant instead of air (liquid-infused surfaces - LIS) have proven to be a useful tool to prevent such Cassie-Wenzel transitions (Wong _et al._, 2011) while still allowing slippage (Asmolov _et al._, 2018) and are therefore receiving increasing attention (Hardt & McHale, 2022).
LIS promote anti-icing (Latthe _et al._, 2019) and anti-biofouling (Epstein _et al._, 2012) effects, which are major challenges in various industries such as transportation, agriculture and energy (Ras & Marmur, 2016; Cao _et al._, 2009; Agbe _et al._, 2020). To maximize the area fraction of the secondary fluid and to account for possible interface collapse, numerous microstructures are incorporated onto the surface. Since these are much smaller than the general geometry, complete numerical resolution of the full domain is not feasible. One way to solve this problem is to average the cumulative effect of all microstructures across the patterned wall and implement it into the numerical model using the Navier slip boundary condition \[w=\lambda_{\rm eff}\frac{\partial w}{\partial\mathbf{n}}. \tag{1}\] The effective slip length \(\lambda_{\rm eff}\) can be understood as a virtual depth below the surface where the velocity is extrapolated to zero (figure 1) and connects the velocity \(w\) with its normal derivative on the slip wall. Thus, it is an important input quantity for numerical studies of microstructured surfaces and can be extracted from analytical models.

Figure 1: Schematic illustration of the slip boundary condition.

Superhydrophobic structures of particular importance for applications are parallel longitudinal stripes on pipe surfaces, see figure 3. An important result on this subject is given by Philip (1972_a_), who derived an analytical solution for the pressure-driven flow field of a pipe containing \(N\) rotationally symmetric no-shear wall sections on an otherwise no-slip wall. Lauga & Stone (2003) took up that formula and derived the effective slip length for such a configuration using Philip (1972_b_): \[\tilde{\lambda}_{\rm eff}=\frac{\lambda_{\rm eff}}{R_{0}}=\frac{2}{N}\ln\left(\sec\left(\frac{\theta}{2}\right)\right), \tag{2}\] with the pipe radius \(R_{0}\), the number of no-shear slits \(N\) and the angle \(\theta\) as a measure for the no-shear fraction, with \(\theta/\pi\) being the proportion of the total wall surface occupied by no-shear slits. Crowdy (2021) derives, among other things, analytical solutions for the effective slip length of pressure-driven annular flows containing longitudinal stripes on the inner or outer wall. The former is given by \[\tilde{\lambda}_{\rm eff}=\tilde{R}_{1}\ln(\tilde{R}_{1})-2\tilde{R}_{1}\frac{S}{I_{-1}}, \tag{3}\] with \(\tilde{\lambda}_{\rm eff},\tilde{R}_{1}\) in equation 3 being normalized with respect to the outer radius of the annulus. \(\tilde{R}_{1}\) denotes the dimensionless inner radius of the annulus. The outer wall is set to be the unit disk. \(S\) is a scaling constant and \(I_{-1}\) the coefficient in a Laurent expansion performed by Crowdy (2021), both solely dependent on geometric parameters. For further details, see §2.2.3. However, both approaches assume perfect slip along certain boundary parts, i.e. the primary flow experiences no shear stress there. This is, of course, a condition which cannot be achieved in reality, but represents an ideal limit where the enclosed fluid is decoupled from the bulk flow. The local slip length is infinite. Thus, in such models, it does not matter whether the microstructures are impregnated by air, oil or water, the effect remains unchanged. In addition to the viscous interaction of both fluids, the influence of the microstructure geometry is also not considered, although it plays a crucial role (see Schönecker & Hardt, 2013, 2015).
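For quick estimates, eq. (2) is directly computable. The following is a minimal sketch (Python with NumPy; the function name and example values are illustrative, not from the paper):

```python
import numpy as np

def philip_slip_length(theta, N):
    """Normalized effective slip length of eq. (2) for a pipe with N
    rotationally symmetric no-shear slits; theta = N * (slit half angle)."""
    return (2.0 / N) * np.log(1.0 / np.cos(theta / 2.0))

# Example: two slits with theta = pi/2, i.e. half the wall is no-shear
print(philip_slip_length(np.pi / 2, N=2))  # ~0.3466
```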
To close this gap, this work derives analytic equations for pressure-driven pipe flow fields with rotationally symmetric finite-slip slits on a wall, using a superposition approach. Classical pipe (§2.1) and annular pipe flows (§2.2) are considered, the latter having grooves on the inner wall. The effective slip length as a function of a finite local slip length is given for both cases. Such equations are elementary for practical applications. These finite-slip solutions are then linked so that a coupled velocity field of a pipe-within-a-pipe is obtained (§3). The local slip lengths act as linking coefficients and depend on the connected flow of the other domain. This connection makes a wide range of possible applications computable. Finally, the results of this work are discussed, analyzed and illustrated in §4.

## 2 Mathematical description of the flow field and effective slip length

A pressure-driven Stokes flow \((0,0,w(x,y))\) of a fluid of viscosity \(\mu\) along the \(Z\) axis of an arbitrary Cartesian domain \((x,y,Z)\) having a cross section in the \((x,y)\) plane is governed by the Poisson equation \[\nabla^{2}w(x,y)=-\frac{s}{\mu}, \tag{4}\] where \(-s\) is the negative pressure gradient along the \(Z\) axis and \(\nabla^{2}\) the Laplace operator \[\nabla^{2}=\frac{\partial^{2}}{\partial x^{2}}+\frac{\partial^{2}}{\partial y^{2}}. \tag{5}\] It is assumed that the Reynolds number is sufficiently small to neglect inertial forces acting on the fluid. Velocities are non-dimensionalized with respect to \(sL^{2}/\mu\), with \(L\) being a characteristic length of the respective flow regime. Furthermore, the finite slip boundaries follow the shape given by the pipe geometry. That is, the surface tension is large enough to prevent further curvature of the interface. Due to the linearity of the Poisson equation, the resulting velocity field is also linear under consideration of appropriate boundary conditions. Thus, flow fields can be superposed and the result is also a solution of the governing partial differential equation. Consider \[\mu\nabla^{2}w_{1}+\mu\nabla^{2}w_{2}=\nabla p_{1}+\nabla p_{2}, \tag{3}\] which with additivity \(f(x_{1})+f(x_{2})=f(x_{1}+x_{2})\) yields \[\mu\nabla^{2}(w_{1}+w_{2})=\nabla(p_{1}+p_{2}), \tag{4}\] with the superposed velocity field \(w=w_{1}+w_{2}\) and pressure gradient \(p=p_{1}+p_{2}\). In this work, available no-shear velocity fields are superposed with suitable solutions of the Poisson equation. With this method, previous no-shear solutions are extended in such a way that they possess a finite local slip length instead of an infinite one along their interface.

### 2.1 LIS pipe with patterned wall

A flow in a circular tube with \(N\) longitudinal rotationally symmetric slits on the outer wall is first considered, as illustrated in figure 2a, where \(N\geqslant 1\) is an integer.

#### 2.1.1 Velocity field - No-shear solution

A very important theoretical reference for modelling superhydrophobic or slippery surfaces is Philip (1972_a_). He offers analytic solutions to a variety of mixed boundary-value problems which are composed of a mixture of no-shear and no-slip boundaries. One of these solutions describes the aforementioned circular tube with longitudinal no-shear slits on the outer wall, which will be referred to as Philip's pipe flow hereafter.
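Before writing out Philip's solution, the superposition property underlying this approach can be checked symbolically for the Poiseuille profile that is reused below. A minimal sketch (Python with SymPy assumed; names are illustrative):

```python
import sympy as sp

x, y, s, s1, s2, mu, R = sp.symbols('x y s s1 s2 mu R', positive=True)

# Poiseuille profile: w = s (R^2 - x^2 - y^2) / (4 mu)
w = s * (R**2 - x**2 - y**2) / (4 * mu)
laplacian = sp.diff(w, x, 2) + sp.diff(w, y, 2)
print(sp.simplify(laplacian))  # -> -s/mu, i.e. eq. (4) is satisfied

# Superposition: two such fields driven by s1 and s2 together satisfy
# the Poisson equation with the combined pressure gradient s1 + s2
w12 = w.subs(s, s1) + w.subs(s, s2)
print(sp.simplify(sp.diff(w12, x, 2) + sp.diff(w12, y, 2)))  # -> -(s1+s2)/mu
```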
The velocity field for an arbitrary dimensionless pipe radius \(0<|\tilde{z}|\leqslant\tilde{R}\) can be written as (Lauga & Stone, 2003) \[w_{\rm pc}(\tilde{z})=\frac{sR_{0}^{2}}{\mu}\left(\frac{1}{4}\left(\tilde{R}^{2}-|\tilde{z}|^{2}\right)+\frac{1}{N}\tilde{R}^{2}\tau(\tilde{z})\right) \tag{5}\] where \(R_{0}\) is the dimensional pipe radius, \(\tilde{R}=R/R_{0}\) the normalized outer pipe radius and \(\tilde{z}=z/R_{0}=\tilde{x}+{\rm i}\tilde{y}\) the dimensionless complex coordinate. The index pc indicates the circular Philip solution. The field consists of a rotationally axisymmetric Poiseuille flow superposed by an asymmetric second part, with \(\tau(\tilde{z})\) governing the no-shear slit influence on the flow, given by \[\tau(\tilde{z})=\mathrm{Im}\Bigg{[}\mathrm{cos}^{-1}\left(\frac{\mathrm{cos}(\kappa(\tilde{z}))}{\mathrm{cos}\left(\frac{\theta}{2}\right)}\right)-\kappa(\tilde{z})\Bigg{]} \tag{6}\] and \[\kappa(\tilde{z})=-\frac{\mathrm{i}}{2}\ln\left(\frac{\zeta}{\tilde{R}^{N}}\right)=-\frac{\mathrm{i}}{2}\ln\left(\frac{\tilde{z}^{N}}{\tilde{R}^{N}}\right)=-\frac{\mathrm{i}N}{2}\ln\left(\frac{\tilde{z}}{\tilde{R}}\right). \tag{7}\] \(\zeta=\tilde{z}^{N}\) is the transformed coordinate as a result of the conformal mapping performed by Philip and Im denotes the imaginary part of an expression. As in §1, \(\theta=N\phi\) is a measure for the no-shear fraction, with \(\phi\) being the slit half angle.

Figure 2: LIS pipes with mixed boundary conditions, no-shear (or finite constant shear) slits shown as red bold line segments along the otherwise no-slip boundaries. a) Pipe flow case with \(N=2\). b) The annular pipe flow with \(N=2\) slits on the inner wall.

#### 2.1.2 Velocity field - Finite shear solution

Philip's pipe flow solution has perfect slip along the fluid-fluid interfaces, resulting in an infinite local slip length. As mentioned, this assumption corresponds to an ideal state of fluid-fluid interaction, non-existent in reality (Bolognesi _et al._, 2014). To close this gap, one needs to find a solution with finite slip along the slit parts of the boundary. Such a flow field can be described by a superposition ansatz (Schönecker _et al._, 2014) \[w(\tilde{z})=A_{1}w_{1}(\tilde{z})+A_{2}w_{2}(\tilde{z}), \tag{8}\] with \(w_{1}(\tilde{z})\) being the aforementioned Philip pipe flow solution \(w_{\mathrm{pc}}(\tilde{z})\). The underlying assumption of this model is that the shear stress across the fluid-fluid interface is constant. Such a condition has already been used to describe finite-shear models, such as Schönecker & Hardt (2013). Numerical simulations also show that the shear stress is indeed almost constant along the interface (Schönecker _et al._, 2014; Higdon, 1985). Exceptions are the regions near the corners of the groove. However, these regions are much smaller than the interface, so the deviation from our assumption is negligible. Furthermore, the approach is closer to reality compared to the assumption of a constant slip length along the interface, as the latter inevitably leads to slip length discontinuities at the groove corners. To account for constant shear along the interface, \(w_{2}(\tilde{z})\) is chosen to be Poiseuille flow \[w_{2}(\tilde{z})=\frac{1}{4}\frac{s_{2}R_{0}^{2}}{\mu}\left(\tilde{R}^{2}-|\tilde{z}|^{2}\right).
\tag{9}\] The superposition ansatz thus gives \[w(\tilde{z})=A_{1}\frac{1}{4}\frac{s_{1}R_{0}^{2}}{\mu}\left(\left(\tilde{R}^{2}-|\tilde{z}|^{2}\right)+\frac{\tilde{R}^{2}}{N}\tau(\tilde{z})\right)+A_{2}\frac{1}{4}\frac{s_{2}R_{0}^{2}}{\mu}\left(\tilde{R}^{2}-|\tilde{z}|^{2}\right) \tag{10}\] with constants \(A_{1}\) and \(A_{2}\) to be determined. Note that \(w_{1}\) and \(w_{2}\) are driven by (different) pressure gradients \(s_{1}\) and \(s_{2}\), respectively. Under the condition that the combined flow is driven by a combined pressure gradient \[s=A_{1}\ s_{1}+A_{2}\ s_{2}, \tag{11}\] and that the Navier-slip condition applies in the center of the slit at \(\tilde{\mathfrak{z}}=\tilde{R}+i0\) \[w(\tilde{\mathfrak{z}})=\lambda\frac{\partial w(\tilde{\mathfrak{z}})}{-\partial\boldsymbol{n}}, \tag{12}\] the constants \(A_{1}\) and \(A_{2}\) are readily determined. The resulting superposed velocity field solution is \[w(\tilde{z})=\frac{sR_{0}^{2}}{\mu}\left(\frac{1}{4}\left(\tilde{R}^{2}-|\tilde{z}|^{2}\right)+\alpha\frac{\tilde{R}^{2}}{N}\tau(\tilde{z})\right), \tag{13}\] and normalized \[\tilde{w}(\tilde{z})=w(\tilde{z})\left(\frac{sR_{0}^{2}}{\mu}\right)^{-1}=\frac{1}{4}\left(\tilde{R}^{2}-|\tilde{z}|^{2}\right)+\alpha\frac{\tilde{R}^{2}}{N}\tau(\tilde{z}), \tag{14}\] with \[\alpha=\frac{\tilde{\lambda}N}{\tilde{\lambda}N+2\tilde{R}\tau(\tilde{\mathfrak{z}})}, \tag{15}\] where \(\tilde{\lambda}=\lambda/R_{0}\) is the dimensionless local slip length at the slit centre and \[\tau(\tilde{R}+\mathrm{i}0)=\tau(\tilde{\mathfrak{z}})=\cosh^{-1}\left(\sec\left(\frac{\theta}{2}\right)\right). \tag{16}\] Comparing the superposed flow field with Philip's pipe flow, we find that both solutions differ only by the coefficient \(\alpha\) in the second term. As \(\tilde{\lambda}\to\infty\), \(\alpha\) converges to \(1\) and \(w(\tilde{z})\to w_{\mathrm{pc}}(\tilde{z})\), yielding the no-shear solution as it should. \(\alpha\) can therefore be interpreted as an imperfection coefficient adjusting the slit influence on the velocity field depending on a potentially finite local slip length.

#### 2.1.3 Effective slip length

With the flow field in dependence on a local slip length at hand, it is now possible to calculate the effective slip length. It represents the averaged influence of the slip boundary parts on the velocity field and involves equating the total volume flux \(\dot{V}\) caused by a given pressure gradient to that of a suitable comparison flow \((0,0,w^{*}(x,y))\) with the same pressure gradient, containing a no-slip wall at \(|z|=R\). The effective slip length is the value of \(\lambda_{\mathrm{eff}}\) for which \(\dot{V}^{*}=\dot{V}(\lambda_{\mathrm{eff}})\). In our case, however, it is not necessary to calculate the volume flow. Instead we consider a pressure-driven pipe flow with some constant effective slip length \(\lambda_{\mathrm{eff}}\) at the outer wall. This will provide a simple formula that can be used to calculate the effective slip length. First, a general axisymmetric pipe flow solution is given by \[w^{*}=-\frac{s}{4\mu}|z|^{2}+c_{1}\ln(|z|)+c_{2}, \tag{17}\] which, subject to \[\frac{\partial w^{*}(|z|=0)}{\partial|z|}=0,\qquad w^{*}(|z|=R)=\lambda_{\mathrm{eff}}\frac{\partial w^{*}(|z|=R)}{-\partial|z|},\] (18a,b) yields \[w^{*}=\frac{1}{4}\frac{s}{\mu}(R^{2}-|z|^{2})+\frac{1}{2}\frac{s}{\mu}R\lambda_{\mathrm{eff}}, \tag{19}\] which corresponds to a pipe flow with constant slip \(\lambda_{\mathrm{eff}}\) at the outer wall.
This can be interpreted as the effective slip length. From equation 19, an expression for \(\lambda_{\mathrm{eff}}\) at \(|z|=R\) is easily determined to be \[\lambda_{\mathrm{eff}}=\frac{\mu}{s}\frac{2}{R}w^{*}(R). \tag{20}\] As mentioned above, all groove influence is abstracted to an average effect on the given wall. So all that is left to do is to average equation 13 along the boundary \[w^{*}(R)=\frac{1}{2\pi}\int_{0}^{2\pi}w(R)d\varphi=\alpha\frac{s}{\mu}\frac{R^{2}}{N}\frac{1}{2\pi}\int_{0}^{2\pi}\tau(\tilde{z})\ \mathrm{d}\varphi, \tag{21}\] where \(\varphi\) is the coordinate angle. Solving the above integral involves using the integral relation derived by Philip (1972_b_), stating that the averaged slip influence is given by \[\frac{1}{2\pi}\int_{0}^{2\pi}\tau(\tilde{z})\ \mathrm{d}\varphi=\bar{\tau}(\tilde{z})=\ln\left(\sec\left(\frac{\theta}{2}\right)\right). \tag{22}\] The resulting normalized effective slip length \(\tilde{\lambda}_{\mathrm{eff}}=\lambda_{\mathrm{eff}}/R\) is \[\tilde{\lambda}_{\mathrm{eff}}=\alpha\frac{2}{N}\ln\left(\sec\left(\frac{\theta}{2}\right)\right). \tag{23}\] Equation 23 again transitions into Philip's solution if the local slip length diverges. However, there is a more straightforward way to calculate the effective slip length. Equation 20 shows that the Poiseuille part does not contribute to the effective slip length, since it is zero at \(|z|=R\), while the second part is non-zero on the boundary and contributes with the factor \(\alpha\). The normalized slip length can therefore be directly determined by \[\tilde{\lambda}_{\mathrm{eff}}=\alpha\ \tilde{\lambda}_{\mathrm{eff},P}, \tag{24}\] with \(\tilde{\lambda}_{\mathrm{eff},P}\) being the effective slip length of Philip's (Philip 1972_a_) no-shear solution.

#### 2.1.4 Volume flow

Another useful result is the total volume flux generated by the superposed flow of eq. 13. For this purpose we consider the volume flow to be given by \[\dot{V}=2\pi\int_{0}^{R}w(x,y)\ r\ \mathrm{d}r=2\frac{s\pi}{\mu}\int_{0}^{R}\left(\frac{1}{4}\left(R^{2}-|z|^{2}\right)+\alpha\frac{R^{2}}{N}\bar{\tau}(\tilde{z})\right)\ r\ \mathrm{d}r, \tag{25}\] where we again use the integral identity of Philip (1972_b_). The associated total volume flux therefore is \[\dot{V}=\frac{s\pi}{\mu}\left(\frac{1}{8}R^{4}+\alpha\frac{R^{4}}{N}\ln\left(\sec\left(\frac{\theta}{2}\right)\right)\right). \tag{26}\] As in equation 13, with \(\tilde{\lambda}\to\infty\) the volume flux transitions into the Philip solution (Philip 1972_b_), representing the ideal no-shear limit. With the comparison volume flux of a Poiseuille flow with no-slip at the outer wall being \[\dot{V}_{\mathrm{ns}}=\frac{1}{8}\frac{s\pi}{\mu}R^{4}, \tag{27}\] the effective slip length can alternatively be calculated with \[\lambda_{\mathrm{eff}}=\frac{R}{4}\left(\frac{\dot{V}}{\dot{V}_{\mathrm{ns}}}-1\right), \tag{28}\] which also results in equation 23.
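The closed-form expressions of this section are straightforward to evaluate numerically. A minimal sketch (Python with NumPy; function names are illustrative) of the imperfection coefficient of eq. (15), with \(\tau(\tilde{\mathfrak{z}})\) from eq. (16), and the effective slip length of eq. (23):

```python
import numpy as np

def alpha(lam, theta, N, R=1.0):
    """Imperfection coefficient, eq. (15); lam is the normalized local slip
    length at the slit centre, R the normalized outer pipe radius."""
    tau_c = np.arccosh(1.0 / np.cos(theta / 2.0))   # eq. (16)
    return lam * N / (lam * N + 2.0 * R * tau_c)

def eff_slip_pipe(lam, theta, N):
    """Normalized effective slip length of the finite-shear pipe, eq. (23)."""
    return alpha(lam, theta, N) * (2.0 / N) * np.log(1.0 / np.cos(theta / 2.0))

# alpha -> 1 recovers Philip's no-shear limit, alpha -> 0 gives no slip
for lam in (0.0, 0.1, 1.0, 1e6):
    print(lam, eff_slip_pipe(lam, theta=np.pi / 2, N=2))
```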
### 2.2 Annular LIS pipe with patterned inner wall

A second flow regime of great interest is an annular slippery pipe with \(N\) rotationally symmetric slits on the inner boundary wall, as illustrated in figure 2b.

#### 2.2.1 Velocity field - No-shear solution

Crowdy (2021) provides an analytic flow field solution for annular superhydrophobic pipes of radius \(\tilde{R}_{1}\leq|\tilde{z}|\leq\tilde{R}_{2}\) with \(\tilde{R}_{2}=1\) and \(N\geq 1\) no-shear boundary slits on the inner wall. The velocity field is given by \[w_{\rm ca}(\tilde{z})=\frac{sR_{0}^{2}}{\mu}\left(\frac{1}{4}\left(1-|\tilde{z}|^{2}\right)+\frac{1}{2}{\rm Re}(H(\zeta))\right), \tag{29}\] where the subscript ca indicates the Crowdy annular flow field (Crowdy, 2021). Like equation 5, this solution consists of a rotationally symmetric Poiseuille flow and a superposed asymmetric second term with the real part \({\rm Re}\) of an analytic function \(H(\zeta)\). This function represents the slit influence on the flow field and is \[H(\zeta)=\frac{1}{N}\int_{-1}^{\zeta}\left[\tilde{R}_{1}^{2}-M\left(\frac{P\left(\frac{\zeta^{\prime}}{q},q\right)P\left(\frac{\zeta^{\prime}}{q},q\right)}{P\left(\frac{\zeta^{\prime}}{a},q\right)P\left(\frac{\zeta^{\prime}}{a},q\right)}\right)^{1/2}\right]\frac{\mathrm{d}\zeta^{\prime}}{\zeta^{\prime}}, \tag{30}\] where \(a=\tilde{R}_{1}^{N}e^{i\theta}\) and \(q=\tilde{R}_{1}^{N}\). The angle \(\theta=\phi N\) is defined in the same way as for Philip's pipe solution. The variable \(\zeta=z^{N}\) is, as before, the transformed complex coordinate. \(M\) is a scaling factor determined by imposing \(w_{\rm ca}(x,y)=0\) on the no-slip portions of the inner wall \[M=\frac{\frac{(1-\tilde{R}_{1}^{2})}{4}+\frac{1}{2}\tilde{R}_{1}^{2}\ln(\tilde{R}_{1})}{S}, \tag{31}\] with \[S=\frac{1}{2N}\int_{-1}^{-q}\left(\frac{P\left(\frac{\zeta}{q},q\right)P\left(\frac{\zeta}{q},q\right)}{P\left(\frac{\zeta}{a},q\right)P\left(\frac{\zeta}{a},q\right)}\right)^{1/2}\frac{d\zeta}{\zeta}. \tag{32}\] \(P(\zeta,q)\) is called the prime function for the concentric annulus and is defined as a convergent infinite product for any \(\zeta\neq 0\) \[P(\zeta,q)=(1-\zeta)\prod_{n=1}^{\infty}(1-q^{2n}\cdot\zeta)(1-q^{2n}/\zeta),\quad 0\leqslant q<1. \tag{33}\] For further information on prime functions, see Crowdy (2020).
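Since all geometric constants of this solution are built from \(P(\zeta,q)\), a truncated evaluation of the infinite product in eq. (33) is often useful; the factors approach unity geometrically for \(0\leqslant q<1\), so a modest number of terms suffices. A minimal sketch (Python with NumPy; the truncation depth is an assumption, not from the paper):

```python
import numpy as np

def prime_function(zeta, q, n_terms=60):
    """Truncated infinite product of eq. (33) for the annulus prime
    function P(zeta, q), valid for zeta != 0 and 0 <= q < 1."""
    zeta = np.asarray(zeta, dtype=complex)
    P = 1.0 - zeta
    for n in range(1, n_terms + 1):
        q2n = q**(2 * n)
        P *= (1.0 - q2n * zeta) * (1.0 - q2n / zeta)
    return P

# Example: evaluate on a circle inside the annulus q < |zeta| < 1
q = 0.5**2                      # e.g. R1 = 0.5 and N = 2, so q = R1^N
zeta = 0.6 * np.exp(1j * np.linspace(0.0, 2.0 * np.pi, 5))
print(prime_function(zeta, q))
```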
#### 2.2.2 Velocity field - Finite shear solution

As for Philip's pipe flow, we seek a solution that does not necessarily impose an infinite local slip length on the slit portions of the boundary, but rather has a potentially finite local slip length. As before, constant shear stress along the fluid-fluid interface is assumed. We again choose a superposition ansatz \[w(\tilde{z})=B_{1}\ w_{1}(\tilde{z})+B_{2}\ w_{2}(\tilde{z}), \tag{34}\] with \(w_{1}(\tilde{z})\) being the no-shear solution for the annulus provided by Crowdy (2021). \(w_{2}(\tilde{z})\) is chosen to be the Poiseuille solution for an annular domain, with \(w_{2}(\tilde{z})=0\) at \(|\tilde{z}|=\tilde{R}_{1}\) and \(|\tilde{z}|=1\), resulting in \[w_{2}(\tilde{z})=\frac{1}{4}\frac{s_{2}R_{0}^{2}}{\mu}\left(\left(1-|\tilde{z}|^{2}\right)-\left(1-\tilde{R}_{1}^{2}\right)\frac{\ln(|\tilde{z}|)}{\ln(\tilde{R}_{1})}\right). \tag{35}\] The superposition ansatz thus gives \[w(\tilde{z})=B_{1}\frac{s_{1}R_{0}^{2}}{\mu}\left(\frac{1}{4}(1-|\tilde{z}|^{2})+\frac{1}{2}{\rm Re}(H(\zeta))\right)+B_{2}\frac{1}{4}\frac{s_{2}R_{0}^{2}}{\mu}\left(\left(1-|\tilde{z}|^{2}\right)-\left(1-\tilde{R}_{1}^{2}\right)\frac{\ln(|\tilde{z}|)}{\ln(\tilde{R}_{1})}\right), \tag{36}\] with constants \(B_{1}\) and \(B_{2}\) again to be determined. \(w_{1}\) and \(w_{2}\), as before, are driven by the pressure gradients \(s_{1}\) and \(s_{2}\), respectively. Under the condition that the combined flow is driven by a combined pressure gradient \[s=B_{1}\ s_{1}+B_{2}\ s_{2}, \tag{37}\] and with the Navier-slip condition imposed at the center of the slit \(\tilde{\mathfrak{z}}=\tilde{R}_{1}+i0\) \[w(\tilde{\mathfrak{z}})=\lambda\frac{\partial w(\tilde{\mathfrak{z}})}{\partial\boldsymbol{n}}, \tag{38}\] the constants \(B_{1}\) and \(B_{2}\) are determined, yielding the corresponding superposed flow field \[w(\tilde{z})=\frac{1}{4}\frac{sR_{0}^{2}}{\mu}\left((1-|\tilde{z}|^{2})+2\beta_{1}\text{Re}(H(\zeta))-\beta_{2}\left(1-\tilde{R}_{1}^{2}\right)\frac{\ln(|\tilde{z}|)}{\ln(\tilde{R}_{1})}\right), \tag{39}\] with \[\beta_{1}=\frac{\tilde{\lambda}\left(\tilde{R}_{1}^{2}+2\tilde{R}_{1}^{2}\ln\left(\frac{1}{\tilde{R}_{1}}\right)-1\right)}{\left[\tilde{R}_{1}^{2}\tilde{\lambda}-\tilde{\lambda}+\tilde{R}_{1}\ln\left(\frac{1}{\tilde{R}_{1}}\right)\left(\tilde{R}_{1}^{2}-2\text{Re}(H(q))+2\tilde{\lambda}\tilde{R}_{1}-1\right)\right]}, \tag{40}\] and \[\beta_{2}=(1-\beta_{1})=\frac{\tilde{R}_{1}\ln\left(\frac{1}{\tilde{R}_{1}}\right)\left(\tilde{R}_{1}^{2}-2\text{Re}(H(q))-1\right)}{\left[\tilde{R}_{1}^{2}\tilde{\lambda}-\tilde{\lambda}+\tilde{R}_{1}\ln\left(\frac{1}{\tilde{R}_{1}}\right)\left(\tilde{R}_{1}^{2}-2\text{Re}(H(q))+2\tilde{\lambda}\tilde{R}_{1}-1\right)\right]}, \tag{41}\] where \(\tilde{\lambda}=\lambda/R_{0}\) is the normalized local slip length and \(H(q)\) is the value of the analytic function \(H(\zeta)\) evaluated at the center of the slit boundary portion, at coordinate \(\tilde{\mathfrak{z}}\), corresponding to \(\zeta(\tilde{\mathfrak{z}})=q\). Analysing the behaviour of equation 39 in dependence of the coefficients \(\beta_{1}\) and \(\beta_{2}\) shows that for \(\tilde{\lambda}\rightarrow\infty\), \(\beta_{1}\to 1\) and \(\beta_{2}\to 0\), delivering the no-shear solution of equation 29. Both coefficients can therefore be referred to as weighting coefficients, steering the influence of the no-shear solution 29 and the no-slip solution 35 on the superposed flow field. The normalized velocity field is easily calculated to be \[\tilde{w}(\tilde{z})=\frac{1}{4}\left((1-|\tilde{z}|^{2})+2\beta_{1}\text{Re}(H(\zeta))-\beta_{2}\left(1-\tilde{R}_{1}^{2}\right)\frac{\ln(|\tilde{z}|)}{\ln(\tilde{R}_{1})}\right). \tag{42}\]

#### 2.2.3 Effective slip length

As with the pipe flow in §2.1.3, a suitable comparison flow is needed to determine the effective slip length for the superposed annular flow above. For that, the general axisymmetric solution of eq. 17 is solved again, with velocities normalized with respect to \(sR_{0}^{2}/\mu\) and length scales with \(R_{0}\). This solution is subject to no-slip on the wall at \(|\tilde{z}|=\tilde{R}_{2}=1\) and a Navier-slip boundary condition at \(|\tilde{z}|=\tilde{R}_{1}\), so \[\tilde{w}^{*}(|\tilde{z}|=\tilde{R}_{2})=0,\qquad\tilde{w}^{*}(|\tilde{z}|=\tilde{R}_{1})=\tilde{\lambda}\ \frac{\partial\tilde{w}^{*}(|\tilde{z}|=\tilde{R}_{1})}{\partial|\tilde{z}|}.\] (43a,b) Implementing both boundary conditions yields an annular axisymmetric solution with a constant slip length at \(|\tilde{z}|=\tilde{R}_{1}\), which can be interpreted as an effective slip along that boundary, \(\tilde{\lambda}\equiv\tilde{\lambda}_{\text{eff}}\). The solution for the flow field is found to be \[\tilde{w}^{*}=\frac{1-|\tilde{z}|^{2}}{4}-\tilde{R}_{1}\ln(|\tilde{z}|)\frac{\tilde{R}_{1}^{2}-2\tilde{R}_{1}\tilde{\lambda}_{\mathrm{eff}}-1}{4(\tilde{\lambda}_{\mathrm{eff}}-\tilde{R}_{1}\ln(\tilde{R}_{1}))}.
\tag{44}\] Evaluating this solution at the inner slip wall \(|\tilde{z}|=\tilde{R}_{1}\) gives \[\tilde{w}^{*}(|\tilde{z}|=\tilde{R}_{1})=\frac{1-\tilde{R}_{1}^{2}}{4}-\tilde{R}_{1}\ln(\tilde{R}_{1})\frac{\tilde{R}_{1}^{2}-2\tilde{R}_{1}\tilde{\lambda}_{\mathrm{eff}}-1}{4(\tilde{\lambda}_{\mathrm{eff}}-\tilde{R}_{1}\ln(\tilde{R}_{1}))}. \tag{45}\] A rearrangement provides a formula for the effective slip length on the inner wall of the annulus \(\tilde{R}_{1}\leq|\tilde{z}|\leq 1\) as a function of the normalized velocity \(\tilde{w}^{*}\) on that very wall \[\tilde{\lambda}_{\mathrm{eff}}=-\frac{4\tilde{R}_{1}\ln(\tilde{R}_{1})\tilde{w}^{*}}{(1-\tilde{R}_{1}^{2}-4\tilde{w}^{*}+2\tilde{R}_{1}^{2}\ln(\tilde{R}_{1}))}. \tag{46}\] As mentioned earlier, the effective slip length abstracts the cumulative groove influence on the flow field as an average effect along the respective wall. Equation 42, however, is based on mixed boundary conditions along the groove-containing wall. It is easy to see that the velocity is not constant, since it is zero at the no-slip portions and non-zero along the slits. Therefore, it is necessary to average the velocity along \(|\tilde{z}|=\tilde{R}_{1}\) and thus indirectly the groove effect. A closer look at equations 29 and 42 reveals that the only rotationally asymmetric terms are those containing the real part of the analytic function \(H(\zeta)\). The no-shear and finite-shear solutions are thus closely related with respect to their asymmetric behaviour. Therefore, it makes sense to first consider the effective slip length of the former. The finite-slip solution can then be considered as a simple extension of it. Reconstructing the effective slip length of the no-shear solution derived by Crowdy (2021), we must first identify the average of \(\mathrm{Re}(H(\zeta))\) at \(|\tilde{z}|=\tilde{R}_{1}\) along the inner wall \[\mathrm{Re}(H(\zeta))_{\mathrm{avg}}=\frac{1}{2\pi}\int_{0}^{2\pi}\mathrm{Re}(H(\zeta))\mathrm{d}\varphi. \tag{47}\] Although not explicitly given in Crowdy (2021), it is readily determined to be \[\mathrm{Re}(H(\zeta))_{\mathrm{avg}}=\tilde{R}_{1}^{2}\ln(\tilde{R}_{1})-MI_{-1}\ln(\tilde{R}_{1}), \tag{48}\] with the aforementioned scaling factor \(M\) from equation 31 and \[I_{-1}=\frac{1}{2\pi i}\oint_{C}\left(\frac{P\left(\frac{\zeta}{q},q\right)P\left(\frac{\zeta}{q},q\right)}{P\left(\frac{\zeta}{a},q\right)P\left(\frac{\zeta}{a},q\right)}\right)^{1/2}\frac{\mathrm{d}\zeta}{\zeta}, \tag{49}\] where \(C\) is any closed circle inside the annulus \(q<|\zeta|<1\) enclosing the origin (Crowdy, 2021). Accordingly, the averaged velocity for the no-shear solution along the inner wall is given by \[\tilde{w}_{\mathrm{ca}}^{*}(|\tilde{z}|=\tilde{R}_{1})=\frac{1}{4}(1-\tilde{R}_{1}^{2})+\frac{1}{2}(\tilde{R}_{1}^{2}\ln(\tilde{R}_{1})-MI_{-1}\ln(\tilde{R}_{1})), \tag{50}\] and the associated normalized effective slip length for the no-shear case is \[\tilde{\lambda}_{\mathrm{eff},\mathrm{ca}}=\frac{\lambda_{\mathrm{eff},\mathrm{ca}}}{R_{0}}=\tilde{R}_{1}\ln(\tilde{R}_{1})-2\tilde{R}_{1}\frac{S}{I_{-1}}, \tag{51}\] with \(S\) from equation 32, as given in Crowdy (2021). Determining the effective slip for the superposed annular flow is performed similarly. Equation 42 shows that the annular flow multiplied by \(\beta_{2}\) (third term in brackets) does not contribute to \(\lambda_{\mathrm{eff}}\), since it is zero at the inner wall.
Therefore \[\tilde{w}^{*}=\beta_{1}\tilde{w}_{\mathrm{ca}}^{*}=\beta_{1}\frac{1}{4}(1-\tilde{R}_{1}^{2})+\beta_{1}\frac{1}{2}(\tilde{R}_{1}^{2}\ln(\tilde{R}_{1})-MI_{-1}\ln(\tilde{R}_{1})) \tag{52}\] at \(|\tilde{z}|=\tilde{R}_{1}\). From that, with equation 46, the effective slip length for the finite shear case is easily calculated to be \[\tilde{\lambda}_{\rm eff}=\frac{\lambda_{\rm eff}}{R_{0}}=\tilde{R}_{1}\ln(\tilde{R}_{1})\beta_{1}\frac{I_{-1}\ln(\tilde{R}_{1})-2S}{I_{-1}\ln(\tilde{R}_{1})\beta_{1}-2S(\beta_{1}-1)}. \tag{53}\] For \(\beta_{1}\to 1\), representing an infinite local slip length on the slit boundary parts, equation 53 converges to the effective slip length of the no-shear solution in equation 51, as expected.
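Equations (40) and (53) can likewise be evaluated directly once the geometry constants are known. A minimal sketch (Python with NumPy; names are illustrative), where \(\mathrm{Re}(H(q))\), \(S\) and \(I_{-1}\) are treated as precomputed inputs (they follow from eqs. (30), (32) and (49), e.g. via numerical quadrature of the prime-function integrands):

```python
import numpy as np

def beta1(lam, R1, ReHq):
    """Weighting coefficient of eq. (40); ReHq = Re(H(q)) must be supplied."""
    L = np.log(1.0 / R1)
    num = lam * (R1**2 + 2.0 * R1**2 * L - 1.0)
    den = (R1**2 * lam - lam
           + R1 * L * (R1**2 - 2.0 * ReHq + 2.0 * lam * R1 - 1.0))
    return num / den

def eff_slip_annulus(lam, R1, ReHq, S, I_m1):
    """Effective slip length on the inner wall, eq. (53); S and I_m1 are the
    geometry constants of eqs. (32) and (49)."""
    b1 = beta1(lam, R1, ReHq)
    return (R1 * np.log(R1) * b1 * (I_m1 * np.log(R1) - 2.0 * S)
            / (I_m1 * np.log(R1) * b1 - 2.0 * S * (b1 - 1.0)))
```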
13) is scaled to be of radius \(\tilde{R}_{1}\), which corresponds to the inner radius of the annular flow solution. It is given by \[w_{a}(\tilde{z})=\frac{s_{a}R_{0}^{2}}{\mu_{a}}\left(\frac{1}{4}\left(\tilde{R}_{1}^{2}-|\tilde{z}|^{2}\right)+\alpha(\tilde{\Lambda}_{a})\frac{\tilde{R}_{1}^{2}}{N}\tau(\tilde{z})\right), \tag{3.1}\] with \(s_{a},\mu_{a}\) being the negative pressure gradient and viscosity within fluid domain \(a\). The imperfection coefficient \(\alpha\) is now defined in dependence on the normalized local connection slip length \(\tilde{\Lambda}_{a}=\Lambda_{a}/R_{0}\) for the inner pipe. \(R_{0}\) is set to be the dimensional radius of the outer pipe. The outer pipe flow is governed by the annular pipe flow solution of eq. 39, \[w_{b}(\tilde{z})=\frac{1}{4}\frac{s_{b}R_{0}^{2}}{\mu_{b}}\left((1-|\tilde{z}|^{2})+2\beta_{1}(\tilde{\Lambda}_{b})\mathrm{Re}(H(\zeta))-\beta_{2}(\tilde{\Lambda}_{b})\left(1-\tilde{R}_{1}^{2}\right)\frac{\ln(|\tilde{z}|)}{\ln(\tilde{R}_{1})}\right), \tag{3.2}\] with \(s_{b},\mu_{b}\) of domain \(b\) and \(\beta_{1},\beta_{2}\) depending on the normalized local connection slip length \(\tilde{\Lambda}_{b}=\Lambda_{b}/R_{0}\) for the outer pipe flow. A coupling of both flows has two unknowns, \(\tilde{\Lambda}_{a}(w_{b})\) and \(\tilde{\Lambda}_{b}(w_{a})\). Two connection conditions are consequently needed. Specifically, both the velocity and the shear stress at a single point on the interface must be equal, so \[w_{a}(\tilde{\mathfrak{z}})=w_{b}(\tilde{\mathfrak{z}}),\qquad\mu_{a}\frac{\partial w_{a}(\tilde{\mathfrak{z}})}{-\partial\mathbf{n}}=-\mu_{b}\frac{\partial w_{b}(\tilde{\mathfrak{z}})}{\partial\mathbf{n}},\] (3.3a,b) evaluated in the centre of the groove at \(\tilde{\mathfrak{z}}=\tilde{R}_{1}+i0\). Solving the system of equations (eqs. 3.3a,b) yields expressions for the local slip lengths \[\tilde{\Lambda}_{a}(w_{b})=\mu_{a}\Omega_{a},\qquad\tilde{\Lambda}_{b}(w_{a})=\mu_{b}\Omega_{b},\] (3.4a,b) where \(\Omega_{a},\Omega_{b}\) are the connection coefficients of fluid domains \(a\) and \(b\), respectively. They are given by \[\Omega_{a}=-\frac{2\tilde{R}_{1}\cosh^{-1}\left(\sec\left(\frac{\theta}{2}\right)\right)\left(\tilde{R}_{1}^{2}-2\mathrm{Re}(H(q))-1\right)\biggl{(}(\tilde{R}_{1}^{2}-1)s_{b}+2\tilde{R}_{1}^{2}\ln\left(\frac{1}{\tilde{R}_{1}}\right)(s_{b}-s_{a})\biggr{)}}{\left(\tilde{R}_{1}^{2}+2\tilde{R}_{1}^{2}\ln\left(\frac{1}{\tilde{R}_{1}}\right)-1\right)\biggl{(}Ns_{b}\mu_{a}(\tilde{R}_{1}^{2}-2\mathrm{Re}(H(q))-1)+4\tilde{R}_{1}^{2}s_{a}\mu_{b}\cosh^{-1}\left(\sec\left(\frac{\theta}{2}\right)\right)\biggr{)}}, \tag{3.5}\] and \(\Omega_{a}=-\Omega_{b}\). Both coefficients depend only on already known quantities and can easily be evaluated. The local connection slip lengths are in proportion to the viscosities of the two fluid domains. Although unequal pressure gradients \(s_{a},s_{b}\) are not explicitly excluded in equation 3.5, the original assumption of neglecting additional flow-induced interface curvature is no longer valid if \(s_{a}\gg s_{b}\) or \(s_{a}\ll s_{b}\). However, these equations can be used to consider cases where the local pressure difference \(\Delta s=|s_{a}-s_{b}|\) at the fluid-fluid interface is less than the Laplace pressure. Figure 3: Fluid domain connection - Pipe flow (domain a) and annular flow (domain b) are connected via two rotationally symmetric shear slits on the inner wall, illustrated as bold red slits.
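To make equations 3.4(a,b) and 3.5 concrete, the following is a minimal Python sketch that evaluates the connection coefficient and the resulting local connection slip lengths. \(\mathrm{Re}(H(q))\) is treated as a precomputed input (it follows from the prime-function solution of section §2); the numerical value used in the example is a placeholder for illustration only.

```python
import math

def connection_slip_lengths(R1, theta, N, mu_a, mu_b, s_a, s_b, ReHq):
    """Evaluate eqs. 3.4(a,b) via the connection coefficient of eq. 3.5.

    R1     : normalized inner radius R~_1
    theta  : slit angle
    N      : number of slits
    mu_a/b : viscosities of the inner/outer fluid domain
    s_a/b  : negative pressure gradients of both domains
    ReHq   : Re(H(q)), assumed precomputed from the section-2 solution
    """
    c = math.acosh(1.0 / math.cos(theta / 2.0))    # cosh^-1(sec(theta/2))
    A = R1**2 - 2.0 * ReHq - 1.0
    log_term = math.log(1.0 / R1)
    num = 2.0 * R1 * c * A * ((R1**2 - 1.0) * s_b
                              + 2.0 * R1**2 * log_term * (s_b - s_a))
    den = ((R1**2 + 2.0 * R1**2 * log_term - 1.0)
           * (N * s_b * mu_a * A + 4.0 * R1**2 * s_a * mu_b * c))
    omega_a = -num / den
    omega_b = -omega_a                             # eq. 3.5: Omega_a = -Omega_b
    return mu_a * omega_a, mu_b * omega_b          # (Lambda~_a, Lambda~_b)

# Example: water in the inner pipe, oil in the annulus, equal pressure gradients.
# ReHq = -0.1 is a placeholder, not a computed prime-function result.
print(connection_slip_lengths(R1=0.4, theta=math.pi / 3, N=3,
                              mu_a=1e-3, mu_b=5e-3, s_a=1.0, s_b=1.0,
                              ReHq=-0.1))
```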
In the special case \(s_{a}=s_{b}\), the connection coefficients simplify to \[\Omega_{a,s}=-\frac{2\tilde{R}_{1}\cosh^{-1}\left(\sec\left(\frac{\theta}{2}\right)\right)\left(\tilde{R}_{1}^{2}-2\mathrm{Re}(H(q))-1\right)\!\left(\tilde{R}_{1}^{2}-1\right)}{\left(\tilde{R}_{1}^{2}+2\tilde{R}_{1}^{2}\ln\left(\frac{1}{\tilde{R}_{1}}\right)-1\right)\!\left(N\mu_{a}(\tilde{R}_{1}^{2}-2\mathrm{Re}(H(q))-1)+4\tilde{R}_{1}^{2}\mu_{b}\cosh^{-1}\left(\sec\left(\frac{\theta}{2}\right)\right)\right)}, \tag{3.6}\] where, again, \(\Omega_{a,s}=-\Omega_{b,s}\). The subscript \(s\) indicates the same pressure gradient in both flow domains. With both connection coefficients at hand, the normalized local connection slip lengths for both flow domains can easily be determined using equations 3.4(a,b). An interesting special case occurs when the same fluid flows in the inner and the outer pipe, again with \(s_{a}=s_{b}\). The coefficients simplify further and yield \[\tilde{\Lambda}_{a,s,\mu}=-\frac{2\tilde{R}_{1}\cosh^{-1}\left(\sec\left(\frac{\theta}{2}\right)\right)\left(\tilde{R}_{1}^{2}-2\mathrm{Re}(H(q))-1\right)\!\left(\tilde{R}_{1}^{2}-1\right)}{\left(\tilde{R}_{1}^{2}+2\tilde{R}_{1}^{2}\ln\left(\frac{1}{\tilde{R}_{1}}\right)-1\right)\!\left(N(\tilde{R}_{1}^{2}-2\mathrm{Re}(H(q))-1)+4\tilde{R}_{1}^{2}\cosh^{-1}\left(\sec\left(\frac{\theta}{2}\right)\right)\right)}, \tag{3.7}\] with \(\tilde{\Lambda}_{a,s,\mu}=-\tilde{\Lambda}_{b,s,\mu}\). The subscript \(\mu\) indicates that \(\mu_{a}=\mu_{b}\). ## 4 Results and discussion Sections §2 and §3 provide mathematical expressions for the flow field and the effective slip length of pipe and annular flow along rotationally symmetric longitudinal grooves, as well as for their connection at the slit boundary parts to model a pipe-within-pipe geometry. A flow through the aforementioned geometries is thus fully described as a function of the pipe radii, the number of grooves and a maximum local (connection) slip length at the slit centre \(\tilde{\mathfrak{z}}\). ### Imperfection and weighting coefficients Contrary to most literature, the solutions derived in this work do not necessarily have an infinite local slip length at the slit boundary parts, but a potentially finite one. For \(\theta=\pi/2\), radius \(R_{1}=0.5\) and two slits, figure 4 illustrates the progression of the imperfection coefficient \(\alpha\) of the superposed pipe flow (eq. 13) and the weighting coefficients \(\beta_{1},\beta_{2}\) of the superposed annular flow (eq. 39) with increasing local slip length. We consider only \(\tilde{\lambda}\geqslant 0\). The dashed horizontal red line indicates the coefficient limit value of one. With no local slip length at the boundary, \(\alpha\to 0\); accordingly, the second part of the superposed pipe flow, which determines the groove influence, vanishes, leaving behind a Poiseuille flow with no slip at \(|z|=R\). In contrast, the flow field at infinite slip length corresponds to the no-shear solution of Philip (1972\(a\)), since \(\alpha\to 1\). For the superposed annular flow, \(\beta_{1}\) and \(\beta_{2}\) weight the influence of the no-shear solution provided by Crowdy (2021) and the superposed annular Poiseuille flow, respectively. For \(\tilde{\lambda}=0\), \(\beta_{1}\) converges to zero and \(\beta_{2}\) to one; accordingly, eq. 39 transitions into the annular Poiseuille flow with no-slip at \(|z|=R_{1}\) and \(|z|=R_{2}\).
However, at infinite local slip lengths we obtain the no-shear solution of Crowdy (2021), since \(\beta_{1}\to 1\) and \(\beta_{2}\to 0\), representing the ideal limit of a non-viscous interface interaction. As mentioned, the quantity of the local maximum slip length \(\tilde{\lambda}\) is to be determined with respect to the system under consideration. The flow field of the pipe or annulus can therefore be calculated by specifying the local maximum slip length imposed by a microstructure, which depends on the properties of the enclosed fluid and the groove/structure geometry (Ybert _et al._, 2007; Schonecker _et al._, 2014). ### Flow fields for the pipe and annular flow For the pipe flow solution, examples of velocity contour lines are illustrated in figure 5. The left column shows contour plots of the axial velocity, assuming \(N=1,2,4\) no-shear slits along the pipe wall (red) at \(|z|=R\). The right column of figure 5 corresponds to the same pipe geometries, however with finite local slip lengths along the grooves. It is easy to see that a finite slip length reduces the overall velocity of the flow compared to the no-shear solution, as expected. For \(\tilde{\lambda}\to\infty\), the right column transitions into the left, consequently corresponding to the ideal limit of non-viscous fluid-fluid interface interaction. Figure 6 shows contour plots of normalized pressure-driven annular flow solutions, containing \(N=2,3,4\) slits (red) at \(|z|=R_{1}\) and no-slip walls (blue) along the remaining boundaries. Whereas the left column assumes no-shear grooves according to Crowdy's solution, the right features finite slip lengths. ### Connected flow field Now we consider cases where the inner pipe flow is connected to the outer annular flow field and vice versa. Figure 7 illustrates such a case for two slits, an inner radius of \(R_{1}=0.4\) and \(\theta=\pi/2\). In the present example it is assumed that \(\mu_{a}=\mu_{b}\) and \(s_{a}=s_{b}\), which corresponds to both pipe flows containing the same fluid and being driven by an equal pressure gradient. The analytically calculated coupled flow field is shown in figure 7(a). To verify the derived analytic solutions, numerical calculations have been performed with the commercial finite-element solver COMSOL Multiphysics\({}^{\circledR}\). For that, the two-dimensional Poisson equation has been solved for the inner and the outer flow domain, as illustrated in figure 7(b). Both fluid domains are connected at the red boundary parts. It should be noted that the mathematical connection of both flow fields in the numerical calculation is done along the complete fluid-fluid interface. In contrast, the analytical solution is coupled only at one point, the centre of the groove at coordinate \(\tilde{\mathfrak{z}}\). The inner and outer no-slip walls (blue) impose \(\tilde{w}=0\) and are assumed to be infinitely thin, as is the assumption for the connected analytic flow field solution. The triangular mesh was strongly refined along the interface to adequately resolve the connection conditions, resulting in 374752 mesh elements. Figure 4: Progression of imperfection and weighting coefficients with increasing local slip length \(\tilde{\lambda}\) for \(\theta=\pi/2\), \(R_{1}=0.5\) and \(N=2\). a) Imperfection coefficient \(\alpha\). b) Weighting coefficients \(\beta_{1}\) and \(\beta_{2}\).
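The limit behaviour described above can also be checked numerically with the effective slip length formulas, eq. 53 for the finite-shear case and eq. 51 for the no-shear case. In the following minimal sketch, the integral quantities \(S\) and \(I_{-1}\) are treated as precomputed inputs; the numerical values used are placeholders for illustration only.

```python
import math

def lam_eff_finite(R1, beta1, S, Im1):
    """Normalized effective slip length of the finite-shear annular solution (eq. 53)."""
    L = math.log(R1)
    return (R1 * L * beta1 * (Im1 * L - 2.0 * S)
            / (Im1 * L * beta1 - 2.0 * S * (beta1 - 1.0)))

def lam_eff_noshear(R1, S, Im1):
    """Normalized effective slip length of the no-shear solution (eq. 51)."""
    return R1 * math.log(R1) - 2.0 * R1 * S / Im1

# S and I_{-1} follow from the prime-function integrals of section 2;
# the numbers below are placeholders, not computed values.
R1, S, Im1 = 0.5, 0.2, 1.3
for beta1 in (0.9, 0.99, 0.999):
    print(beta1, lam_eff_finite(R1, beta1, S, Im1))
print("no-shear limit (eq. 51):", lam_eff_noshear(R1, S, Im1))
```

As \(\beta_{1}\to 1\), the finite-shear values converge to the no-shear limit, reproducing the convergence stated after equation 53.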
Figure 5: Variation of velocity contour lines through a pipe along rotationally symmetric slits (red) and no-slip boundaries (blue). Comparing the no-shear (normalized eq. 2.5) with the finite-shear (from eq. 2.14) solution. a) No-shear slit: \(N=1,R=0.5,\theta=\pi/4\). b) Finite-shear slit: as a), with \(\tilde{\lambda}=0.2\). c) No-shear slits: \(N=2,R=0.5,\theta=\pi/2\). d) Finite-shear slits: as c), with \(\tilde{\lambda}=0.2\). e) No-shear slits: \(N=4,R=0.3,\theta=\pi/2\). f) Finite-shear slits: as e), with \(\tilde{\lambda}=0.2\). Figure 6: Variation of velocity contour lines through an annular pipe along rotationally symmetric slits (red) and no-slip boundaries (blue). Comparing the no-shear (normalized eq. 29) with the finite-shear (from eq. 42) solution. a) No-shear slits: \(N=2,R=0.5,\theta=\pi/2\). b) Finite-shear slits: as a), with \(\tilde{\lambda}=0.2\). c) No-shear slits: \(N=3,R=0.3,\theta=\pi/3\). d) Finite-shear slits: as c), with \(\tilde{\lambda}=0.1\). e) No-shear slits: \(N=4,R=0.7,\theta=\pi/2\). f) Finite-shear slits: as e), with \(\tilde{\lambda}=0.1\). Figure 7(b) shows the numerically calculated velocity contour plot. The comparison in figure 7 clearly shows the excellent agreement of both flow fields. To investigate the agreement in more detail, a comparison of analytically and numerically calculated shear stresses, velocities and the corresponding local connection slip length distributions along the fluid-fluid interface is given in figure 8. All analytical solutions are based on the combined annular flow field of equation 3.2. The left column of figure 8 examines a pipe-within-pipe flow regime for \(N=2,R_{1}=0.4,\theta=\pi/2\) and the right column one for \(N=2,R_{1}=0.7,\theta=\pi/3\), assuming \(\mu_{a}=\mu_{b}\) and \(s_{a}=s_{b}\) in both cases. The derived analytical solutions assume constant shear along the interface, as required by the ansatz to ensure an explicit solution. The numerical results in figures 8(a) and (b) show that the shear stress along the interface is not constant; the results differ most at the corners of the grooves. Nevertheless, the comparison for both cases considered shows that the plots match remarkably well. This is especially remarkable since the largest deviations are to be expected along the interface anyway, which further underlines the quality of the assumption made. Analyzing the averaged shear stress along the interfaces further reveals the cumulative error to be rather small, since \(\Delta\tilde{\tau}=|\bar{\tilde{\tau}}_{\text{num.}}-\bar{\tilde{\tau}}_{\text{ana.}}|=|0.0377-0.0365|=0.0012\) for the left column and \(\Delta\tilde{\tau}=|-0.159-(-0.165)|=0.006\) for the right column in figure 8, with \[\bar{\tilde{\tau}}=\frac{1}{2\phi}\int_{-\phi}^{\phi}\tilde{\tau}\ \mathrm{d}\varphi, \tag{4.1}\] where \(\phi\) still is the half slit angle. The difference can therefore be considered negligible, thus supporting our original assumption of constant shear. The agreement between the numerical and analytical results for the normalized velocity along the interface is excellent, as can be seen in figures 8(c) and (d) for both geometries. Examining the plots for the local connection slip length distribution in figures 8(e) and (f), it is readily seen that both curves agree well. The discrepancy of the local connection slip length averaged over the interface, \[\bar{\tilde{\Lambda}}=\frac{1}{2\phi}\int_{-\phi}^{\phi}\tilde{\Lambda}\ \mathrm{d}\varphi, \tag{4.2}\]
is \(\Delta\tilde{\Lambda}=|\bar{\tilde{\Lambda}}_{\rm num.}-\bar{\tilde{\Lambda}}_{\rm ana.}|=|1.7710-1.8200|=0.049\) for the left column and \(\Delta\tilde{\Lambda}=|-0.350-(-0.349)|=0.001\) for the right column, respectively. The small observable deviations can be explained by the aforementioned assumption of constant shear stresses at the interface. However, considering that the derived flow field solutions are the result of a superposition approach, the agreement is remarkable. Figure 7: Illustration of the normalized velocity contour lines with \(N=2,R=0.4\) and \(\theta=\pi/2\), assuming \(\mu_{a}=\mu_{b}\) and \(s_{a}=s_{b}\). a) Analytically calculated flow field, from equations 3.1 and 3.2. b) Numerically calculated streamlines, shown for comparison. Figure 8: Comparison of analytically and numerically calculated results along the fluid-fluid interface at \(\tilde{z}=R_{1}e^{i\varphi}\) with \(\varphi\in[-\phi,\phi]\). The analytical solution is based on the combined annular flow field of equation 3.2. Left column: \(N=2,R_{1}=0.4\) and \(\theta=\pi/2\). Right column: \(N=2,R_{1}=0.7\) and \(\theta=\pi/3\). Assuming \(\mu_{a}=\mu_{b}\) and \(s_{a}=s_{b}\) for both cases. a) and b) show the normalized shear stresses, c) and d) the normalized velocities, and e) and f) the corresponding local connection slip length distributions. The previous consideration has been limited to the case \(\mu_{a}=\mu_{b}\) and \(s_{a}=s_{b}\). The interaction of air and water is shown in figure 9, still with the same pressure gradient in both domains. Contour plots of the normalized axial velocities are illustrated, with the two flow fields in each row connected to each other. Each velocity is normalized with the corresponding viscosity of the associated domain, so \(\tilde{w}_{a}=w_{a}\,(\mu_{a}/sR_{0}^{2})\) and \(\tilde{w}_{b}=w_{b}\,(\mu_{b}/sR_{0}^{2})\). The first row of figure 9 shows an air flow (\(\mu=1.8\times 10^{-5}\) Pa s) in the annular pipe connected to a water pipe flow (\(\mu=10^{-3}\) Pa s), with \(N=4,R_{1}=0.5\) and \(\theta=\pi/4\). Although both flows have the same pressure gradient, it can be clearly observed that the air is comparatively strongly accelerated by the connection of both domains, resulting in an almost rotationally symmetric velocity field in the annulus. While not immediately apparent, the velocities at the centre of each groove are equal, which can easily be verified by dividing by the viscosity. Figures 9(c) and (d) show the corresponding reverse case, with \(N=8,R_{1}=0.8\) and \(\theta=\pi/2\). Again, the air, now in the inner tube, is disproportionately accelerated compared to the water in the annulus. Now, in figure 10, the interaction of water with oil (\(\mu=5\times 10^{-3}\) Pa s) is illustrated, again with the same pressure gradient in each domain. The associated contour plots of the normalized velocities are illustrated, with both domains being connected at the centre of the red boundary parts \(\mathfrak{z}\). Figure 9: Illustration of normalized flow field contour lines of a pipe flow (domain a) connected to an annular flow (domain b) with \(\tilde{w}_{a}=w_{a}\,(\mu_{a}/sR_{0}^{2})\), \(\tilde{w}_{b}=w_{b}\,(\mu_{b}/sR_{0}^{2})\) for water (\(\mu=10^{-3}\) Pa s) and air (\(\mu=1.8\times 10^{-5}\) Pa s). a) Air flow through an annular pipe for \(N=4,R_{1}=0.5\) and \(\theta=\pi/4\). b) Water flow through a pipe connected to a). c) Water flow through an annular pipe for \(N=8,R_{1}=0.8\) and \(\theta=\pi/2\). d) Air flow through a pipe connected to c).
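The interface averages of equations 4.1 and 4.2 are plain one-dimensional integrals over the slit and are straightforward to evaluate when post-processing sampled interface data. A minimal sketch using the trapezoidal rule follows; the shear-stress profile used below is synthetic placeholder data, not output of eq. 42 or COMSOL.

```python
import numpy as np

def interface_average(values, angles):
    """Average a sampled quantity over the slit, eqs. 4.1/4.2:
    (1 / (2*phi)) * integral_{-phi}^{phi} f(varphi) dvarphi,
    evaluated with the trapezoidal rule."""
    phi_half = angles[-1]                          # half slit angle phi
    integral = 0.5 * np.sum((values[1:] + values[:-1]) * np.diff(angles))
    return integral / (2.0 * phi_half)

# Synthetic shear-stress samples along the interface (placeholder profile).
phi_half = np.pi / 4.0
angles = np.linspace(-phi_half, phi_half, 201)
tau = 0.0365 * (1.0 + 0.3 * (angles / phi_half) ** 2)
print(interface_average(tau, angles))
```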
Again, each velocity is normalized with the corresponding viscosity in its domain, and the two flow fields in each row are connected to each other. It should be noted that, for better visibility, illustrations 10(b) and (d) in the right column are shown enlarged. The first row considers a water flow in the outer pipe and, correspondingly, oil in the inner one, with \(N=3,R_{1}=0.4\) and \(\theta=\pi/3\). The bottom row is geometrically unchanged, but the fluids are reversed. ### Local slip length for the connected case The local slip length \(\tilde{\lambda}\) at the slit centre represents a certain, yet unknown, groove influence on the bulk pipe or annular flow and is to be determined accordingly. As discussed earlier, it depends on the fluid properties of the enclosed fluid as well as the microstructure geometry. Conventionally, confined grooves without their own pressure gradient are considered. This means that the bulk flow is not additionally accelerated by the groove, but merely slowed down less by the enclosed, passively dragged fluid than by a solid no-slip wall. Figure 10: Illustration of normalized flow field contour lines of a pipe flow (domain a) connected to an annular flow (domain b) with \(\tilde{w}_{a}=w_{a}\,(\mu_{a}/sR_{0}^{2})\), \(\tilde{w}_{b}=w_{b}\,(\mu_{b}/sR_{0}^{2})\) for water (\(\mu=10^{-3}\) Pa s) and oil (\(\mu=5\times 10^{-3}\) Pa s). Note: for better visibility, the illustrations in the right column are shown enlarged. a) Oil flow through an annular pipe for \(N=3,R_{1}=0.4\) and \(\theta=\pi/3\). b) Water flow through a pipe connected to a). c) Water flow through an annular pipe for \(N=3,R_{1}=0.4\) and \(\theta=\pi/3\). d) Oil flow through a pipe connected to c). The considered velocity profile of the primary fluid then has a strictly positive normal derivative (the normal vector points into the fluid zone by definition), as illustrated in figure 1. Assuming a velocity in the positive coordinate direction, the Navier-slip boundary condition dictates the local slip length to be positive. The linked pipe-within-pipe scenario is more general than this classical consideration. By linking the derived pipe flow and annular flow solutions of section §2, mathematical expressions for \(\tilde{\Lambda}_{a}(w_{b})\) and \(\tilde{\Lambda}_{b}(w_{a})\) were obtained in section §3. The local connection slip lengths result intrinsically as functions of the two individually pressure-driven, linked flow fields. Both flow regimes can thus experience a sliding effect as well as an added acceleration effect at the fluid-fluid interface due to the additional pressure gradient in the other fluid zone. From the perspective of one fluid regime, it is now possible that a higher axial velocity exists beyond the interface. This is obviously not possible with purely shear-driven grooves. Following the previous notation, region \(a\) denotes the inner pipe flow and region \(b\) the outer annular flow. Figure 11 shows two examples of the velocity in the \(\tilde{Z}\)-direction as one walks along the real axis, starting from the centre of the inner tube to the outer no-slip wall of the annulus. The red dashed line marks the transition between the two regions, i.e. the interface. When examining the velocity normal gradient at the interface, there are two cases to distinguish.
Considering condition 3.3(b), either \[\frac{\partial w_{a}(\tilde{\mathfrak{z}})}{-\partial\mathbf{n}}=-\frac{\partial w_{b}(\tilde{\mathfrak{z}})}{\partial\mathbf{n}}\quad\mbox{or}\quad\frac{\partial w_{a}(\tilde{\mathfrak{z}})}{-\partial\mathbf{n}}=\frac{\partial w_{b}(\tilde{\mathfrak{z}})}{\partial\mathbf{n}}.\] (4.3a,b) The first case follows from the fact that the velocity gradient of the inner pipe flow is defined in the negative radial coordinate direction, contrary to the annular flow part. This means that the normal derivative must always have a different sign in the two fluid zones. This becomes evident when looking at, e.g., figure 11(a). The local derivative of the axial velocity profile of the inner flow at the interface is negative as the profile decreases, since the derivative is defined in the negative x-direction (\(\partial\tilde{w}/-\partial x\)). In contrast, the derivative is positive for the outer flow, since the velocity still increases until reaching a maximum at \(\tilde{x}\approx 0.5\). The exact opposite signs arise when considering the case of figure 11(b). Equation 4.3(b), in contrast, only occurs when both velocity gradients are equal at the interface, meaning both must be zero; the slit centre must then be an inflection point of the axial velocity function. Hence, the shear stress at the interface is also zero. Locally, this corresponds to the no-shear solutions derived by Philip (1972_a_) and Crowdy (2021), respectively. This is a special case that can be brought about by an appropriate choice of geometry, fluid properties and/or pressure gradients. Figure 11: Change in the normalized axial velocity \(\tilde{w}\) as \(\tilde{z}\) traverses along the real axis from the centre of the pipe (region \(a\)) at \(\tilde{z}=0+\mathrm{i}0\) to the outer no-slip wall in the annular domain (region \(b\)) at \(\tilde{z}=1+\mathrm{i}0\). The red dashed line marks the transition between the two regions. The intersection of both curves marks the velocity in the centre of the interface \(\tilde{z}=\tilde{\mathfrak{z}}\). Assuming \(\mu_{a}=\mu_{b}\) and \(s_{a}=s_{b}\). a) \(N=2,R_{1}=0.4\) and \(\theta=\pi/2\). b) \(N=2,R_{1}=0.7\) and \(\theta=\pi/3\). It should be noted that the derived formulas of section §3 can potentially result in negative local connection slip lengths. This is somewhat unintuitive at first. To clarify, let us consider the Navier-slip condition at the slit centre for the case illustrated in figure 11(a). As established, the velocity profile of the inner pipe flow has a locally negative normal derivative at the interface and the outer annular flow a positive one. At the interface the following must apply: \[\tilde{\Lambda}_{a}\underbrace{\frac{\partial w_{a}(\tilde{\mathfrak{z}})}{-\partial\mathbf{n}}}_{<0}=\tilde{\Lambda}_{b}\underbrace{\frac{\partial w_{b}(\tilde{\mathfrak{z}})}{\partial\mathbf{n}}}_{>0}. \tag{4.4}\] Since \(w_{a}(\tilde{\mathfrak{z}})=w_{b}(\tilde{\mathfrak{z}})\) must hold at the interface, the local connection slip lengths must correspond accordingly, indicating \(\tilde{\Lambda}_{a}<0\) and \(\tilde{\Lambda}_{b}>0\) if \(w(\mathfrak{z})>0\). In the present example, \(\tilde{\Lambda}_{a}<0\) thus ensures that the axial velocity of the internal pipe flow at the interface points in the positive \(\tilde{Z}\)-direction. Additionally, negative connection slip lengths can lead to \(\alpha(\tilde{\Lambda})>1\).
This follows from the simple observation that a flow is driven by the pressure gradient of the linked flow in addition to its own pressure gradient, surpassing the effect of a no-shear boundary. One example is the pipe flow illustrated in figure 10(d), with \(\alpha(\tilde{\Lambda})=2.34496\). The weighting coefficients of the connected annular flow, on the other hand, can additionally be negative. However, this does not result in reverse flow, but may even exceed the case of no-shear slits, as the following example shows. We consider \(R_{1}=0.4,\theta=\pi/3\) and \(N=3\). The inner pipe has water (\(\mu_{a}=0.001\) Pa s) and the annular pipe oil (\(\mu_{b}=0.005\) Pa s) flowing through it, further assuming \(s_{a}=s_{b}\). The weighting coefficients of the annular flow field are accordingly \(\beta_{1}=1.30142\) and \(\beta_{2}=-0.30142\). The associated flow field is shown in figure 10(a). By comparing the velocity in the centre of the slits, it can be shown that the present shear example has overall higher velocities than the no-shear comparison solution, since \(\tilde{w}(\tilde{\mathfrak{z}})=0.065\) and \(\tilde{w}_{\rm ca}(\tilde{\mathfrak{z}})=0.049\). ## 5 Conclusions This article is concerned with modelling pressure-driven pipe flows along longitudinal slits or ridges filled with a second immiscible fluid. In addition to the classical pipe geometry, flow through an annulus containing grooves on its inner wall is also considered. Such flows have numerous applications, including water flowing over superhydrophobic surfaces or along porous media. Surfaces of this type can lead to a significant reduction of flow resistance and are therefore of great importance for the design of energy-efficient technical surfaces. The basis for the corresponding design process is knowledge of the prevailing flow conditions in the immediate vicinity of the aforementioned microstructures and their consequential influence on the bulk flow. Analytical solutions can decipher these complex dynamics and are therefore of fundamental importance. With a superposition technique, analytical approximations for the velocity field of longitudinal flow over rotationally symmetric microstructures in pipes (eq. 2.14) and annuli (eq. 42) were developed. The novelty of these solutions is their dependence on a finite local slip length evaluated at the centre of the fluid-fluid interface. This contrasts with previous solutions by Philip (1972_a_) and Crowdy (2021). Typical solutions to these mixed-boundary-value problems assume perfect interfacial slip, corresponding to an infinite local slip length. This assumption represents an ideal limit of such interface interactions. The extension of these existing approaches performed in this article makes it possible to factor in the actual influence of individual microstructures. Therefore, viscous interaction along the fluid-fluid interface, the flow properties of the enclosed fluid and the groove geometry are now taken into account. With the analytical solutions now available, the flow field within a pipe or an annulus can be fully calculated as a function of rotationally symmetric longitudinal slits. The only prerequisite is knowledge of the local slip length at the centre of the interface, which remains to be determined on a case-specific basis. The equations are valid for single- and multi-phase flow.
From these solutions, analytical expressions for the effective slip lengths are determined, which are an important building block in the numerical investigation of superhydrophobic walls. The derived solutions in section §2 are important fundamental equations for the calculation of slippery pipes. However, a combination of these results leads to an interesting extension, a pipe-within-pipe geometry (discussed in section §3). For that, the pipe and annulus solutions are re-scaled and connected along their interfaces (see figure 3). This yields two interconnected flow regimes whose physical interfacial communication is modelled using local slip lengths as connection coefficients. Such a pipe-within-pipe promises greater operational control over the slippery effects, as unwanted interface protrusion and potential collapse can be prevented by appropriate external pressure control. The solutions derived in section §3 are fully determined, i.e. no longer dependent on unknown local slip lengths. In fact, these are specified by the connection of the two flow regions as functions of the respective other flow. The resulting flow field equation is then only a function of fluid properties and geometry parameters and is therefore easily calculated. Overall, a comparison of the analytical solutions with numerical simulations shows excellent agreement. This is particularly remarkable since the derived equations are the result of a comparatively simple superposition approach. This strongly underlines the validity and quality of the mathematical assumption of constant shear stress along all interfaces for such geometries. Ultimately, the presented equations enable further investigation of case-specific optimization of slippage along microstructured pipe walls. ## Acknowledgments We kindly acknowledge support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Project-ID 467661067.
2308.01686
LiDAR-Camera Panoptic Segmentation via Geometry-Consistent and Semantic-Aware Alignment
3D panoptic segmentation is a challenging perception task that requires both semantic segmentation and instance segmentation. In this task, we notice that images could provide rich texture, color, and discriminative information, which can complement LiDAR data for evident performance improvement, but their fusion remains a challenging problem. To this end, we propose LCPS, the first LiDAR-Camera Panoptic Segmentation network. In our approach, we conduct LiDAR-Camera fusion in three stages: 1) an Asynchronous Compensation Pixel Alignment (ACPA) module that calibrates the coordinate misalignment caused by asynchronous problems between sensors; 2) a Semantic-Aware Region Alignment (SARA) module that extends the one-to-one point-pixel mapping to one-to-many semantic relations; 3) a Point-to-Voxel feature Propagation (PVP) module that integrates both geometric and semantic fusion information for the entire point cloud. Our fusion strategy improves about 6.9% PQ performance over the LiDAR-only baseline on NuScenes dataset. Extensive quantitative and qualitative experiments further demonstrate the effectiveness of our novel framework. The code will be released at https://github.com/zhangzw12319/lcps.git.
Zhiwei Zhang, Zhizhong Zhang, Qian Yu, Ran Yi, Yuan Xie, Lizhuang Ma
2023-08-03T10:57:58Z
http://arxiv.org/abs/2308.01686v2
# LiDAR-Camera Panoptic Segmentation via Geometry-Consistent and Semantic-Aware Alignment ###### Abstract 3D panoptic segmentation is a challenging perception task that requires both semantic segmentation and instance segmentation. In this task, we notice that images could provide rich texture, color, and discriminative information, which can complement LiDAR data for evident performance improvement, but their fusion remains a challenging problem. To this end, we propose LCPS, the first LiDAR-Camera Panoptic Segmentation network. In our approach, we conduct LiDAR-Camera fusion in three stages: 1) an Asynchronous Compensation Pixel Alignment (ACPA) module that calibrates the coordinate misalignment caused by asynchronous problems between sensors; 2) a Semantic-Aware Region Alignment (SARA) module that extends the one-to-one point-pixel mapping to one-to-many semantic relations; 3) a Point-to-Voxel feature Propagation (PVP) module that integrates both geometric and semantic fusion information for the entire point cloud. Our fusion strategy improves about \(6.9\%\) PQ performance over the LiDAR-only baseline on the NuScenes dataset. Extensive quantitative and qualitative experiments further demonstrate the effectiveness of our novel framework. The code will be released at [https://github.com/zhangzw12319/lcps.git](https://github.com/zhangzw12319/lcps.git). ## 1 Introduction 3D scene perception has become an increasingly important task for a wide range of applications, including self-driving and robotic navigation. Lying at the heart of 3D vision, 3D panoptic segmentation is a comprehensive perception task composed of semantic and instance segmentation [15]. It is still challenging since it requires not only predicting semantic labels of each point for _Stuff_ classes, such as _tree_ and _road_, but also recognizing instances for _Thing_ classes, e.g., _car_, _bicycle_, and _pedestrian_, simultaneously. Currently, the leading 3D panoptic methods use LiDAR-only data as input sources. However, we have observed that using only LiDAR data for perception has some insufficiencies: 1) the LiDAR point cloud is usually sparse and unevenly distributed, as illustrated in Figure 1 (a), making it challenging for 3D networks to capture the notable difference between the foreground and the background; 2) distant objects that occupy just a few points appear small in the view and cannot be effectively detected. On the contrary, images provide rich texture and color information, as shown in Figure 1 (b). This observation motivates us to use images as an additional input source to complement LiDAR sensors for scene perception. Moreover, most autonomous driving systems come equipped with RGB cameras, which makes LiDAR-Camera fusion studies more feasible. Although LiDAR sensors and cameras complement each other, their fusion strategy remains challenging. Existing fusion strategies can generally be split into proposal-level fusion [16], result-level fusion [27], and point-level fusion [33, 12, 34], as summarized in PointAugmenting [35]. Figure 1: The distinctions between LiDAR point cloud and images. (a) The red box displays a vehicle segment (orange points) in the point cloud, where points are sparsely and unevenly distributed. (b) The lower-right green mask demonstrates a vehicle with dense texture and color features, effectively detected via [40].
The upper-left blue mask (partly occluded) shows image features that help detect small objects in the distance. Better zoomed in. Yet, proposal-level and result-level fusion focus on integrating 2D and 3D proposals (or bounding-box results) for object detection, which limits their generalizability to dense prediction tasks like segmentation. The previous point-fusion methods also suffer from several issues: 1) the asynchronous working frequencies of LiDAR and camera sensors are not considered, which may result in misaligned feature correspondence; 2) point fusion is a one-to-one fusion mechanism, and large image areas cannot be mapped to sparse LiDAR points, resulting in the waste of abundant information from dense pixel features; e.g., for a 32-beam LiDAR sensor, only about \(5\%\) of pixels can be mapped to correlated points, while the remaining \(95\%\) of pixel features would be dropped [23]; 3) previous point-level fusion methods [33, 12, 34] often use simple concatenation, which excludes points whose projections fall outside the image plane, as image features cannot support them. Motivated by these insufficiencies, we propose the first LiDAR-Camera Panoptic Segmentation (LCPS) network to exploit the complementary information from multiple sensors. In this work, we propose a novel three-stage fusion strategy involving the Asynchronous Compensation Pixel Alignment (ACPA) module, the Semantic-Aware Region Alignment (SARA) module, and the Point-to-Voxel feature Propagation (PVP) module. The ACPA module employs ego-motion compensation operations to achieve spatial-temporal alignment between the LiDAR and camera modalities, overcoming asynchronous issues in point fusion. Then, our novel SARA module extends the one-to-one point-pixel mapping to one-to-many semantic relations, highly improving the image utilization rate. Specifically, SARA introduces Class Activation Maps (CAMs) for the image branch to localize semantic-related image regions for each point. Next, the PVP module replaces simple concatenation with local attention to propagate information from point-aligned pixels and regions to the entire point cloud. Points outside camera frustums can also be preserved and attached to image features. Finally, we design a Foreground Object selection Gate (FOG) module to enforce the network to learn a class-agnostic foreground object mask in addition to the semantic prediction head. This gate effectively reduces incorrect predictions and stabilizes the training process. To sum up, our main contributions are: * To the best of our knowledge, this is the first LiDAR-Camera fusion network for 3D panoptic segmentation, which effectively exploits the complementary information of the LiDAR and image data. * We have improved the former point-fusion approach with our novel Asynchronous Compensation Pixel Alignment (ACPA), Semantic-Aware Region Alignment (SARA), and Point-to-Voxel feature Propagation (PVP) modules. These contribute to the geometry-consistent and semantic-aware alignment between LiDAR and camera sensors. * We present the Foreground Object selection Gate (FOG) to reduce the incorrect predictions of confusing points, further boosting panoptic segmentation quality. * Extensive quantitative and qualitative experiments demonstrate the effectiveness of our approach. Our fusion approach improves performance by \(6.9\%\) PQ on NuScenes and \(3.3\%\) PQ on SemanticKITTI compared to the LiDAR-only baseline. ## 2 Related Work Panoptic segmentation was initially proposed in 2D vision [15] for the purpose of integrating semantic and instance segmentation.
Later, research on panoptic segmentation extended to videos and LiDAR point clouds. The early work LPSAD [25] handles LiDAR panoptic segmentation by projecting points into the range view and then using a 2D convolutional network to extract features. Although a pure 2D network can boost efficiency, it also suffers performance degradation when mapping 2D predictions back to the point cloud. Later, 3D LiDAR networks were designed for this task. Generally, 3D panoptic segmentation can be divided into two categories, i.e., proposal-based and proposal-free methods. **Proposal-based 3D Panoptic Segmentation.** Proposal-based methods such as Panoptic-Deeplab [6] and EfficientLPS [30] predict bounding-box proposals and then merge them with semantic results to obtain panoptic predictions, following the classical object detection framework [5, 9]. However, proposal-based methods tend to result in inconsistent segmentation between instance and semantic branches. Moreover, the segmentation result is susceptible to the quality of object detection. **Proposal-free 3D Panoptic Segmentation.** Proposal-free methods abandon object proposals and predict object centers and point offsets instead. The post-processing module then clusters points into instance groups according to the object centers and point offsets. DS-Net [11] proposes a dynamic-shifting mechanism that moves instance points toward their possible centers for Mean Shift clustering. SMAC-Seg [17] and SCAN [38] attempt to use attention modules on multi-directional or multi-scale feature maps. GP-S3Net [28] constructs a dynamic graph composed of foreground clusters as graph nodes, processed by a graph convolutional network for the instance segmentation branch. Panoptic-PolarNet [41] projects 3D features into BEV and utilizes a learnable BEV heatmap with non-maximum suppression (NMS) to predict centers. Following Panoptic-PolarNet's BEV design, Panoptic-PHNet [19] improves center and offset generation by replacing NMS with a center grouping module that merges duplicated centers, as well as augmenting offsets via a KNN-Transformer. For now, Panoptic-PHNet has achieved 1st place on NuScenes and SOTA performance on the SemanticKITTI benchmark. Nevertheless, sparse and uneven LiDAR points impose a large variance on center and offset predictions in bird's-eye view and thus become a bottleneck for current SOTA approaches. RGB images can compensate for LiDAR features, which motivates us to design LCPS. **LiDAR-Camera Fusion Models.** In object detection and semantic segmentation, pioneering research already considers modal fusion between images and LiDAR points. For example, PMF [43] projects LiDAR points to the perspective view and proposes a two-branch 2D network to extract semantic features with an attentive fusion module. TransFuser [26] and TransFusion [1] consider utilizing transformers to fuse 3D LiDAR points and 2D images. DeepFusion [20] focuses on how to avoid feature misalignment when extensive data augmentation is performed in both the LiDAR and camera branches. However, multi-modal panoptic segmentation, with its accompanying asynchrony and utilization issues, has yet to be explored. ## 3 Methodology ### Overview **Problem Formulation.** This paper considers 3D panoptic segmentation [7].
Formally, we denote a set of LiDAR points as \(\{(x_{i}^{\text{3D}},f_{i}^{\text{3D}})\,|\,x_{i}^{\text{3D}}\in\mathbb{R}^{3},f_{i}^{\text{3D}}\in\mathbb{R}^{C}\}_{i=1}^{N}\), where \(N\), \(x_{i}^{\text{3D}}\) and \(f_{i}^{\text{3D}}\) represent the total number of points, the 3D positions, and the point features of \(C\) dimensions, respectively. This task requires predicting a unique semantic class \(\{\hat{y}_{i}^{\text{3D}}\}_{i=1}^{N}\) for each point and accurately identifying groups of points as foreground objects with an instance ID, denoted as \(\{\text{ID}_{i}^{\text{3D}}\}_{i=1}^{N}\). Besides, we assume that \(K\) surrounding cameras, which are cheap and common, capture images associated with the LiDAR frame for LiDAR-Camera fusion. Similarly, we represent each image as a set of pixels \(\{(x_{k,i}^{\text{2D}},f_{k,i}^{\text{2D}})\,|\,x_{k,i}^{\text{2D}}\in\mathbb{R}^{2},f_{k,i}^{\text{2D}}\in\mathbb{R}^{C}\}_{i=1,k=1}^{i=N^{\prime},k=K}\), where \(N^{\prime}\), \(x_{i}^{\text{2D}}\), \(f_{i}^{\text{2D}}\) and \(k\) represent the total number of pixels, the 2D positions, the pixel features, and the camera index, respectively. Our primary objective in this paper is to improve panoptic segmentation performance by fully exploring the complementary information in the LiDAR and camera sensors. **Pipeline Architecture.** The framework in Figure 2 consists of a multi-modal encoding module, a LiDAR-Camera feature fusion module, and a panoptic prediction module. Figure 2: The overall pipeline of our LiDAR-Camera Panoptic Segmentation network (LCPS). LCPS consists of multi-modal encoding, feature fusion, and panoptic prediction modules. The encoding module extracts cylinder features, MLP features, and image features. In the fusion stage, MLP features are geometrically and semantically aligned with pixel features via ACPA and SARA. Next, the PVP module merges fused point features with original cylinder features to obtain fused ones. Finally, the panoptic prediction module yields predictions of four heads, which are post-processed to obtain panoptic segmentation results. In the encoding stage, the LiDAR points are respectively encoded by a cylindrical voxel encoder and an MLP encoder, while the images are encoded using SwiftNet [36]. In the fusion stage, the MLP features and image features, which are not strictly correlated, are first aligned through the proposed Asynchronous Compensation Pixel Alignment and Semantic-Aware Region Alignment, and then concatenated into fused point features. Subsequently, our Point-to-Voxel feature Propagation (PVP) module accepts the fused point features and outputs the final cylinder representation. In the prediction stage, the backbone network includes the proposed FOG head, a semantic segmentation head, a heatmap head, and an offset head. The latter two heads follow Panoptic-PolarNet [41], where we regress a binary object center mask and 2D offsets among bird's-eye-view grids. During inference, the post-processing shifts the predicted foreground BEV grids to their nearest centers and clusters the points within the grids into instances. ### Asynchronous Compensation Pixel Alignment A straightforward solution [21, 33, 44] for fusing LiDAR and camera data is to establish point-to-pixel mappings, such that points can be directly projected onto image planes and decorated with pixel features. However, this mapping leads to false correspondences due to the asynchronous frequencies of cameras and LiDAR sensors.
For instance, on the NuScenes dataset, each camera operates at a frequency of \(12\) Hz, while the LiDAR sensor operates at \(20\) Hz. Motivated by this, we improve point-level fusion by incorporating additional asynchronous compensation to achieve a consistent geometric alignment over time. The fundamental idea is to transform the LiDAR points into the 3D coordinate system of the time when the corresponding images were captured. The transformation matrix is obtained by considering the ego vehicle's motion matrix. Specifically, let \(t_{1}\) and \(t_{2}\) denote the times when the LiDAR point cloud and the related images are captured. Then we have: **Step-1.** Transform LiDAR points from world coordinates to ego-vehicle coordinates at time \(t_{1}\). By multiplying with the coordinate transformation matrix \(\mathbf{T}_{t_{1}}^{\text{W}\rightarrow\text{V}}\) provided by the dataset, we obtain the 3D position in the ego-vehicle coordinate system, denoted as \(\hat{x}_{i}^{\text{3D}}\). **Step-2.** Transform LiDAR points in ego-vehicle coordinates from time \(t_{1}\) to time \(t_{2}\). To achieve this, a time-variant transformation matrix is required, denoted \(\mathbf{T}_{t_{1}\to t_{2}}^{\text{V}\rightarrow\text{V}}\). However, such a matrix is often not directly available in datasets. Instead, the ego vehicle's motion matrices from the current frame to the first frame are often provided for each sliced sequence. Therefore, we can factor \(\mathbf{T}_{t_{1}\to t_{2}}^{\text{V}\rightarrow\text{V}}\) into the product of \((\mathbf{T}_{t_{2}\to t_{0}}^{\text{V}\rightarrow\text{V}})^{-1}\) and \(\mathbf{T}_{t_{1}\to t_{0}}^{\text{V}\rightarrow\text{V}}\), where \(t_{0}\) is the time of the first frame. Using this ego-motion transformation matrix, we obtain the point position in ego-vehicle coordinates at time \(t_{2}\), denoted as \(\tilde{x}_{i}^{\text{3D}}\). **Step-3.** Obtain pixel features at time \(t_{2}\). Using the camera extrinsic and intrinsic matrices (\(\mathbf{E}_{k}\) and \(\mathbf{I}_{k}\)), we get the projected 2D position \(\tilde{x}_{k,i}^{\text{2D}}\) of each point in the \(k_{\text{th}}\) image plane at time \(t_{2}\). After excluding the points whose projections fall outside the image plane, the resulting pixel features \(\{\tilde{f}_{k,i}^{\text{2D}}\}_{i=1}^{N_{k}}\) are indexed by \(\tilde{x}_{k,i}^{\text{2D}}\). \(N_{k}\) is the number of points inside the image plane (\(N_{k}<N\)). These homogeneous transformation steps can be summarized in the following equation: \[\left[\begin{array}{c}\tilde{x}_{k,i}^{\text{2D}}\\ 1\end{array}\right]=\mathbf{I}_{k}\mathbf{E}_{k}\mathbf{T}_{t_{1}\to t_{2}}^{\text{V}\rightarrow\text{V}}\mathbf{T}_{t_{1}}^{\text{W}\rightarrow\text{V}}\left[\begin{array}{c}x_{i}^{\text{3D}}\\ 1\end{array}\right]. \tag{1}\] In summary, we obtain pixel-aligned features for each point using Equation 1; a minimal sketch of this projection chain is given below. Our approach adopts ego-motion compensation via Step-2, resulting in a simple but more accurate geometry-consistent feature alignment. ### Semantic-Aware Region Alignment Due to the sparse nature and limited eyeshot of LiDAR point clouds, only a small fraction of image features can be matched with LiDAR points. To address this issue, we propose to find semantically relevant regions, extending the one-to-one mapping to one-to-many relations. Inspired by _Class Activation Maps_ (CAMs) [39, 24], we present the Semantic-Aware Region Alignment module, which uses image CAMs to localize relevant semantic regions, as illustrated in Figure 3 (a). Figure 3: (a) Overview of the SARA module, which employs a pixel-wise semantic classifier, constructs CAMs, and locates semantic regions. (b) Overview of the PVP module, which involves a cylindrical partition of fused point features and attentive propagation. Better zoomed in.
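To make the projection chain of Equation 1 concrete, here is a minimal NumPy sketch of the three ACPA steps. It assumes, for illustration, that all pose matrices are packaged as \(4\times 4\) homogeneous transforms and the camera intrinsics as a \(3\times 4\) projection matrix; the released code may organize these differently, and the final perspective divide (implicit in the homogeneous notation of Equation 1) is made explicit here.

```python
import numpy as np

def acpa_project(points_w, T_w2v_t1, T_v2first_t1, T_v2first_t2, E_k, I_k):
    """Sketch of the ACPA projection chain of Equation 1.

    points_w     : (N, 3) LiDAR points in world coordinates at time t1
    T_w2v_t1     : (4, 4) world -> ego-vehicle transform at t1
    T_v2first_t1 : (4, 4) ego-vehicle at t1 -> first frame (T_{t1->t0})
    T_v2first_t2 : (4, 4) ego-vehicle at t2 -> first frame (T_{t2->t0})
    E_k          : (4, 4) extrinsics of camera k
    I_k          : (3, 4) intrinsics of camera k (assumed packaging)
    """
    n = points_w.shape[0]
    pts = np.concatenate([points_w, np.ones((n, 1))], axis=1).T  # (4, N) homogeneous
    # Step-2: ego-motion compensation, T_{t1->t2} = (T_{t2->t0})^-1 @ T_{t1->t0}
    T_t1_to_t2 = np.linalg.inv(T_v2first_t2) @ T_v2first_t1
    # Steps 1-3 chained exactly as in Equation 1
    cam = I_k @ E_k @ T_t1_to_t2 @ T_w2v_t1 @ pts                # (3, N)
    uv = cam[:2] / cam[2:3]                                      # perspective divide
    in_front = cam[2] > 0                                        # keep points ahead of camera
    return uv.T, in_front
```

Points whose projections fall outside the image bounds would additionally be filtered, as described in Step-3.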
Inspired by _Class Activation Maps_ (CAMs) [39, 24], we present a Semantic-Aware Region Alignment module by using image CAMs to localize relevant semantic regions, as illustrated in Figure 3 Figure 3: (a) Overview of the SARA module, which employs pixel-wise semantic classifier, constructs CAMs and locates semantic regions. (b) Overview of the PVP Module, which involves a cylindrical partition of fused point features and attentive propagation. Better zoomed in. (a). **Step-1.** We first introduce a pixel-wise semantic classifier \(\phi^{\text{2D}}(\cdot)\) to learn the semantic information in the image branch, and define \(\mathbf{\Theta}^{\text{2D}}\in\mathbb{R}^{M\times C}\) as the classifier parameters, where \(M\) is the number of semantic categories. Based on the observation that projected pixels share the same semantic category with matched points, we use point labels to train the image classifier with cross-entropy loss: \[\mathcal{L}_{\text{2D}}=-\frac{1}{N_{k}}\sum_{i=1}^{N_{k}}y_{i}^{\text{3D}}\log (\hat{y}_{i}^{\text{2D}}), \tag{2}\] where \(\hat{y}_{i}^{\text{2D}}\) and \(y_{i}^{\text{3D}}\) denote the predicted pixel label and related ground-truth point label (such alignment is obtained in Section 3.2), and \(N_{K}\) represents the number of points which can be projected into the \(k\)-th image plane. **Step-2.** We use this classifier to generate the class activation maps (CAMs). Let \(\mathbf{F}_{k}^{\text{2D}}\in\mathbb{R}^{C\times H^{\text{2D}}\times W^{\text {2D}}}\) be the image feature map extracted by the last convolution layer, and \(H^{\text{2D}}\) and \(W^{\text{2D}}\) are the height and width of image feature maps. We can then obtain CAMs using the following formula: \[\mathbf{F}_{k}^{\text{CAM}}=\mathbf{\Theta}^{\text{2D}}\times\mathbf{F}_{k}^{\text {2D}}, \tag{3}\] where \(\times\) denotes the matrix multiplication. The generated CAMs are represented by \(\mathbf{F}_{k}^{\text{CAM}}\in\mathbb{R}^{M\times H^{\text{2D}}\times W^{ \text{2D}}}\). Each channel in CAM is a \(H^{\text{2D}}\times W^{\text{2D}}\) heatmap related to a specific semantic category. **Step-3.** For each LiDAR point, we use the generated CAMs to localize sets of pixels as semantic-related image regions. We design a filtering gate \(\mathbf{G}_{k,i}^{y}\in\mathbb{R}^{H^{\text{2D}}\times W^{\text{2D}}}\), constructed by selecting a single heatmap of class \(y\) from CAMs \(F_{k}^{\text{CAM}}\) according to the ground-truth or predicted pixel label. The gate is controlled by subtracting a predefined confidence threshold \(\tau\). Pixels with heatmap values lower than that threshold will be set to zero in \(\mathbf{G}_{k,i}^{y}\). Finally, we get a set of related pixels: \[\{f^{\text{CAM}}\}_{k,i}=\text{\emph{Flatten}}(\sigma(\mathbf{G}_{k,i}^{y} \otimes\mathbf{F}_{k}^{\text{2D}})), \tag{4}\] where \(\otimes\) denotes element-wise multiplication, and \(\sigma\) denotes the activation function. _Flatten_ function is adopted to transform features from matrix format \(C\times H^{\text{2D}}\times W^{\text{2D}}\) into a set format \((H^{\text{2D}}W^{\text{2D}})\times C\), followed by discarding zero vectors which is filtered by \(G_{k,i}\). Consequently, we obtain a set of pixel features \(\{f^{\text{CAM}}\in\mathbb{R}^{C}\}_{k,i}\) for each LiDAR point \(i\) and each camera \(k\). We finally average the set of region features to a single vector, then concatenate it with the MLP output and pixel-aligned features to constitute the fused point features. 
In summary, unlike one-to-one pixel alignment via pure geometric projection, the image regions are collected directly in a one-to-many semantic-aware manner. ### Point-to-Voxel Feature Propagation Image features cannot directly support the points outside the camera frustum; therefore, these points are usually excluded [29, 33, 12]. To overcome this problem, we propose Point-to-Voxel feature Propagation to integrate both geometric and semantic information for the entire point cloud. To this end, we choose cylindrical voxels as the bridge to complete the fusion process, since the tensor shape of the voxel representation is invariant to the alteration of point numbers, which naturally provides an alignment between the original point cloud and the image-related point cloud subset. As shown in Figure 3 (b), a cylindrical encoder first encodes the original point cloud into voxels. Meanwhile, for the fused point features, we first align their channel dimensions with the original voxels using an MLP, and then divide these fused points into another set of cylindrical voxels, where features are scattered and pooled within the same voxel to obtain voxel features. A noticeable observation is that a LiDAR point may be aligned with more than one camera, resulting in multiple fused point features for a single point. Therefore, we treat such multiple features as multiple points at the same 3D position during voxelization. Then we propagate the voxels of the fused point features (denoted as \(\vartheta^{\text{im}}\)) to the original cylindrical voxels (denoted as \(\vartheta^{\text{p}}\)) using modified local attention [32]. In this attention mechanism, each voxel of \(\vartheta^{\text{p}}\) acts as a query \(Q\), while the neighboring \(27\) \(\vartheta^{\text{im}}\) voxels act as keys \(K\) and values \(V\). The computation is given by: \[\text{Att}(\vartheta^{\text{p}},\vartheta^{\text{im}},\vartheta^{\text{im}})=\text{Softmax}(\frac{QK^{T}}{\sqrt{C}})V, \tag{5}\] where \(C\) is the channel dimension. After that, we add the attentive voxels to the original \(\vartheta^{\text{p}}\) to form a residual connection, as shown in the following equation: \[\vartheta=\text{Att}(\vartheta^{\text{p}},\vartheta^{\text{im}},\vartheta^{\text{im}})+\vartheta^{\text{p}}. \tag{6}\] Through this attentive propagation, information from the entire point cloud and multiple cameras is comprehensively integrated into a single cylindrical voxel representation \(\vartheta\). ### Improved Panoptic Segmentation Here we briefly describe the Foreground Object selection Gate (FOG) head and the loss functions for panoptic prediction. Other implementation details are given in Section 4.2 and the Appendix. **Foreground Object Selection Gate.** In Panoptic-PolarNet [41], the panoptic network diverges into three prediction heads for semantic labels, centers, and offsets. However, we find that semantic predictions largely affect the final quality of panoptic segmentation. This is because the center and offset heads only provide class-agnostic predictions, while accurate semantic information is required for post-processing to cluster foreground grids to their nearest object centers. Inspired by [22], we propose FOG, a Foreground Object selection Gate, to enhance the original semantic classifier. FOG is a binary classifier aiming to differentiate foreground objects.
Given the voxel features obtained from the backbone network, \(\vartheta^{\text{b}}\in\mathbb{R}^{H\times W\times Z}\), FOG predicts a class-agnostic binary mask \(y^{\text{FOG}}\in[0,1]^{H\times W\times Z}\), which is supervised by a binary cross-entropy loss \(\mathcal{L}^{\text{BCE}}\). As a result, the foreground mask complements the semantic head by filtering out background points in the post-processing period. **Loss Designs.** The total loss is given by the following equation: \[\mathcal{L}^{\text{total}}=\alpha_{1}(\mathcal{L}^{\text{CE}}+\mathcal{L}^{\text{Lovasz}})+\alpha_{2}\mathcal{L}^{\text{MSE}}+\alpha_{3}\mathcal{L}^{\text{L1}}+\alpha_{4}\mathcal{L}^{\text{BCE}}+\alpha_{5}\mathcal{L}^{\text{2D}}. \tag{7}\] The first four terms are based on Panoptic-PolarNet [41]. \(\mathcal{L}^{\text{CE}}\) and \(\mathcal{L}^{\text{Lovasz}}\) represent the cross-entropy loss and the Lovász loss [4] for semantic supervision. \(\mathcal{L}^{\text{MSE}}\) is a Mean-Squared-Error (MSE) loss for BEV center heatmap regression. \(\mathcal{L}^{\text{L1}}\) is an L1 loss for BEV offset regression. In addition, the last two terms are new in this paper: \(\mathcal{L}^{\text{BCE}}\) is the binary cross-entropy loss used for the FOG head, and \(\mathcal{L}^{\text{2D}}\) is the pointly-supervised loss for region fusion, given by Equation 2. \(\alpha_{2}\) and \(\alpha_{3}\) are set to 100 and 10 respectively, while the other three weights are set to 1. ## 4 Experiments In this section, we evaluate our proposed LiDAR-Camera Panoptic Segmentation network on the NuScenes [7] and SemanticKITTI [3] datasets, making comparisons with recent state-of-the-art methods. ### Datasets and Evaluation Metrics **NuScenes** is a large-scale multi-modal dataset for autonomous driving. It contains a 32-beam LiDAR, 5 radars, 6 RGB cameras and maps, covering 1000 real-world driving scenes at 4 locations in Boston and Singapore. There are 850 annotated scenes for training and 150 for testing. The panoptic annotations contain 10 _Thing_ classes, 6 _Stuff_ classes and 1 class for noisy labels. **SemanticKITTI** is a pioneering outdoor dataset presenting the panoptic segmentation task on LiDAR data [3, 2, 8]. It provides a 64-beam LiDAR sensor and two front-view cameras. The dataset contains 8 _Thing_ classes and 11 _Stuff_ classes, consisting of 19130 frames for training, 4071 for validation, and 20351 for testing. **Evaluation Metrics.** We assess panoptic segmentation via panoptic quality (PQ), segmentation quality (SQ), and recognition quality (RQ) [7]. Metrics with superscripts th and st (e.g., \(PQ^{\text{th}}\)) represent performance on _Thing_ or _Stuff_ classes, respectively. Meanwhile, we also report the semantic segmentation metric (mIoU) [3]. ### Implementation Details **Backbone Network.** Cylinder3D [42, 11] is adopted as our backbone network in Figure 2 due to its reliable LiDAR perception ability with the cylindrical voxel representation. For NuScenes, the entire point cloud is divided into \(480\times 360\times 32\) voxels for the \([-100\text{m}\sim 100\text{m},0\sim 2\pi,-5\sim 3\text{m}]\) polar volume of the scenery. For SemanticKITTI, we only change the perception distance from \(100\) m to \(60\) m. **Settings and Hyper-parameters.** Following common practice [19, 38], we apply random flip augmentation along the \(y\)-axis for the point cloud and images accordingly, and random rotation for the point cloud only. These LiDAR augmentations are applied after precomputing the point-pixel alignment.
The performance gains from data augmentations are already included in the LiDAR-only baseline results for fair comparisons, as shown in the first line of Table 4. We train our model for 120 epochs with a batch size of \(2\), using the Adam optimizer [14]. The initial learning rate is 0.004 and is reduced to \(0.0004\) after 100 epochs. For SARA described in Section 3.3, the filtering parameter \(\tau\) is set to 0.7. During inference, all operations are performed in BEV grids, where centers are picked from a dynamic heatmap using non-maximum-suppression with a kernel size of 5 and a value threshold of 0.1. Other setting details are described in the Appendix.

### Main Results

In this section, we make extensive comparisons with other state-of-the-art methods and with our LiDAR-only baseline. Specifically, the baseline network excludes the image branch, the feature fusion module, and FOG in Figure 2.

**Results on NuScenes.** In Table 1, our approach outperforms the best Panoptic-PHNet [19] by a margin of \(5.1\%\) PQ (\(79.8\%\) vs. \(74.7\%\)) on the validation set. Primarily, we achieve a large gain of \(4.3\%\) RQ and \(7.1\%\) RQ\({}^{\text{th}}\), which mainly drives the overall improvement. Compared with the LiDAR-only baseline, our method shows a significant improvement of \(6.9\%\) PQ in total, demonstrating the effectiveness of our LiDAR-Camera fusion strategy. On the test set, we also achieve SOTA results comparable to Panoptic-PHNet [19] without using test-time augmentation, and a \(6.7\%\) PQ increase compared with our LiDAR-only baseline. Evidence from the class-wise comparison on the NuScenes validation set also consolidates the effectiveness of our fusion strategy. Figure 4 shows an overall improvement among various _Thing_ and _Stuff_ categories. Specifically, for _Thing_ objects like _bicycle_, _bus_, _construction vehicle_, _motorcycle_, and _traffic cone_, our method outperforms Panoptic-PHNet by a large margin (\(9.3\%\) on average for these 5 _Thing_ classes), which demonstrates the ability of our approach to distinguish sparse, distant and rare objects by taking advantage of image features.

**Results on SemanticKITTI.** Here, we list the comparison results on the SemanticKITTI validation set in Table 3. Since SemanticKITTI has only two front-view cameras, fewer points can be matched with image features compared with NuScenes, which increases the difficulty of LiDAR-Camera fusion. Nevertheless, we observe an increase of \(3.3\%\) PQ over the LiDAR-only baseline, demonstrating the robustness and effectiveness of our fusion strategy.

**Ablation on Fusion Modules.** In Table 4, we ablate the Asynchronous Compensation Point Alignment (ACPA), Semantic-Aware Region Alignment (SARA), and Point-to-Voxel feature Propagation (PVP). It is observed that ACPA with simple concatenation (SC) brings an improvement of \(3.9\%\) PQ (contrasting line 1 and line 2) and an improvement of \(4.6\%\) PQ with the PVP module (contrasting line 1 and line 3). Another \(1.7\%\) PQ gain is achieved when combined with SARA (contrasting line 3 and line 4). This verifies our designs for a geometry-consistent and semantic-aware LiDAR-Camera fusion strategy.

**Ablation on FOG Mask.** We test the influence of the FOG mask and observe an improvement of \(0.6\%\) PQ (contrasting line 4 and line 5). It suggests that the FOG mask brings additional supervision to the backbone network and further augments the semantic prediction in the post-processing grouping (sketched below).
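To make the role of FOG in post-processing concrete, the following is a minimal, hypothetical sketch of the grouping step: BEV cells predicted as _Thing_ classes that also pass the FOG foreground gate are clustered to the nearest center picked by NMS. The data layout, threshold, names, and the squared-distance rule are illustrative assumptions, not the paper's code.

```
#include <cstddef>
#include <vector>

struct Center { float x, y; };

// FOG-gated instance grouping over BEV cells: foreground cells are assigned
// to the nearest object center; everything else keeps id -1 (Stuff/filtered).
std::vector<int> groupInstances(const std::vector<float>&  fogProb,  // FOG foreground probability
                                const std::vector<bool>&   isThing,  // semantic head predicts a Thing
                                const std::vector<float>&  x,        // BEV x, already offset-shifted
                                const std::vector<float>&  y,        // BEV y, already offset-shifted
                                const std::vector<Center>& centers,  // centers picked by NMS
                                float gate = 0.5f) {
    std::vector<int> inst(fogProb.size(), -1);
    for (std::size_t i = 0; i < fogProb.size(); ++i) {
        if (!isThing[i] || fogProb[i] < gate) continue;   // FOG filters background cells
        float best = 1e30f;
        for (std::size_t c = 0; c < centers.size(); ++c) {
            float dx = x[i] - centers[c].x, dy = y[i] - centers[c].y;
            float d2 = dx * dx + dy * dy;                 // squared distance suffices
            if (d2 < best) { best = d2; inst[i] = static_cast<int>(c); }
        }
    }
    return inst;
}
```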
### Qualitative Results and Discussion

**Visualization of Panoptic Predictions.** In Figure 5, we compare our visual predictions among the ground truth (GT), the baseline, and the full network (Full Predictions). The following observations can be made: 1) Our architecture achieves effective semantic and instance segmentation in challenging scenarios, like crowds of pedestrians and vehicles (see Figure 5 (a)(b)(e)(f)); 2) Our LiDAR-Camera fusion strategies achieve robust segmentation quality at nighttime with the complementary information from surrounding cameras (see Figure 5 (c)(d)); 3) FOG helps filter confusing points and noise points, making segmentation quality more robust (see Figure 5 (a)(b)).

**Visualization of Class Activation Maps.** We further verify the quality of the generated Class Activation Maps (CAMs) in Figure 6, which constitute the semantic-aware regions in images. The red color illustrates higher semantic correlations, while the blue color refers to lower ones. It demonstrates that our SARA module generates highly correlated alignments among various categories, effectively extending the one-to-one mapping to semantic-aware one-to-many relations.

## 5 Conclusion

In this paper, we are the first to propose a geometry-consistent and semantic-aware LiDAR-Camera Panoptic Network. As a new paradigm, we effectively exploit complementary information from LiDAR-Camera sensors and make essential efforts to overcome the asynchronization and feature-utilization problems via Asynchronous Compensation Point Alignment (ACPA), Semantic-Aware Region Alignment (SARA), Point-to-Voxel feature Propagation (PVP), and the Foreground Object selection Gate (FOG) mask. These modules enhance the overall discriminability and performance. We hope that our thought-provoking multi-modal fusion practice can benefit future research.

**Acknowledgement** The research is supported by National Natural Science Foundation of China (No.62222602, No.61972157 and No.72192821), Shanghai Municipal Science and Technology Major Project (2021SHZDZX0102), Shanghai Science and Technology Commission (21511101200), Shanghai Sailing Program (22YF1420300 and 23YF1410500), Natural Science Foundation of Chongqing (No.CSTB2023NSCQ-JQX0007), CCF-Tencent Open Research Fund (RAGR20220121), Young Elite Scientists Sponsorship Program by CAST (2022QNRC001) and CAAI-Huawei MindSpore Open Fund.

Figure 5: The overview of qualitative results from the NuScenes validation set. (a) and (b) are visualization comparisons among the ground truth (denoted as GT), the baseline predictions (Baseline), and full LCPS predictions (Full). Red circles emphasize the notable differences. We find that various _Thing_ and _Stuff_ objects can be predicted more accurately. (c) and (d) demonstrate semantic segmentation quality at nighttime. (e) and (f) verify the robust instance segmentation ability of our network. Best viewed zoomed in.

Figure 6: The visualization results of semantic-aware regions filtered by CAMs.
2304.08211
Dynamically Reconfigurable Variable-precision Sparse-Dense Matrix Acceleration in Tensorflow Lite
In this paper, we present a dynamically reconfigurable hardware accelerator called FADES (Fused Architecture for DEnse and Sparse matrices). The FADES design offers multiple configuration options that trade off parallelism and complexity using a dataflow model to create four stages that read, compute, scale and write results. FADES is mapped to the programmable logic (PL) and integrated with the TensorFlow Lite inference engine running on the processing system (PS) of a heterogeneous SoC device. The accelerator is used to compute the tensor operations, while the dynamically reconfigurable approach can be used to switch precision between int8 and float modes. This dynamic reconfiguration enables better performance by allowing more cores to be mapped to the resource-constrained device and lower power consumption compared with supporting both arithmetic precisions simultaneously. We compare the proposed hardware with a high-performance systolic architecture for dense matrices obtaining 25% better performance in dense mode with half the DSP blocks in the same technology. In sparse mode, we show that the core can outperform dense mode even at low sparsity levels, and a single-core achieves up to 20x acceleration over the software-optimized NEON RUY library.
Jose Nunez-Yanez, Andres Otero, Eduardo de la Torre
2023-04-17T12:31:50Z
http://arxiv.org/abs/2304.08211v1
# Dynamically Reconfigurable Variable-precision Sparse-Dense Matrix Acceleration in Tensorflow Lite

###### Abstract

In this paper, we present a dynamically reconfigurable hardware accelerator called FADES (Fused Architecture for DEnse and Sparse matrices). The FADES design offers multiple configuration options that trade off parallelism and complexity using a dataflow model to create four stages that read, compute, scale and write results. FADES is mapped to the programmable logic (PL) and integrated with the TensorFlow Lite inference engine running on the processing system (PS) of a heterogeneous SoC device. The accelerator is used to compute the tensor operations, while the dynamically reconfigurable approach can be used to switch precision between _int8_ and _float_ modes. This dynamic reconfiguration enables better performance by allowing more cores to be mapped to the resource-constrained device and lower power consumption compared with supporting both arithmetic precisions simultaneously. We compare the proposed hardware with a high-performance systolic architecture for dense matrices, obtaining 25% better performance in dense mode with half the DSP blocks in the same technology. In sparse mode, we show that the core can outperform dense mode even at low sparsity levels, and a single core achieves up to 20x acceleration over the software-optimized NEON RUY library.

neural network, FPGA, sparse, pruning, matrix multiplication acceleration, TensorFlow

## 1 Introduction

In this research, we present the FADES (Fused Architecture for DEnse and Sparse tensor processing) dataflow engine and its extension with dynamically reconfigurable (i.e. Xilinx DFX) capabilities to support floating-point and 8-bit precision arithmetic. DFX enables the modification of blocks of logic by downloading partial bit files that change the functionality on-the-fly without interrupting the operation of the rest of the system. DFX makes more efficient use of the silicon, allowing designers to move to smaller devices and reduce power. FADES is integrated as an accelerator for TensorFlow Lite (TFLite), TensorFlow's lightweight solution for mobile and embedded devices suitable for edge deployment, promoted by Google. This integration means that FADES benefits from extensive prior development research done by the TensorFlow Lite community on sophisticated quantization-aware training and pruning [1] and directly replaces the high-performance matrix multiplication library RUY 1. A hardware accelerator that can perform both floating-point and 8-bit arithmetic is useful because TFLite natively supports both precision data types. Deep learning models are continuously evolving, so there is a need for flexible hardware solutions that can scale and adapt to increasing network complexities and diversity. Motivated by these observations, the contributions of this paper are as follows:

Footnote 1: [https://github.com/google/ruy](https://github.com/google/ruy)

* We present the FADES variable-precision sparse-dense dataflow accelerator designed using a high-level synthesis approach and its integration into the TensorFlow Lite inference engine.
* We develop 8-bit integer and floating-point configurations in sparse and dense matrix processing modes and compare the performance with a systolic array hardware accelerator and optimised software alternatives.
* We propose a design methodology that combines high-level synthesis and dynamic function exchange to generate and deploy variable-precision neural accelerators.
* We release the designs open-source to promote further work in this field at: [https://github.com/eeilny/gemm.spmm](https://github.com/eeilny/gemm.spmm)

This paper is organized as follows: section 2 reviews state-of-the-art solutions for edge deployment, showcasing that current neural network hardware focuses on 8-bit precision, application-specific architectures and dense matrix operations. Section 3 motivates the significance of this work, highlighting which technical attributes are different from currently available hardware. Section 4 describes the proposed hardware architecture for high-performance dense and sparse tensor operations with variable-precision float/int support. Section 5 evaluates the performance mapped to a target edge device - the Zynq UltraScale+ MPSoC - comparing it with a systolic design, the optimized ARM library for low-precision tensor arithmetic RUY, and a reference C++ implementation. Section 6 presents the dynamically reconfigurable methodology and its integration with software-defined design frameworks and the TensorFlow Lite inference engine. Section 7 validates performance, power and energy consumption using different accelerator configurations integrated in TFLite against other state-of-the-art hardware targeting comparable edge devices. Finally, section 8 concludes this paper and proposes future work.

## 2 Related work and Motivation

### _Flexible-model Neural accelerators_

In this section we discuss neural accelerators that can support multiple models without hardware changes. Efforts in accelerating the execution of TensorFlow Lite models using flexible hardware have focused on deploying systolic architectures for tensor operations to obtain very high throughput. For example, Google offers a low-cost and low-power version of the Google TPU called the EdgeTPU [2], which can run dedicated neural networks with 8-bit precision. The systolic array in the EdgeTPU has 64 x 64 multiply-add cells obtaining 4 TOPS at 480MHz, and it is much smaller than the TPU cloud configurations. Layers with floating-point precision will run on an external CPU that acts as the host for the EdgeTPU device. Xilinx has also focused on inference, including support for TensorFlow, with the Xilinx DPU [3] unit. It is composed of a register configuration unit, a data controller and convolution computing modules optimized for the FPGA hardware resources. The original hardware is specialized for convolutional neural networks, although alternative architectures and hardware configurations are made available for other model types. The DPU hardware supports integer precisions of 4-bit, 8-bit and 16-bit. Xilinx DPU architectures are generally specialized for particular model types, and a vendor-specific compiler, optimizer and quantizer are needed in order to deploy TensorFlow models on the supported devices. The approach in this work is closer to the EdgeTPU than the Xilinx DPU since it can be applied to any TFLite model without any additional compilation or optimizations. While the EdgeTPU supports 8-bit precision exclusively, FADES supports integer precisions of 1, 2, 4 and 8 bits as well as single and half floating-point. In this paper, we focus on 8-bit and single floating-point precisions, which are compatible with current versions of the TensorFlow Lite inference engine that runs on the host processor.

### _Model-specific neural architectures_

In this section we review hardware architectures that require changes if a new model is to be deployed.
There is a significant body of research that has focused on creating tools and architectures targeting particular network models, such as simple FCNNs (fully connected neural networks) [4], CNNs (convolutional neural networks) [5] or LSTMs (long short-term memory networks) [6]. For example, in [7], a specialized architecture for LSTM-type networks on FPGAs with sparsified weights is presented. It works by determining at design-time the number of PEs (processing elements) that are required for each row of the sparse matrix-vector operation. A dual-pruning strategy is proposed for the different weight matrices involved in LSTM-type networks. The LSTM-type network is also addressed in [8], integrating stochastic computing principles into the LSTM model. The hardware uses a stochastic number representation, with multiplication performed by XNOR gates and addition by muxes or parallel counters. The FPGA demonstrator shows significantly lower power usage than reference hardware with conventional binary arithmetic without affecting overall accuracy. Examples of architecture specialization in FPGAs also include FINN [9] and hls4ml [10] - two well-developed frameworks supporting arbitrary-precision hardware. hls4ml has been shown to achieve very high performance on very low latency problems such as particle colliders. It uses quantization-aware training and pruning based on the QKeras library and exports the resulting configuration as a C++ Vivado HLS description ready to be implemented in an FPGA. The proposed example neural network in [10] consists of 3 dense layers and a softmax layer with a precision of 14 bits with 6 integer bits. Pruning forces many weights to zero and removes the hardware resources associated with these weights. Pruning does not affect latency because the non-zero weights define the longest path; therefore the depth of the network remains unchanged. Other task-specific neural network frameworks optimized for FPGA mapping include LUTNet [11] and LogicNets [12]. These frameworks advance the concept of using the FPGA LUTs to implement 2-input XNORs between weights and activations, as used in binary networks, in order to exploit the capabilities of the multi-input LUTs. The weights are baked into the logic function implemented in the LUT, and a variable number of LUT inputs are used for the binary activations. The number of inputs needed in the LUT is reduced with pruning techniques, resulting in better performance and lower logic complexity. In any case, specialization means that the architecture and final implementation are task-specific. In [13] different hardware architectures to accelerate CNNs in FPGAs are discussed, indicating how CPU+FPGA approaches can benefit from mapping to the FPGA only the parts where the FPGA is very efficient while more irregular computation is done on the CPU. This is the heterogeneous approach selected in this paper, which can also support large models without requiring large FPGA devices. Heterogeneous devices combine different processing resources such as CPUs, FPGAs and GPUs, enabling low-power edge-intelligence applications with improved integration, such as the low-latency hybrid data initialization proposed in [14] that improves how input and parameter network data are loaded by the different devices.
Accelerators focusing on sparse computation include [15], which shows similarities with our work: the sparse weights are stored in CSR format with shape-wise pruning on an equivalent UltraScale device, although the development is done in Verilog and not HLS. That accelerator achieves 990 GOPS on VGG16 with a DSP efficiency of 0.367 GOPS/DSP and uses different architectures depending on the CNN model. Heterogeneous devices have also been used for dense matrix processing, and in [16] a tool flow is proposed for dynamic precision quantization with 8 and 4-bit values for the weight matrix and a variable number of PEs in each layer depending on the layer precision. The design uses a classical line buffer optimized for the VGG network with a fixed-size 3x3 window. Our approach is not specialized for particular filter sizes, and it is designed to process any matrix shape used by the TFLite model format.

## 3 Research Motivation

In contrast to the research presented in section 2.2, the work presented in this paper avoids creating a specific architecture for a particular neural network model. Instead, it aims to be a direct hardware-based drop-in replacement of a software matrix multiplication library such as Google RUY, and it is conceptually similar to the Google EdgeTPU. The benefit is that while the EdgeTPU uses a systolic architecture and supports only dense matrix multiplication with 8-bit precision, FADES can work with multiple precisions and dense/sparse models efficiently, minimizing data stalls by streaming data into internal FIFOs and BRAMs and performing irregular data accesses only to the BRAM internal memory. FADES consists of an ensemble of dynamically reconfigurable accelerator overlays that work with models optimized using the pruning and quantization techniques that are part of the TensorFlow Model Optimization Toolkit [17]. In this way, we benefit from the extensive amount of prior research done by the TensorFlow community on sophisticated quantization-aware training and pruning. Therefore, the objective of the paper is not to propose novel training techniques to compensate for the possible degradation of accuracy due to sparsity. In our previous work [18], we investigated the accuracy effects of arbitrary quantization with LSTM and CNN layers, and we presented hardware for independent sparse and dense computations using small Xilinx Zynq SoC devices. The sparse architecture was an extension of matrix-vector hardware, which limited parallelism, and there was no specific support for TensorFlow Lite. In this paper, we target the larger Zynq UltraScale devices with a novel fused architecture that reuses hardware resources and is integrated into the inference engine of TFLite. We deploy dynamic reconfiguration with high-level synthesis to support floating-point and integer precisions. The accelerator performance and power consumption are compared with virtual reconfiguration alternatives and state-of-the-art software and hardware acceleration libraries. Table I compares selected technical attributes with the FADES proposal to highlight the differences.

\begin{table}
\begin{tabular}{c c c c c c c}
 & Model set & Target & Dynamic float-int8 & Sparse & Customization & Tensorflow integration \\ \hline
EdgeTPU [2] & Yes & ASIC & No & No & Low & Yes \\ \hline
DPU [3] & Yes & FPGA & No & Yes & Low & Yes \\ \hline
FINN [9] & No & FPGA & No & No & High & No \\ \hline
hls4ml [10] & No & FPGA & No & No & High & No \\ \hline
LUTNet [11] & No & FPGA & No & No & High & No \\ \hline
[16] & No & FPGA & No & No & High & No \\ \hline
[15] & No & FPGA & No & Yes & High & No \\ \hline
FADES & Yes & FPGA & Yes & Yes & High & Yes \\
\end{tabular}
\end{table} TABLE I: Hardware comparison
Motivated by these trends, we design the FADES dataflow architecture to accelerate TFLite matrix operations with support for asymmetric quantized activations, column-major matrix write, per-filter/per-axis bias values and the current TensorFlow Lite scaling specifications. FADES computes the tensor operations of the DNN models, representing up to 95% of the execution time for the tested neural networks. In FADES, we design the SPMM (SParse Matrix-Matrix) and GEMM (GEneral Matrix-Matrix) accelerators using a tile approach to deal with a \(B\) matrix composed of a large number of columns. We use a high-level synthesis (HLS) description to define the architecture. The challenge of supporting both GEMM and SPMM simultaneously is how to avoid unnecessary hardware replication and performance degradation in the HLS description. A top-level block diagram showing the architecture ports is shown in Fig. 1. The mode input sets the hardware in sparse or dense mode, the data ports move the data values for the input A, B and output C matrices, and the configuration ports control the TensorFlow Lite int8 scaling and clamping parameters, the optional bias and the matrix dimensions. The matrix shape being processed is defined at run-time with parameters \(N=A\)_rows_, \(M=A\)_columns_ and \(P=B\)_columns_.

Fig. 1: Top-level block diagram

Figure 2 shows the internal organization of the dataflow, consisting of 4 stages interconnected with streaming FIFOs for both GEMM and SPMM. All the dataflow stages are coded in HLS to obtain an efficient initiation interval (II) of 1. An initiation interval of 1 in high-level synthesis means that a new iteration of a loop can be started in each clock cycle, which is a critical requirement for high performance. In FADES, the streaming architecture means that a tile of the dense matrix is initially buffered inside the FPGA device; then the sparse matrix in CSR format is streamed from memory into the accelerator. As the sparse values are streamed, they are computed with multiple values of the dense matrix depending on the tile size. After the tile is processed, a new tile is loaded, and the streaming of the sparse data is performed again. This approach means that all irregular accesses are done to local BRAM memory and do not result in stalls or additional latency (a software-level sketch of this tiled scheme is given below). Stage 1 is responsible for loading a \(B\) matrix block with a column count that equals \(PEs*4\) bytes (each PE processes four 8-bit elements or one single-precision floating-point element) and a row count that equals the number of rows in the \(B\) matrix. It then starts streaming elements of the \(A\) matrix in raw format for GEMM, and in CSR (Compressed Sparse Row) format with _column_index_ and _non-zero_ values for SPMM. This means that in SPMM mode, the accelerator performs a variable number of reads of weight values and column indices per matrix. In GEMM mode, the number of reads is constant and defined by the row size multiplied by the number of rows in the weight matrix, while in SPMM it is defined by the number of non-zero elements present in the weight matrix. This does not represent a performance-limiting factor in the dataflow architecture because the READ stage is independent of the other stages, and it can run at full speed over all the data elements present in the \(A\) matrix for both GEMM and SPMM as long as there are no buffer underflows or overflows. Stage 2 is the main computing loop that instantiates the bulk of the DSP blocks. It aims at activating all PEs in parallel in each clock cycle.
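As a software-level illustration of the tiled scheme above, the following hypothetical C++ sketch mimics what Stages 1-2 achieve together: a \(B\) tile is held in a local buffer while \(A\) streams past once in CSR form, so the only irregular accesses hit the local tile buffer. The names, container types, and the float element type are illustrative assumptions, not the HLS source.

```
#include <vector>

// Tiled SPMM: C_tile += A (CSR) * B_tile, with B_tile buffered locally.
// On the FPGA, the inner j-loop maps to the PEs running in parallel and the
// CSR arrays arrive through streaming FIFOs rather than std::vector.
void spmmTile(int N,
              const std::vector<int>&   rowPtr,   // CSR row pointers of A (size N+1)
              const std::vector<int>&   colIdx,   // CSR column indices of A
              const std::vector<float>& val,      // CSR non-zero values of A
              const std::vector<std::vector<float>>& Btile,  // M x T tile of B
              std::vector<std::vector<float>>&       Ctile)  // N x T output tile
{
    const int T = static_cast<int>(Btile[0].size());  // tile width = number of PEs
    for (int i = 0; i < N; ++i)                       // stream A row by row
        for (int k = rowPtr[i]; k < rowPtr[i + 1]; ++k)   // each non-zero of A
            for (int j = 0; j < T; ++j)               // all PEs in one clock on HW
                Ctile[i][j] += val[k] * Btile[colIdx[k]][j];  // irregular read hits the local buffer only
}
```

Processing the next tile of \(B\) simply re-runs the same loop with a new buffered tile, which is why the CSR stream is read once per tile.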
All PEs can be activated in parallel for every \(B\) tile with a number of columns equal to or larger than the number of PEs specified by \(B\_WIDTH\_BLOCK\), the main parameter related to parallelism, which defines the number of PEs that work on columns of the activation matrix \(B\) in parallel. Typically, the last tile contains a number of columns lower than \(B\_WIDTH\_BLOCK\), and in this scenario, some of the PEs do not write to their output FIFOs. This enables support for arbitrary matrix shapes that are not a multiple of the tile size. Stage 3 uses the values of \(QM\) (quantization multiplier) and \(shift\) to implement the scaling strategy as per the TensorFlow Lite specifications for int8 values. It reads the raw values produced by stage 2 and writes the corresponding scaled values to stage 4. In float mode, stage 3 forwards the raw values directly to stage 4. Finally, stage 4 reads these values and writes them to the correct memory addresses to properly construct the output matrix C, assembling the multiple tile outputs. The accelerator offers multiple compile-time configuration options that are summarized in Table II. The columns in the table represent the number of cores (CCs), the number of processing elements per core (PEs), the number of parallel rows per core (PRs) that sets the number of rows of A that are processed in parallel, the supported modes (SPMM only, GEMM only or FUSED, which enables both SPMM and GEMM), transpose output matrix enabled/disabled (TRANS) and scaling enabled/disabled (SCALE). SCALE ensures that the results follow the TFLite specification using per-filter quantized multiplier parameters. Per-filter quantized multiplier parameters are part of the recent TFLite versions, and they enable different scaling parameters for each filter to improve accuracy. TRANS ensures that the result matrix is written in a column-major format, as is currently required by the TensorFlow Lite inference engine. The CC option instantiates multiple independent cores in the device and splits the computation evenly among them. The splitting can be done either with multiple blocks of the \(A\) matrix and a single \(B\) matrix, or with a single \(A\) matrix and multiple blocks of the \(B\) matrix. The first option divides the sparse matrix \(A\) into multiple blocks, and it is the option used in this paper to optimize the read of \(A\) values and column indexes over multiple ports.

\begin{table}
\begin{tabular}{c c c c c c c}
CCs & PEs & PRs & SPMM & GEMM & TRANS & SCALE \\ \hline
1 to 4 & 2 to 256 & 1 to 2 & 0 to 1 & 0 to 1 & 0 to 1 & 0 to 1 \\
\end{tabular}
\end{table} TABLE II: Accelerator configuration options

### _Float-int8 hardware support_

In the FADES design, each data word is 32 bits wide and is formed from weight/activation values whose width depends on the required precision. In our previous work, we show how the 32-bit value can be interpreted as multiple 2, 4 or 8-bit int values [18].
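To make this word packing concrete, here is a small, hypothetical plain-C++ stand-in for the idea (the HLS kernels in Listings 1 and 2 below do the equivalent with ap_int and .range()): one 32-bit word is either four sign-extended int8 lanes or one reinterpreted float. The helper names are illustrative.

```
#include <cstdint>
#include <cstring>

// Extract one sign-extended int8 lane from a packed 32-bit word (z = 0, 8, 16, 24).
static inline int8_t lane(uint32_t word, int z) {
    return static_cast<int8_t>((word >> z) & 0xFFu);
}

// Float mode: reinterpret the same 32-bit word as a single float value.
static inline float asFloat(uint32_t word) {
    float f;
    std::memcpy(&f, &word, sizeof f);   // memcpy avoids strict-aliasing issues
    return f;
}

// int8 mode: four multiply-accumulates per word pair, mirroring the inner
// loop of the int8 compute kernel (including the activation zero point).
int32_t mac4(uint32_t a_word, uint32_t b_word, int32_t zero_point_rhs) {
    int32_t acc = 0;
    for (int z = 0; z < 32; z += 8)
        acc += static_cast<int32_t>(lane(a_word, z)) *
               (static_cast<int32_t>(lane(b_word, z)) - zero_point_rhs);
    return acc;
}
```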
In this paper, we focus on analyzing the _int8_ and float modes that are directly compatible with the TFLite inference engine running on the PS. Although it is generally accepted that 8-bit precision is enough for deep-learning inference, floating-point precision may be required by some layers within the network or in certain types of networks such as recurrent neural networks [19]. Floating-point precision is also useful when the application requires network re-training, not just inference, to follow changes in the statistical properties of the input data. Code snippets for the compute function are shown in Listings 1 and 2 for the _float_ and _int8_ precisions, respectively. Note that the _int8_ listing shows a partial accumulation of the four results obtained in one PE, while the float listing does not do this accumulation since there is only one result per PE. In both cases, an outer loop (not shown) performs the accumulation of results over all the rows of the activation matrix \(B\).

```
for (int j = 0; j < B_WIDTH_BLOCK; j++) {
#pragma HLS UNROLL
#pragma HLS PIPELINE
    ap_int<32>  b_block_int = b_block[b_row][j];
    ap_uint<32> A_val       = a_value;
    FTYPE a_val_float = *(FTYPE*)&A_val;        // reinterpret the 32-bit word as a float
    FTYPE b_val_float = *(FTYPE*)&b_block_int;  // same reinterpretation for the B value
    float rhs_float   = float(zero_point_rhs);
    acc[j] = a_val_float * (b_val_float - rhs_float);
} // j loop
```
Listing 1: Float compute kernel example

```
for (int j = 0; j < B_WIDTH_BLOCK; j++) {
#pragma HLS UNROLL
#pragma HLS PIPELINE
    for (int z = 0; z < DTYPE_LENGTH; z += 8) {
        ap_int<8> A_val = a_value.range(z + 7, z);
        ap_int<8> B_val = b_block[b_row][j].range(z + 7, z);  // lane select reconstructed from context
        acc[j] += A_val * (B_val - zero_point_rhs);
    }
} // j loop
```
Listing 2: Int8 main compute kernel

To maximize hardware reuse, the code is written to minimize the differences that depend on the data precision. Data types in the interface and buffer memory are mainly defined as _uint32_, and only in the compute kernel is pointer reinterpretation used to interpret the _uint32_ value as four _int8_ components or as a single floating-point value, as conceptually shown in Listing 1, where _A_val_ is a _uint32_ value and _a_val_float_ a float value. The operations done on _a_val_float_ will be standard floating-point operations. To obtain high performance in float mode, it is not enough to just interpret the 32-bit word as a floating-point value. The problem is that the latency of the floating-point ADD (FADD) used for accumulation in the Zynq UltraScale is higher than one. Consequently, to achieve one addition per clock cycle, the pipeline inside the FADD must be filled with interleaved operations. On the other hand, the latency of the integer addition is one clock cycle, so the C synthesis tool can achieve an initiation interval of one without interleaved operations. In order to achieve an equivalent initiation interval of one in the float kernel, we interleave FADD_LATENCY partial accumulations onto the same core, each completing every FADD_LATENCY cycles. The C synthesis tool recognizes that it can schedule the partial accumulations onto a single adder core on alternating cycles (the idea is sketched below).
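The following minimal C++ sketch shows this interleaving idea in software form, assuming a pipelined adder of latency FADD_LATENCY = 6 (the value reported below for 200 MHz). It illustrates the scheduling trick only and is not the HLS source.

```
// With a floating-point adder of latency L, a single accumulator would stall
// L-1 cycles per addition. Keeping L independent partial sums in flight lets
// one FADD issue every cycle; HLS maps all chains onto one adder core.
float interleaved_sum(const float* x, int n) {
    const int L = 6;                    // FADD_LATENCY at 200 MHz per the text
    float partial[L] = {0.0f};          // L independent accumulation chains
    for (int i = 0; i < n; ++i)
        partial[i % L] += x[i];         // chain i%L is ready again after L cycles
    float total = 0.0f;
    for (int k = 0; k < L; ++k)         // final reduction of the partial sums
        total += partial[k];
    return total;
}
```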
The optimal value of FADD_LATENCY increases with the target frequency and has a value of 6 at 200 MHz, which is the frequency selected for all the implementations running on the Zynq UltraScale+ device. The resulting architecture uses 4 DSP48 blocks per 32-bit float operator and 1 DSP48 block per _int8_ operator, resulting in a similar number of DSP48 blocks for the 32-bit float and 32-bit int8 configurations. Table III shows the complexity of the different configurations as follows. The first number in the pair represents the number of cores, which is always 1 in this case. The second one is the number of processing elements, which can be 32 or 128. It must be noticed that the number of DSP blocks in the float and _int8_ versions shows some variation due to the presence of the scaling module in the _int8_ configuration and the addition of interleaved partial results in the float configuration. The third row in this table shows the design complexity when supporting the int8 and float configurations simultaneously, using multiplexers to select whether the float or _int8_ datapath is active. The pointer reinterpretation in the source code keeps the code for both configurations similar and maximizes the amount of logic reuse. In any case, the internal organization of the DSP48 blocks for 1 float and 4 int8 operations is different, so when both configurations are enabled, the compiler needs to duplicate the DSP48 usage. This additional resource utilization means that, for example, the (1,128) float-int8 configuration fails to meet timing at 200 MHz, although the individual configurations meet timing without issues.

\begin{table}
\begin{tabular}{l c c c c}
Configuration & LUTs(K) & FFs(K) & BRAM\_18Ks & DSP48Es \\ \hline
(1,32) float & 90.9 & 120.5 & 186 & 181 \\ \hline
(1,32) int8 & 50.8 & 56.9 & 189 & 161 \\ \hline
(1,32) float-int8 & 96.8 & 121.6 & 187.5 & 325 \\ \hline
(1,128) float & 198.2 & 245.9 & 552 & 661 \\ \hline
(1,128) int8 & 67.2 & 78.7 & 571 & 545 \\ \hline
(1,128) float-int8 & 221.8 & 260.8 & 564 & 1189 \\ \hline
(32x32) systolic int8 & 68.2 & 27.0 & 1041 & 1031 \\
\end{tabular}
\end{table} TABLE III: Configuration complexity comparison

The main advantage of dynamic function exchange is that both configurations do not need to be deployed simultaneously. In section 6 we will explore how dynamic function exchange reduces resource usage and improves power consumption (since only the logic elements that participate in the computation need to be configured) and performance (since a core with more processing elements can be created without having to lower the clock frequency).

Fig. 2: Fused GEMM/SPMM architectural components

In summary, the whole architecture is based on a DATAFLOW of the different functions or stages. Then, each stage uses pipelining, which can be considered a fine-grained dataflow (or, equivalently, DATAFLOW can be considered a coarse-grained PIPELINE). The pipelining of the FADD accumulation is problematic when the operators are floating-point values because the latency of the floating-point add is longer than one, so the interleaving technique described in this section is used to achieve an overall latency of one for FADDs. The side effect is that the floating-point and integer logic are significantly different, and dynamic reconfiguration will be used to switch between the modes.

## 5 Initial Performance and Functional Validation

### _Experimental setup_

In this section, we validate the performance characteristics of the FADES core compared with high-performance systolic hardware [20]. This initial performance evaluation considers a stand-alone implementation before its integration into the TFLite framework, so a fair comparison with [20] can be made, while section 7 will analyze the performance obtained after the TFLite integration. Xilinx SDSoC 2018.3 is used for both [20] and the proposed design with a clock frequency of 200 MHz for the programmable logic, while the processing system runs at the standard frequency of 600 MHz.
To validate the correct functionality of the designs, we have created a C test bench compiled with the RUY multiplication library. The matrices are initialized with random data, and the outputs generated by the hardware are compared with the software results produced by the RUY library to ensure correctness. The systolic architecture has been implemented on the same ZCU102 board equipped with the Zynq UltraScale+ device running Ubuntu 16.04. The systolic core has 32x32 = 1024 PEs, and each PE uses one DSP block as per the original design. We also consider as a comparison point the high-performance multiplication library RUY 3, developed by Google engineers, which focuses on covering the matrix multiplication needs of neural network inference engines. RUY is used in TFLite, as an option, on the ARM CPU architecture. It is designed to achieve high performance not just on very large matrix sizes but on the sizes and shapes of matrices relevant for TensorFlow Lite applications. RUY uses assembly code extensively to make use of ARM NEON SIMD optimizations, and it supports both floating-point and 8-bit integer quantized matrices. It replaces Eigen and gemmlowp, achieving better performance. An alternative to RUY is XNNPACK, but XNNPACK focuses exclusively on float operations, so it is not applicable in this work. XNNPACK also relies on the ARMv8.2 instruction set, which is not available in ARMv8 processors such as the Cortex-A53 available in the Zynq UltraScale+ MPSoC used in this work. The systolic hardware only supports integer precision, while RUY can be configured to work with float and int8 precision. We initially consider the dense mode with square matrices since neither the systolic hardware nor RUY can run in sparse mode.

Footnote 3: [https://github.com/google/ruy](https://github.com/google/ruy)

### _Initial performance results_

Table IV shows these results, including a reference C++ implementation, after validating the correct output of each test point. The table shows that FADES with a (1,128) configuration in int8 mode clearly outperforms the systolic hardware and RUY, especially as the size of the matrix being processed increases. This is significant since the systolic configuration contains 32x32 cells with twice the amount of DSP blocks. It is also clear that the C++ implementation is much slower than the software-optimized RUY. Overall, with a matrix size of 1024x1024, FADES (1,128)/int8 is 18x faster than RUY in int8 mode and FADES (1,128)/float is 9x faster in float mode. Notice that the FADES accelerator includes the per-filter scaling hardware needed in TFLite, so direct comparisons with other GEMM hardware that simply performs matrix multiplication must take this into account. The (1,32) configuration uses 4x fewer DSP blocks, and it is approximately 4x slower than the (1,128) configuration for the larger matrices, although this performance difference is diluted for the smaller matrices, as expected.

\begin{table}
\begin{tabular}{l c c c c}
 & 128 & 256 & 512 & 1024 \\ \hline
Systolic int8 & 0.29 & 1.27 & 5.34 & 21.92 \\ \hline
RUY int8 & 1.53 & 6.14 & 41.03 & 302.31 \\ \hline
C++ int8 & 547.4 & 4k & 35k & 314k \\ \hline
1,128 int8 & 0.307 & 1.001 & 3.618 & 16.58 \\ \hline
1,32 int8 & 0.365 & 1.575 & 8.65 & 54.78 \\ \hline
RUY float & 1.94 & 14.73 & 64.63 & 500.02 \\ \hline
C++ float & 583.48 & 4651.59 & 38923.8 & 423516 \\ \hline
1,128 float & 0.089 & 3.28 & 12.88 & 56.23 \\ \hline
1,32 float & 0.921 & 4.761 & 29.33 & 200.5 \\
\end{tabular}
\end{table} TABLE IV: float/int8 performance with square matrices (ms)

The lower resource usage of the (1,32) configuration enables more cores to be mapped to the Zynq UltraScale device with the same DSP usage. Table V shows a performance comparison for a large matrix, comparing the (1,128) solution (one core with 128 PEs) and the (4,32) solution (4 cores with 32 PEs each), using the sparse mode with 3 levels of sparsity: low (~50% sparse), medium (~70% sparse) and high (~90% sparse). Note that configurations (1,128) and (4,32) contain the same number of processing elements (i.e. 1x128 = 4x32 = 128 PEs).

\begin{table}
\begin{tabular}{l c c c c}
 & dense (ms) & sparse low & sparse medium & sparse high \\ \hline
1,128,1 int8 & 32.77 & 19.75 & 13.51 & 12.44 \\ \hline
4,32,1 int8 & 28.54 & 18.68 & 9.3 & 4.95 \\
\end{tabular}
\end{table} TABLE V: single vs. multi-core performance (ms)
It is clear that for dense and low levels of sparsity, the multi-core configuration does not offer significantly better performance. The reason is that for dense matrices the compute intensity is high, and the bottleneck is the compute power defined by the number of PEs, which is equivalent in both configurations. On the other hand, as the sparsity increases, the performance bottleneck moves to the data movement, and the additional ports available in the multi-core configuration deliver noticeably better performance. The smaller cores used in the (1,32) configuration also favour introducing a DFX strategy compared with reconfiguring the whole device. The performance and power advantages of the DFX methodology using (1,32) cores are explored in the following sections.

## 6 DFX Methodology

The DFX methodology is based on the original work by Xilinx [21] for implementing dynamically reconfigurable systems based on _Pblocks_ and black boxes. We extend it to support software-defined systems with encrypted IP cores such as floating-point operators. A _Pblock_ defines a floor-planned region associated with a reconfigurable partition, and each reconfigurable partition has several reconfigurable modules or variants. In this system, the reconfigurable partition has two variants for _int8_ and _floating-point_. The black boxes are used to specify the reconfigurable partitions in the static part of the design. In this work, we do not use the DFX Wizard available in Vivado because it works by selecting the whole IP core as a reconfigurable partition in the Block Diagram. In our case, we work at a finer granularity, so we specify the reconfigurable regions in the RTL generated by the high-level synthesis tool. The idea is to select only the RTL that corresponds to the functional unit that needs to be reconfigured to switch between _int8_ and _floating-point_. A description of the alternative methodology proposed and followed is shown in Fig. 3, with numbered steps described as follows:

Fig. 3: High-level synthesis DFX methodology

* Step 1: The HLS input code goes through initial C synthesis in Xilinx OOC (Out-of-Context) mode, where each IP core is synthesised independently. The RTL code for the IP is obtained in the corresponding repository.
* Step 2: The RTL instantiation template for the IP core is obtained in Vivado after generating the output products.
The instantiation template and the IP core RTL from the repository are the inputs that define the variants of the IP for the DFX flow (i.e. _int8_ or _float_).
* Step 3: If this RTL IP core instantiates floating-point cores, they need to be generated using the Vivado IP integrator to avoid black-box errors during the subsequent implementation stage. This is because the floating-point library is encrypted and cannot be used directly as an additional RTL file.
* Step 4: The standard DFX flow reads the RTL, performs RTL synthesis of each variant of the IP core and generates a DCP checkpoint for each of these variants, which are then used by the rest of the components of the DFX implementation flow.

Steps 5 and 6 in Fig. 3 correspond to the creation of Pblocks in the floorplanner, the declaration of black boxes for the dynamic parts of the design, and the static synthesis. Finally, timing constraints are defined in step 7, and the standard DFX implementation flow (step 8) places, routes, and generates bitstream files for each of the variants as partial bitstreams, in addition to the complete bitstreams resulting from linking the static and dynamic parts of the design. The bitstream files (with extension .bit/.bin) can be used by the processor configuration port (PCAP) available in Zynq devices, so the processing system can load complete or partial files. In addition to this hardware flow, the software flow generates a hardware library to invoke hardware execution from the processing system. The API is identical for each of the hardware variants, so a single hardware interface library can be linked with the rest of the software application. To integrate DFX in TensorFlow Lite, we make use of the selective layer quantization feature available since TensorFlow 2.7. During inference, the host detects if the layer has scaling parameters available, indicating the presence of a quantized layer, and consequently checks and configures the programmable logic with the correct IP if necessary. It is essential to maintain the area being reconfigured in an isolated state to avoid unpredictable activity affecting the static part while partial reconfiguration occurs. In this work, we hold the core area and interface under reset by mapping the corresponding clock/reset physical registers available in the PS to a userspace area using the Linux memory-mapping function mmap. This is a simpler alternative to using Partial Reconfiguration (PR) Decoupler IPs to provide logical isolation capabilities for PR designs, given the high number of interface signals in HLS designs.

## 7 FADES validation in TFLite

In this section, we analyze the performance, power and energy characteristics after the integration of FADES into the TFLite framework as validation parameters. The core acts as a direct replacement for calls to the RUY library. The TFLite version used is 2.7, and it has been compiled from sources on the Zynq UltraScale+ device. In addition to DFX (Dynamic Function Exchange), we consider Virtual Function Exchange (VFX) and Full Reconfiguration (FR) for comparison purposes. In VFX, the hardware implements both _float_ and _int8_ modes side by side, and multiplexers are used to select which path is active.
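The following is a hypothetical sketch of the per-layer dispatch logic just described: before offloading a layer, the host checks whether it is quantized and, if the currently loaded partial bitstream does not match, triggers a DFX reconfiguration. The function names and stub bodies are illustrative placeholders, not the actual TFLite or driver API.

```
#include <cstdio>

enum class Mode { INT8, FLOAT };

// Placeholder stubs: a real system would write the partial .bit/.bin file
// through PCAP and toggle the PS clock/reset registers mapped with mmap.
static void loadPartialBitstream(Mode m) { std::printf("load %s core\n", m == Mode::INT8 ? "int8" : "float"); }
static void holdCoreReset(bool hold)     { std::printf("reset %s\n", hold ? "asserted" : "released"); }

// Pick the precision for a layer and reconfigure the PL only when needed:
// a layer with scaling parameters is treated as quantized (int8).
void prepareAccelerator(bool layerHasScalingParams, Mode& loaded) {
    Mode wanted = layerHasScalingParams ? Mode::INT8 : Mode::FLOAT;
    if (wanted != loaded) {
        holdCoreReset(true);            // isolate the region during DFX
        loadPartialBitstream(wanted);   // ~30 ms partial reconfiguration
        holdCoreReset(false);
        loaded = wanted;
    }
}
```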
The VFX mode means that the hardware requirements increase, but switching between modes can be done in one clock cycle. The hardware complexity of the VFX and DFX modes is compared in Table VI, which shows that the additional LUTs/FFs needed in VFX are modest, but the DSP count almost doubles. The additional hardware requirements mean that deploying VFX will limit the number of parallel cores possible and also increase power/energy compared with DFX.

\begin{table}
\begin{tabular}{l c c c c}
Configuration & LUTs(K) & FFs(K) & BRAM\_18Ks & DSP48Es \\ \hline
float DFX & 90.9 & 120.5 & 186 & 181 \\ \hline
int8 DFX & 50.8 & 56.9 & 189 & 161 \\ \hline
float-int8 VFX & 96.8 & 121.6 & 187.5 & 325 \\
\end{tabular}
\end{table} TABLE VI: DFX/VFX hardware complexity comparison

The FR mode corresponds to a full bitstream reconfiguration to replace the _int8_ hardware with the floating-point hardware. The bitstream size in the considered device is 25 MBytes, and we have measured the time needed to perform this full reconfiguration at 200 ms. On the other hand, the bitstream size for the partial core is 9 MB, and the partial reconfiguration time is around 30 ms. Consequently, there will be close to a factor of 7x higher energy and time overhead using full reconfiguration. In addition to this time advantage of DFX over FR, DFX cores not being reconfigured can continue to operate, so the system does not need to completely hold its operation while the full reconfiguration takes place.

### _FADES performance analysis_

Figure 4 shows the performance of a single-core 32-PE (1,32) configuration compared with the performance provided by RUY on layers extracted from the Mobilenet neural network. It is clear that larger and wider layers benefit more from the acceleration for both _int8_ and float modes, but the hardware can speed up all Mobilenet layers. Configurations with multiple FPGA cores can reduce this execution time significantly, as seen in section 5 for large layers. The _int8_ configuration obtains higher speedups than the floating-point hardware, which can be explained by the good performance of the float RUY software, which is not 4x slower than RUY _int8_. RUY uses assembly code to exploit the NEON SIMD accelerator for both integer and float operations. On the other hand, the FPGA float hardware is typically 4x slower than the _int8_ hardware since each 32-bit word carries four _int8_ values but only one float value.

Fig. 4: (1,32) performance analysis in Tensorflow Lite

The sparse configuration uses a high sparsity level of 90% and shows significantly better acceleration than the dense configuration. Sparsification is done using the current capabilities of the TensorFlow Model Optimization Toolkit during training, and only one layer is sparsified at a time to minimize the effects on accuracy. We select the layers for pruning in Tensorflow using model cloning with a selective pruning function. We use a polynomial decay strategy and progressively increase pruning from 50% to the different maximum levels. The paper focuses on showing the hardware capabilities at accelerating sparse layers; the effects of sparsification on accuracy are considered out of scope. The acceleration obtained by the _int8_ hardware is higher than that of the floating-point hardware. It must be noted that, thanks to the silicon reuse capability enabled by DFX, it is not necessary to have the float and _int8_ modes configured simultaneously. This means that configurations with up to 4 cores are possible, resulting in better performance than with the non-reconfigurable alternative (VFX), in which the number of cores would have to be reduced to accommodate the hardware supporting both float and _int8_ modes. In Table VII we estimate the performance and hardware details of FADES compared with other embedded hardware available for sparse and dense processing reviewed in section 2.
We present results with one-core (x1) and four-core (x4) configurations and report two values of complexity and performance depending on whether the active mode is int8 or float. The table shows that the levels of DSP efficiency are comparable to the best values from the literature, while the raw GOPS is proportional to the number of DSP blocks used by the configuration. The main difference and advantage of FADES compared with this hardware is the combination of dense-sparse modes and the switching between int8 and floating-point precision via dynamic reconfiguration. An additional difference is the integration in TFLite with the support of the scaling modes and asymmetric activations that are part of the TFLite specification.

\begin{table}
\begin{tabular}{l l l l l l l l}
 & [15] & [15] & [16] & [16] & [20] & ours x1 & ours x4 \\ \hline
Device & XCZU9EG & XCZU9EG & XC7Z045 & Nvidia TK1 & XCZU9EG & XCZU9EG & XCZU9EG \\ \hline
Type & Sparse & Sparse & Dense & Dense & Dense & Sparse/Dense & Sparse/Dense \\ \hline
Frequency & 200 & 200 & 150 & 852 & 200 & 200 & 200 \\ \hline
Precision & 16bit & 8bit & 16bit & float & 8bit & 8bit/float & 8bit/float \\ \hline
DSPs & 1350 & 2520 & 780 & & 1031 & 161/181 & 644/724 \\ \hline
LUTs & 390k & 405k & 182k & & 142k & 50.8k/90.9k & 203k/363k \\ \hline
BRAMs & 1460 & 1460 & 486 & & 528 & 189/186 & 700/688 \\ \hline
DSP efficiency (Gops/DSP) & 0.36 & 0.39 & 0.23 & & 0.18 & 0.47/0.11 & 0.47/0.11 \\ \hline
Performance (Gops) & 495 & 990 & 187 & 76 & 195 & 76/20 & 303/75 \\
\end{tabular}
\end{table} TABLE VII: Comparison with other embedded hardware

### _FADES power and energy analysis_

The ZCU102 board includes shunt resistors for each of the power rails in the FPGA. These shunts are connected to Texas Instruments INA226 devices that can be used to measure voltage and current. These devices are connected to the I2C bus, which is accessible through the _/sys/class/hwmon_ interface in the Xilinx Petalinux kernels. We create an application that reads the power values corresponding to the PS and PL and produces a trace of power consumption values, obtaining one power sample per 6 ms due to the limitations of the power measuring setup. Power usage in the PL is evaluated in Figure 5, which shows that power is low during reconfiguration since no computation is taking place in these runs and the PS is only loading the configuration memory. PS power remains largely constant at around 2 Watts during all these runs. The most power-intensive modes are float-int8 and int8-float, which correspond to the VFX hardware with both modes configured, computing in float or int8 mode, respectively. In DFX, the power of the float-mode hardware is higher than that of the int8 hardware, and this is mainly due to the additional logic used by the pipeline to hide the higher latency of the floating-point operations, as seen in Table III. Note that the additional logic used by the floating-point accelerator enables an equivalent latency of one clock cycle for both int8 and float operands, as seen in section 4.2.

Fig. 5: (1,32) power analysis for different configurations

There is also a clear decrease in power in the sparse configurations compared to dense. As the amount of sparsity in the input \(A\) matrix increases, the instantaneous power requirements drop. This can be attributed to the fact that sparse computation in the PL performs a lower number of ops per byte fetched from memory, resulting in a lower DSP utilization. To verify this DSP utilization, we have estimated the theoretical maximum throughput of the FADES accelerators, assuming 100% saturation of the core data movement and DSP logic. In the dense-mode runs, we measure over 90% saturation of the data movement and arithmetic pipeline. This utilization reduces as the sparsity level increases, with a bottleneck in the data movement that reduces the utilization of the DSP blocks. This can be explained by the fact that although sparsity reduces the amount of data that needs to be moved for the sparse matrix \(A\), the dense matrix \(B\) still needs to be moved completely, even with a 100% sparse input (a back-of-the-envelope model of this effect follows below).
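The following sketch quantifies why the compute intensity, and hence the DSP utilization, drops with sparsity. It assumes 8-bit values, 32-bit CSR column indices, and counts a multiply-accumulate as two operations; these are illustrative assumptions for the model only, not measured parameters from the paper.

```
// Ops per byte of an SPMM where A is sparse and B is dense: the CSR stream
// of A shrinks as sparsity s grows, but B must always be moved in full.
double opsPerByte(double N, double M, double P, double s) {
    double nnz    = (1.0 - s) * N * M;   // surviving non-zeros of A
    double bytesA = nnz * (1.0 + 4.0);   // int8 value + assumed 32-bit column index
    double bytesB = M * P;               // full int8 B, moved regardless of s
    double ops    = 2.0 * nnz * P;       // one MAC (2 ops) per non-zero per B column
    return ops / (bytesA + bytesB);
}
// Example with N = M = P = 1024: s = 0 gives ~341 ops/byte, while s = 0.9
// gives ~137, so the accelerator becomes increasingly data-movement bound.
```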
This lower hardware utilization is visible as the power drop in Figure 5. Finally, Figure 6 compares the energy requirements of the Mobilenet layer (1024x1024x49) for the different configurations, together with the dynamic reconfiguration energy cost. The figure shows that the most energy-intensive configurations are VFX running dense calculations. The DFX dense configurations maintain the execution time of VFX (at the same number of PEs), but the logic reduction results in lower energy. The sparse DFX configuration benefits from the reduction in both execution time and power consumption, resulting in the most energy-efficient run. The int8 configuration also benefits from lower power consumption and lower execution time compared with float. The figure also shows that the energy requirements of the dynamic reconfiguration process between the float and int8 hardware are not insignificant, but for larger networks with tens of layers they would be acceptable if the reconfiguration rate is not very high, for example, when the float hardware is only used during training or for groups of layers over time.

Fig. 6: (1,32) energy analysis for different configurations

## 8 Conclusions and Future Work

The following conclusions can be drawn from the research performed in this paper:

* Combining sparse and dense modes in a single architecture enables the execution mode to be selected at the layer level with minimal overheads. Layers can be prepared to run in sparse mode if, during training, it is considered that this mode does not affect accuracy negatively. The sparse mode is faster than the dense mode even at low sparsity levels of around 50%, thanks to its dataflow architecture that reads CSR values and indices in parallel.
* Dynamic function exchange can be deployed to optimize the DSP blocks used for the different TFLite _int8_ and floating-point precisions. This hardware optimization enables better performance by mapping more cores to a single device, and it also reduces power by avoiding having both pipelines configured simultaneously.
* The sparse mode can outperform the dense mode for both float and _int8_ precisions, and the dense hardware is also significantly faster than the software-optimized dense RUY library and a systolic implementation in the same technology.
* The dynamic reconfiguration time has been measured at 30 ms, which means that it is too slow to enable reconfiguration on a layer-per-layer basis. A more suitable approach would be to deploy dynamic reconfiguration when switching between inference and training modes or when switching the network model.
* The accelerator needs to buffer a tile of the dense matrix with a width that depends on the number of processing elements and a depth that equals the number of rows. For very large matrices there might not be enough BRAM space for all the rows, and in these cases tiling at the software level would be needed.

There are several research aspects that can be addressed in future work:

* In this research, we have focused on floating-point and _int8_ precisions because they are natively supported in Tensorflow Lite. Support for 16-bit floats is being introduced, and other research such as hls4ml [10] has explored sub-byte precisions such as binary, ternary and quad, although these are not currently part of the TFLite inference engine. DFX can potentially become more valuable for avoiding resource overheads as the number of supported arithmetic precisions in TFLite increases.
* We have validated the functionality and benefits of DFX using a single core. It will be interesting to further explore multi-core configurations in which the cores not being reconfigured remain active and perform useful computations.
* In the current version, the configuration of the accelerator is done manually by the designer. A scheduler that is part of the inference engine running on the host processor and issues reconfiguration commands to the accelerator is needed. This scheduler would need to use a prediction model to decide the best precision and sparsity level for each layer based on accuracy and performance measurements.
* Finally, exploring the application of the FADES accelerator with different precisions for run-time training and for other network types such as transformers and recurrent networks is part of the future work.

## Acknowledgments

This research was partially supported by the Royal Society Industry fellowship INF\(\backslash\)R2\(\backslash\)192044 MINET, EPSRC HOPPWARE EP\(\backslash\)RV040863\(\backslash\)1, the Leverhulme Trust international fellowship IF-2021-003 and by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation.
2307.11210
Out-of-Order Sliding-Window Aggregation with Efficient Bulk Evictions and Insertions (Extended Version)
Sliding-window aggregation is a foundational stream processing primitive that efficiently summarizes recent data. The state-of-the-art algorithms for sliding-window aggregation are highly efficient when stream data items are evicted or inserted one at a time, even when some of the insertions occur out-of-order. However, real-world streams are often not only out-of-order but also bursty, causing data items to be evicted or inserted in larger bulks. This paper introduces a new algorithm for sliding-window aggregation with bulk eviction and bulk insertion. For the special case of single insert and evict, our algorithm matches the theoretical complexity of the best previous out-of-order algorithms. For the case of bulk evict, our algorithm improves upon the theoretical complexity of the best previous algorithm for that case and also outperforms it in practice. For the case of bulk insert, there are no prior algorithms, and our algorithm improves upon the naive approach of emulating bulk insert with a loop over single inserts, both in theory and in practice. Overall, this paper makes high-performance algorithms for sliding window aggregation more broadly applicable by efficiently handling the ubiquitous cases of out-of-order data and bursts.
Kanat Tangwongsan, Martin Hirzel, Scott Schneider
2023-07-20T19:52:45Z
http://arxiv.org/abs/2307.11210v1
# Out-of-Order Sliding-Window Aggregation with Efficient Bulk Evictions and Insertions

###### Abstract.

Sliding-window aggregation is a foundational stream processing primitive that efficiently summarizes recent data. The state-of-the-art algorithms for sliding-window aggregation are highly efficient when stream data items are evicted or inserted one at a time, even when some of the insertions occur out-of-order. However, real-world streams are often not only out-of-order but also bursty, causing data items to be evicted or inserted in larger bulks. This paper introduces a new algorithm for sliding-window aggregation with bulk eviction and bulk insertion. For the special case of single insert and evict, our algorithm matches the theoretical complexity of the best previous out-of-order algorithms. For the case of bulk evict, our algorithm improves upon the theoretical complexity of the best previous algorithm for that case and also outperforms it in practice. For the case of bulk insert, there are no prior algorithms, and our algorithm improves upon the naive approach of emulating bulk insert with a loop over single inserts, both in theory and in practice. Overall, this paper makes high-performance algorithms for sliding window aggregation more broadly applicable by efficiently handling the ubiquitous cases of out-of-order data and bursts.

+ Footnote †: This paper is an extended version of our VLDB 2023 paper “Out-of-Order Sliding-Window Aggregation with Efficient Bulk Evictions and Insertions”. It adds an appendix with proofs, pseudocode, and examples that did not fit in the page limit.

## 1. Introduction

In data stream processing, a sliding window covers the most recent data, and sliding-window aggregation maintains a summary of it. Sliding-window aggregation is a foundational primitive for stream processing, and as such, is both widely used and widely supported. In various application domains, stream processing must have low latency; for example, late results can cause financial losses in trading or harm property and lives in security or transportation. Furthermore, streaming data often arrives out-of-order, but new data items must be incorporated into a sliding window at their correct timestamps, and the aggregation may not be commutative. Finally, data streams do not always have a smooth rate: in the real world, data items often enter and depart sliding windows in bursts. When streaming data is bursty, sliding-window aggregation needs to support efficient bulk evictions and insertions to keep latency low. In other words, it needs to evict or insert a bulk of \(m\) data items faster than it would take to evict or insert them one by one, lest it incur a latency spike of \(m\times\) that of a single operation.

Bulk evictions are common in time-based windows, where the arrival of one data item at the youngest end of the window can trigger the eviction of several data items at the oldest end. For example, consider a window of size 60 seconds, with data items at timestamps _[0.1, 0.2, 0.3, 0.4, 0.5, 10, 20, 30, 40, 50, 60]_ seconds. If the next data item to be inserted has timestamp _61_, the window must evict the items at timestamps _[0.1, 0.2, 0.3, 0.4, 0.5]_. Since these are \(m=5\) items, evicting them one by one would incur a \(5\times\) latency spike. While a small bulk (e.g., \(m=5\)) is harmless, bursts can result in \(m\) in the thousands of data items or more. For instance, data streams may experience transient outages, causing bursts during recovery (Brandt et al., 2015).
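To make the time-based eviction semantics concrete, here is a minimal C++ sketch (our own illustration; the container choice and names are not from the paper's implementation) of how a single insertion into a 60-second window can trigger a bulk of evictions:

```cpp
#include <cstddef>
#include <deque>

// Time-based window semantics from the example above: inserting an item
// with timestamp t forces eviction of everything with timestamp <= t - 60.
// With bursty data, many items can expire on a single insertion.
constexpr double WINDOW = 60.0;

std::deque<double> window;  // timestamps only, oldest at the front

size_t insertAndCountEvictions(double t) {
    size_t evicted = 0;
    while (!window.empty() && window.front() <= t - WINDOW) {
        window.pop_front();
        ++evicted;
    }
    window.push_back(t);
    return evicted;  // 5 for the example stream when t = 61
}
```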
Besides time-based windows, applications may use other window types such as sessions (Kanat et al., 2016) or data-driven adaptive windows (Brandt et al., 2015). Streaming systems may internally use implementation techniques that introduce disorder (Brandt et al., 2015). When a streaming system receives multiple streams from different data sources, their logical times may drift against each other (Kang et al., 2016). Real-world events, such as breaking news, severe weather, rush-hour traffic, sales, accidents, openings of stores or stock markets, etc., can cause bursty streams (Kang et al., 2016). All these scenarios necessitate sliding-window aggregation with efficient bulk evictions and insertions, without harming tuple-at-a-time performance.

The literature has few solutions to this problem, and none match our solution in completeness or algorithmic complexity. List-based approaches such as Two-Stacks handle neither out-of-order nor bulk operations (Kang et al., 2016). The AMTA algorithm only handles in-order windows and only offers bulk eviction but not bulk insertion (Kang et al., 2016). CPiX has a linear factor in its algorithmic complexity for bulk eviction and is limited to commutative aggregation over time-based windows (Brandt et al., 2015). The FiBA algorithm is optimal for out-of-order sliding-window aggregation with single evictions and insertions but does not directly support bulk operations (Kang et al., 2016). While the literature on balanced tree algorithms provides partial solutions to bulk evictions and insertions (Kang et al., 2016; Kang et al., 2016; Kang et al., 2016), each paper solves a different part of the problem using a different data structure, and none offer incremental aggregation. Section 2 discusses related work in more detail.

Our new solution builds on FiBA (Kang et al., 2016), a B-tree augmented with fingers and with location-sensitive partial aggregates. The fingers help efficiently find the tree nodes to be manipulated when the window slides. The location-sensitive partial aggregates avoid propagating local updates to the root in most cases. Intuitively, our bulk eviction and insertion have three steps:

* a finger-based _search_ to find the affected nodes of the tree;
* a single shared _pass up_ the tree to insert or evict items in bulk while also repairing any imbalances this causes; and
* a single shared _pass down_ the affected spine(s) of the tree to repair location-sensitive partial aggregates stored there.

The trick for efficient bulk evict is not to look at each evicted entry individually but rather to cut the tree only along the boundary between the entries that go and those that stay. The trick for efficient bulk insert is to share the work caused by multiple inserted entries as low down in the tree as possible, i.e., to process paths from insertion sites together as soon as they converge. Let \(n\) be the window size (the number of data items currently in the window); \(m\), the bulk size (the number of data items being evicted or inserted); and \(d\), the out-of-order insertion distance (the number of data items in the part of the window that overlaps with the bulk). Our algorithm performs bulk eviction in amortized \(O(\log m)\) time and bulk insertion in amortized \(O(m\log\frac{d}{m})\) time. Neither of these two time bounds depends on the window size \(n\), and bulk eviction is sublinear in the bulk size \(m\).
For \(m=1\), the amortized time matches the proven lower bounds of \(O(1)\) for eviction and \(O(\log d)\) for out-of-order insertion, which means \(O(1)\) for in-order insertion at the smallest \(d\). The worst-case time complexity is \(O(\log n)\) for bulk evict and \(O(m\log(\frac{m+n}{m})+\log d)\) for bulk insert, because the pass up the tree can reach the root in the worst case. This worst case is guaranteed to be so rare that in the long run, the amortized complexity prevails. The data structure uses \(O(n)\) space, with the constant depending on the B-tree's arity. We implemented our algorithm in C++ and made it available at [https://github.com/IBM/sliding-window-aggregators](https://github.com/IBM/sliding-window-aggregators), along with our implementations of the other sliding-window aggregation algorithms we compare with experimentally. Commit f3beed2 was used in the experiments of this paper. Our experimental results demonstrate that our bulk evict yields the best latency compared to several state-of-the-art baselines, and our bulk insert yields the best latency for the out-of-order case (which most algorithms do not support at all in the first place). Overall, this paper presents the first algorithm for efficient bulk insertions in sliding windows, and the algorithm with the best time complexity so far for bulk evictions from sliding windows.

## 2. Related Work

Before our work, the most efficient algorithm for in-order sliding-window aggregation with bulk eviction was AMTA (Sandes et al., 2017). AMTA supports single inserts or evicts in amortized \(O(1)\) time. Given a window of size \(n\), it supports bulk evict in amortized \(O(\log n)\) time. However, AMTA does not directly support bulk insertion, so inserting \(m\) items takes amortized \(O(m)\) time. Our algorithm matches AMTA's amortized complexity for single inserts and evicts, and improves bulk evict to amortized \(O(\log m)\) time. Unlike our algorithm, AMTA does not support out-of-order insert. CPiX supports both bulk eviction and bulk insertion, including out-of-order insertion (Bordes et al., 2017). The paper states the time complexity of bulk insert or evict as \((p_{1}+1)\log(\lceil\frac{n}{k}\rceil)+3p_{2}\), where the number \(k\) of checkpoints is recommended to be \(\sqrt{n}\); \(p_{1}\) is the number of affected partitions in the oldest checkpoint; and \(p_{2}\) is the number of affected partitions in the remaining checkpoints. Given \(O(\log(\lceil\frac{n}{\sqrt{n}}\rceil))=O(\log n)\), and assuming \(p_{1}\) and \(p_{2}\) are proportional to the batch size \(m\), this corresponds to an amortized time of \(O(m\log n)\). This is worse than AMTA's \(O(\log n)\) and our \(O(\log m)\) for bulk evict. Moreover, unlike our algorithm, CPiX only works for time-based windows and commutative aggregation. The most efficient prior algorithm for out-of-order sliding-window aggregation is FiBA (Zhou et al., 2017). It supports a single insert or evict in amortized \(O(\log d)\) time, where \(d\) is the distance of the operation from either end of the window. FiBA can emulate bulk insert or evict using loops of \(m\) single inserts or evicts for a time complexity of \(O(m\log d)\). Our new algorithm improves upon this baseline. Some streaming systems limit the out-of-order distance with a watermark (Bordes et al., 2017); instead, our algorithm implements the more general case that requires no such a priori bounds. Our algorithm is inspired by the literature on bulk operations for balanced trees.
Brown and Tarjan show how to merge two height-balanced trees of sizes \(m\) and \(n\), where \(m<n\), in \(O(m\log\frac{n}{m})\) steps (Brown and Tarjan, 2010). The keys of the two trees can be interspersed, so their algorithm corresponds to our out-of-order bulk insertion scenario. Unlike our algorithm, theirs supports neither aggregation nor bulk eviction. Furthermore, our algorithm improves the complexity to \(O(m\log\frac{d}{m})\), where \(d\) is the overlap between the two trees. Kaplan and Tarjan show how to catenate two height-balanced trees in worst-case \(O(1)\) time (Kapol et al., 2017). But they do not allow keys to be interspersed, so their approach is restricted to the in-order case. Also, unlike our algorithm, their approach does not perform aggregation and does not support bulk eviction. Hinze and Paterson show how to both split and merge balanced trees in amortized \(O(\log d)\) time (Kapol et al., 2017). However, their merge does not allow keys to be interspersed, so it corresponds to in-order bulk insertion. Also, their approach does not perform sliding-window aggregation.

The sliding-window aggregation literature also pursues other objectives besides bulk eviction and out-of-order bulk insertion. Scotty optimizes for coarse-grained sliding, performing pre-aggregation to take advantage of co-eviction (Sandes et al., 2017). Their work shows how to handle all combinations of order, window kinds, aggregation operations, etc., and is complementary to this paper. ChronicleDB uses a temporal aggregate B+-tree and optimizes writes to persistent storage while handling moderate amounts of out-of-order data by leaving some free space in each block (Kapol et al., 2017). HammerSlide uses SIMD instructions to speed up sliding-window aggregation (Sandes et al., 2017); SlideSide generalizes it to the multi-query case (Sandes et al., 2017); and LightSaber further generalizes it for parallelism (Sandes et al., 2017). DABA Lite performs both single in-order insert and single evict in worst-case \(O(1)\) time but does not support out-of-order insert (Sandes et al., 2017). FlatFIT focuses on window sharing for the in-order case, with amortized \(O(1)\) time for single insert and single evict, but does not support out-of-order insert (Sandes et al., 2017). None of the above directly support bulk operations; they can do \(m\) inserts or evicts using simple loops, with an algorithmic complexity of \(m\) times that of their single-operation complexity.

## 3. Background

This section formalizes the problem solved in this paper and reviews known concepts such as monoids and finger B-trees upon which our work builds.

### 3.1. Problem Statement

_Monoids._ A monoid is a triple \((S,\otimes,1)\) with a set \(S\), an associative binary combine operator \(\otimes\), and a neutral element \(1\). Several common aggregation operators are monoids, including count, sum, min, and max. Furthermore, several more common aggregation operators can be lifted into monoids, including arithmetic or geometric mean, standard deviation, argMax, maxCount, first, last, etc. Even several sophisticated statistical and machine learning operators can be lifted into monoids, including mergeable sketches (Boges et al., 2016) such as Bloom filters or algebraic classifiers.

### 3.2. Finger B-Trees with Location-Sensitive Aggregates

FiBA stores the window in a finger B-tree whose nodes carry location-sensitive partial aggregates (Figure 1). Consider a node \(y\) with values \(v_{0},\ldots,v_{a-2}\), children \(c_{0},\ldots,c_{a-1}\), and parent \(x\). Each node stores one of four kinds of partial aggregate:

* The _up aggregate_ includes all of \(y\)'s own values and children, up to and including its right-most child:
\[\Pi_{\uparrow}(y)=\Pi_{\uparrow}(c_{0})\otimes v_{0}\otimes\Pi_{\uparrow}(c_{1})\otimes\ldots\otimes v_{a-2}\otimes\Pi_{\uparrow}(c_{a-1})\]
For a more concrete example, assume \(h=4\), \(i=5\), \(j=1\), \(k=3\), \(l=5\), \(m=4\), \(n=2\) and the max monoid; then agg = 5.
* The _inner aggregate_ includes all of \(y\)'s own values and inner children but excludes the left-most and right-most child:
\[\Pi_{\circ}(y)=v_{0}\otimes\Pi_{\uparrow}(c_{1})\otimes\ldots\otimes\Pi_{\uparrow}(c_{a-2})\otimes v_{a-2}\]
For example, the root in Figure 1 has agg = \(gh..o\), which combines its left value \(g\) with the aggregate of only the middle child \(hi..n\) and the right value \(o\). This means that the root stores an aggregate of the entire tree except for the left and right spines and their descendants.
* The _left aggregate_ excludes the leftmost child but includes all of \(y\)'s own values and the rightmost child, and then combines that with the parent \(x\) (unless \(x\) is the root):
\[\Pi_{\swarrow}(y)=\Pi_{\circ}(y)\otimes\Pi_{\uparrow}(c_{a-1})\otimes\begin{cases}1&\text{if }x\text{ is the root}\\ \Pi_{\swarrow}(x)&\text{otherwise}\end{cases}\]
For example, the left-most leaf of Figure 1 has aggregate agg = \(ab..f\), which combines its own values \(ab\) with the aggregate \(cdef\) of its parent, resulting in an aggregate of the entire left spine and all its descendants.
* The _right aggregate_ combines the aggregate of the parent \(x\) (unless \(x\) is the root) with all of \(y\)'s own values and most children but excludes the rightmost child:
\[\Pi_{\searrow}(y)=\begin{cases}1&\text{if }x\text{ is the root}\\ \Pi_{\searrow}(x)&\text{otherwise}\end{cases}\otimes\Pi_{\uparrow}(c_{0})\otimes\Pi_{\circ}(y)\]
For example, the right-most leaf of Figure 1 has aggregate agg = \(qr..v\), which combines the aggregate \(qrst\) of its parent with its own values \(uv\), resulting in an aggregate of the entire right spine and all its descendants.

_Representation._ The tree is represented by three pointers: the _left finger_ to the left-most leaf; the _root_; and the _right finger_ to the right-most leaf. Each node stores its location-sensitive partial aggregate agg, its times, and its values, and in addition, has pointers to its parent and children, if any. Finally, each node stores two Boolean flags to indicate whether it is on the left or right spine, respectively.

_Invariants._ The following properties about height, order, arity, and aggregates hold before each eviction or insertion and must be established again by the end of each eviction or insertion. The _height invariant_ requires all leaves to have the exact same distance from the root. The _order invariant_ says that the times \(t_{0},\ldots,t_{a-2}\) within each node are ordered, i.e., \(\forall i:t_{i}<t_{i+1}\); and furthermore, if a node has children \(c_{0},\ldots,c_{a-1}\), then for all \(i\), \(t_{i}\) is greater than all times in \(c_{i}\) or its descendants and smaller than all times in \(c_{i+1}\) or its descendants. The _arity invariants_ constrain the sizes of nodes to keep the tree balanced. Each node has an arity \(a\), and different nodes can have different arities. For non-leaf nodes, \(a\) is the number of children. All nodes have \(a-1\) entries, i.e., parallel arrays of \(a-1\) timestamps and \(a-1\) values. There is a data structure hyperparameter MIN_ARITY, which is an integer \(>1\), and MAX_ARITY = 2 \(\cdot\) MIN_ARITY. They constrain the arity of all non-root nodes to MIN_ARITY \(\leq a\leq\) MAX_ARITY. And for the root, \(2\leq a\leq\) MAX_ARITY. For example, MIN_ARITY is 2 in Figure 1, so all non-leaf nodes have \(2\leq a\leq 4\) children, and all nodes have \(1\leq a-1\leq 3\) timestamps and values.

The _aggregates invariants_ govern which nodes store which kind of location-sensitive aggregate, color-coded in Figure 1. All non-spine, non-root nodes store the up aggregate. Nodes that are on the left spine but not the root store the left aggregate. Nodes that are on the right spine but not the root store the right aggregate. And the root stores the inner aggregate. This means that the aggregate of the entire tree is simply the combination of the aggregates of the left finger, the root, and the right finger. In other words, we can implement query() in constant time by returning
\[\Pi_{\swarrow}(\text{leftFinger})\otimes\Pi_{\circ}(\text{root})\otimes\Pi_{\searrow}(\text{rightFinger})\]
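For orientation, the representation just described can be summarized in a small C++ sketch; the type and field names here are illustrative assumptions, not the paper's actual code:

```cpp
#include <cstdint>
#include <vector>

// Sketch of a FiBA node as described above, with MIN_ARITY = 2 as in
// Figure 1 and MAX_ARITY = 2 * MIN_ARITY.
constexpr int MIN_ARITY = 2;
constexpr int MAX_ARITY = 2 * MIN_ARITY;

template <typename Value>
struct Node {
    std::vector<int64_t> times;     // a-1 timestamps, strictly increasing
    std::vector<Value>   values;    // a-1 values, parallel to times
    std::vector<Node*>   children;  // a children; empty for leaves
    Value agg;                      // location-sensitive partial aggregate
    Node* parent = nullptr;
    bool leftSpine = false, rightSpine = false;  // which aggregate kind applies
};
```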
_Imaginary Coins._ To help prove the amortized time complexity, we pretend that each node stores imaginary coins. Figure 1 shows these as small copper circles. Nodes that are close to underflowing store one coin to pay for the rebalancing work in case of underflow. Nodes that are close to overflowing store two coins to pay for the rebalancing work in case of overflow. Then, the proofs for amortized time complexity show that for any possible sequence of operations, the algorithm always stores up enough coins in advance at each node before it has to perform the eventual actual rebalancing work.

## 4. Bulk Eviction

As defined in Section 3.1, \(\text{bulkEvict}(t)\) removes all entries with timestamps \(\leq t\) from the window. So our algorithm must discard nodes to the left of \(t\), keep nodes to the right of \(t\), and for nodes that straddle the boundary, locally evict all entries up to \(t\) and repair any violated invariants. Our bulk eviction algorithm has three steps:

1. A finger-based _eviction boundary search_ that returns a list, called boundary, of triples (node, ancestor, neighbor).
2. A _pass up_ the boundary, and beyond as needed to repair invariants, that does the actual evictions and most repairs.
3. A _pass down_ the left spine, and if needed also the right spine, that repairs any leftover invariant violations.

**Bulk eviction Step 1: Eviction boundary search.** This step finds the boundary to enable any subsequent rebalancing operations during Step 2 to be constant-time at each level. For rebalancing to be efficient, it cannot afford to trigger any searches of its own, and must instead rely on all required searching to have already been done upfront. Whereas textbook algorithms for B-trees with single evictions (such as (Brandes et al., 2017)) can repair arity invariants by rebalancing with a node's left or right sibling, bulk eviction leaves no left sibling. That means the only eligible neighbor to help in rebalancing is the right one, and that neighbor may have a different parent and thus not be a sibling. Furthermore, rebalancing requires the least common ancestor of the node and its neighbor, and that might not be their parent. Hence, the job of the finger-based search is to find a list of (node, ancestor, neighbor) triples, one for each relevant level of the tree. The search first starts at the left or right finger, whichever is closest to \(t\), and walks up the corresponding spine to find the top of the boundary, i.e., the lowest spine node whose descendants straddle \(t\). Then, the search traverses down to the actual eviction point while populating the boundary data structure. This downward traversal always keeps at most two separate chains for the node and its neighbor, and can thus happen in a single loop over descending tree levels. If the search finds an exact match for \(t\) in the tree, it stops early; otherwise, it continues to a leaf and stops there.
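Assuming a representation along the following lines (the struct and its names are ours, for illustration only), Step 1 produces one such triple per affected level:

```cpp
#include <vector>

struct Node;  // finger B-tree node type, elided here

// One triple per relevant tree level, as produced by Step 1. The neighbor
// is the only candidate for rebalancing after a bulk evict, and it may not
// be a sibling, hence the explicit least common ancestor.
struct BoundaryTriple {
    Node* node;      // node straddling the eviction boundary at this level
    Node* ancestor;  // least common ancestor of node and neighbor
    Node* neighbor;  // node's right neighbor (possibly not a sibling)
};

using Boundary = std::vector<BoundaryTriple>;  // ordered bottom-to-top
```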
**Bulk eviction Step 2: Pass up.** This step of the algorithm does most of the work: it performs the actual evictions, and along the way, it also repairs most of the invariants that those evictions may have violated. Recall from Section 3.2 that there are invariants about height, order, arity, and location-sensitive partial aggregates. The pass up never violates invariants about height or order. It immediately restores arity invariants, using some novel rebalancing techniques described below. Regarding aggregate invariants, the pass up only repairs aggregates that follow a strictly ascending direction (up aggregates \(\Pi_{\uparrow}\) and inner aggregates \(\Pi_{\circ}\)). The pass up leaves aggregates that involve the parent (left aggregates \(\Pi_{\swarrow}\) and right aggregates \(\Pi_{\searrow}\)) to the later pass down to repair. The pass up has two phases: an eviction loop up the boundary returned by the search, followed by a repair loop further up beyond the boundary as long as there is more to repair. At each level, the eviction loop performs the local eviction, repairs arity underflow, and repairs local up aggregates or inner aggregates. At each level that still needs such repair, the repair loop repairs arity underflow and repairs local up aggregates or inner aggregates. Aggregate repair happens in constant time per level by simply re-computing the aggregates of surviving affected nodes after eviction and rebalancing are done. Arity repair, also known as rebalancing, either moves entries from the neighbor to the node or merges the node into the neighbor, depending on their respective arities. Let nodeDeficit be MIN_ARITY \(-\) node.arity and let neighborSurplus be neighbor.arity \(-\) MIN_ARITY. If nodeDeficit \(\leq\) neighborSurplus, rebalancing does a move; otherwise, it does a merge. Figure 2 illustrates the _move_ operation, representing each pair \(\left[\begin{smallmatrix}t_{x}\\ v_{x}\end{smallmatrix}\right]\) of timestamp and value as an entry \(e_{x}\). In this figure, \(k\) corresponds to nodeDeficit, i.e., the number of entries and children to move to node to repair its underflow by bringing its arity back to MIN_ARITY. In contrast to the textbook move operation (Bartos et al., 2016), \(k\) may exceed 1 and neighbor may not be a sibling of node. The only entry of the ordered window that is between node and neighbor is \(e_{a}\) in their least common ancestor. So the move rotates \(e_{a}\) into node, along with \(e_{0},\ldots,e_{k-2}\) and all associated children, and rotates \(e_{k-1}\) to the ancestor. In the end, node has arity MIN_ARITY and neighbor has arity \(\geq\) MIN_ARITY, because it started with sufficient surplus. Of course, neighbor still has arity \(\leq\) MAX_ARITY, because it started out that way and did not grow any bigger. Figure 18 shows pseudocode and a concrete example for _move_. Figure 3 illustrates _merge_, which adds what is left of node to neighbor and then eliminates node. Unlike in the textbook B-tree setup, node and neighbor may not be direct siblings. Since any other vertices on the path from node to ancestor are entirely \(<t\), those vertices will also be eliminated. On the other hand, \(e_{a}\) has a timestamp \(>t\), so it remains in the tree, and we rotate it into neighbor. Let oldNodeArity and oldNeighborArity refer to the arities of the node and its neighbor before the merge.
Then, after the merge, we have
\[\begin{array}{l}\texttt{neighbor.arity}\\ =\texttt{oldNodeArity}+\texttt{oldNeighborArity}\\ =(\texttt{MIN\_ARITY}-\texttt{nodeDeficit})+(\texttt{MIN\_ARITY}+\texttt{neighborSurplus})\\ =2\cdot\texttt{MIN\_ARITY}+(\texttt{neighborSurplus}-\texttt{nodeDeficit})\end{array}\]
This means that there is no overflow, because merge only happens when nodeDeficit \(>\) neighborSurplus, and there is no underflow, because nodeDeficit \(\leq\) MIN_ARITY and neighborSurplus \(\geq 0\). See code and example in Figure 19.

For bulkEvict(\(t\)) to be fully general, it must handle the case where \(t\) is all the way on the right spine. This implies that the root itself is to the left of \(t\) and must be eliminated. Eliminating the root shrinks the tree from the top, thus preserving the height invariant, and requires giving the tree a new root lower down. There are two sub-cases for shrinking the tree given a node on the right spine. If, after the local eviction, the node still has arity \(>1\), the algorithm makes it the root (Figure 4); otherwise, the node has arity \(=1\) and the algorithm makes its single child the root (Figure 5). Figure 20 shows pseudocode and an example.

Figure 2. Move batch.

Figure 3. Merge with neighbor (non-sibling).

**Bulk eviction Step 3: Pass down.** The last step of the algorithm repairs left aggregates and spine flags on the left spine. In case the eviction touched the right spine, it also repairs right aggregates and spine flags on the right spine. Recall that the left aggregate and right aggregate of a node are computed using the aggregate result from its parent. Hence, the pass down loops over tree levels and performs a local recompute to propagate these changes.

Theorem 1.: _The algorithm for \(\mathsf{bulkEvict}(t)\) takes \(O(\log m)\) amortized time and \(O(\log n)\) worst-case time._

Proof.: Consider the steps of the algorithm separately. Step 1, the finger-based search, takes time \(O(\log m)\) worst-case, since it takes a single traversal up from a finger to the lowest ancestor containing \(t\) followed by a single traversal down at most to a leaf. Step 2, the pass up, comprises an eviction loop followed by a repair loop. The eviction loop takes time \(O(\log m)\) worst-case, since it traverses the boundary list returned by the search. The repair loop might continue to repair underflow past the top of the boundary. In the worst case, it might reach the root, bringing the total time complexity of the evict loop plus repair loop to \(O(\log n)\) worst-case. However, since the repair loop starts above the boundary, at its start, it can at most have to deal with an underflow of a single entry. Therefore, it meets the conditions of Lemma 9 from the FiBA paper (Friedman et al., 2017), which uses virtual coins to show that the amortized cost for the repair loop is \(O(1)\). This brings the total amortized time of the pass up to \(O(\log m+1)=O(\log m)\). Finally, Step 3, the pass down, traverses the same number of levels as the pass up.

## 5. Bulk Insertion

As defined in Section 3.1, \(\mathsf{bulkInsert}(B^{\mathsf{in}})\) inserts one or more entries into the window. The bulk of entries is modeled as an iterator of (timestamp, value) pairs, which are assumed to be timestamp-ordered. Our \(\mathsf{bulkInsert}\) algorithm processes the bulk in three steps:

1. A finger-based _insertion sites search_ that, without making any modifications, locates all the sites in the tree where new entries need to be inserted. 2.
A _pass up: interleave&split loop_ that, starting at the leaves, interleaves the new entries into their respective nodes, splitting the node and promoting keys as necessary to satisfy the arity invariants. This happens from the leaves up until no level requires further processing. 3. A _pass down_ the right spine, and if needed also the left spine, that repairs any leftover aggregation invariant violations.

The remainder of this section delves deeper into the details of these steps and their cost analysis. Later, Section 6 discusses their implementation and optimization maneuvers.

Figure 4. Make node root.

Figure 5. Make child root.

**Bulk insertion Step 1: Insertion sites search.** To locate the insertion sites, the algorithm conducts the search in timestamp order, beginning with the earliest timestamp in the bulk using finger search. Each subsequent search never has to go higher than the least common ancestor between the previous node and its insertion site. This step associates each (timestamp, value) pair from the input with the corresponding node into which it will be inserted. Like in a standard B-tree structure, each new timestamp (key) that is not yet in the tree will be inserted at a leaf location. Such a key can cause cascading changes to the tree structure and, in the context of FiBA, can additionally trigger a chain of recomputation of aggregation values starting from the insertion site. On the other hand, a timestamp that is already in the tree is destined for the node where that timestamp is present, where the aggregation monoid combines its value with the existing value. This results in no structural changes, but in the context of FiBA, it triggers a chain of recomputation of aggregation values starting from that node. We see both cases as events that require processing: an _insertion event_ adds a real entry to the target node and recomputes the aggregation value, whereas a _recomputation event_ merely indicates the node where recomputation must take place.

\(\triangleright\)_Treelets._ Concretely, the implementation represents each event as a _treelet_ tuple (target, timestamp, value, childNode, kind). This indicates that this particular (timestamp, value) pair with a child childNode (possibly NULL) is to be inserted into the target node unless the kind is a recomputation event, in which case it simply triggers a recomputation of aggregate values on the target node. Treelets form the backbone of the \(\mathsf{bulkInsert}\) logic, with Step 1 (the insertion sites search) creating the initial timestamp-ordered sequence of treelets targeting all the relevant insertion sites.

**Bulk insertion Step 2: Pass up: interleave&split loop.** As the next step, the algorithm proceeds level by level, working its way from the leaf level towards the root until no more changes happen. At any point, the algorithm aims to maintain only two levels of treelets: the current level and the next level. In this view, as illustrated in Figure 6, each level takes as input a sequence of treelets and produces a sequence of treelets for the next level. Since the treelets in the input are timestamp-ordered, the entries destined for the same node appear consecutively in the sequence and are easily identified. Conceptually, each level is processed as follows:

1. For each target \(t\) in the input sequence of treelets: 1. Gather all the treelets that target \(t\) into TL. 2. Interleave the contents of \(t\) with TL.
Since both of these are ordered, the interleave routine is the merge step of the well-known merge-sort algorithm (sketched in code after the step descriptions below). Interleaving takes time linear in the total length of its input sequences to produce an ordered output sequence, without requiring a separate sort step. 3. If \(t\) has arity more than MAX_ARITY, apply bulkSplit to split it into smaller nodes.

When multiple entries are added to the same node, the node can temporarily overflow to arity \(p>\texttt{MAX\_ARITY}=2\mu\), often \(p\gg 2\mu\). The bulkSplit routine then splits it into invariant-respecting nodes, consisting of one or more arity-\((\mu+1)\) nodes and one last node with arity between \(\mu\) and \(2\mu\). The following claim, which is intuitive and whose proof appears in the appendix, shows that it is possible to split such a node into legitimate FiBA nodes in this way:

Claim 1.: _Let \(p>\texttt{MAX\_ARITY}=2\mu\) be an integral temporary arity. The number \(p\) can be written as_
\[p=b_{0}+b_{1}+\dots+b_{t-1}+b_{t},\]
_where \(b_{0}=b_{1}=\dots=b_{t-1}=\mu+1\) and \(\mu\leq b_{t}\leq 2\mu\)._

For example, if \(p=2\mu+3\) with \(\mu=4\), we can write \(p\) as \(p=(\mu+1)+(\mu+2)\). That is, this split yields one arity-\((\mu+1)\) node, one entry to send up to the next level, and one arity-\((\mu+2)\) node. If \(p=7\mu+2\) with \(\mu=2\), we can write \(p\) as \(p=(\mu+1)+(\mu+1)+(\mu+1)+(\mu+1)+2\mu\). That is, this split yields four arity-\((\mu+1)\) nodes and one arity-\(2\mu\) node, interspersed with 4 entries to send up to the next level.

\(\triangleright\)_Promotion to the next level._ Splitting an overflowed node also generates treelets, representing entries promoted for insertion into nodes in the next level. Importantly, by processing current-level treelets in timestamp order, new treelets for the next level generated in this manner are already sorted in timestamp order. This avoids the costly step of sorting them, or the need for a priority queue. Additionally, the parent of each existing node is the target insertion site of the corresponding promoted entry.

The discussion so far left out recomputation events. There are two ways a recomputation event is created: (a) inserting an entry with an existing timestamp and (b) incorporating entries into a node without causing it to overflow. Case (a) happens in Step 1 (insertion sites search) but can target nodes anywhere in the tree, not just the leaves. Case (b) happens throughout Step 2 (making a pass up). Because of how Step 1 is carried out, and to sidestep the need to store treelets for future levels and to interleave in treelets for recomputation events when their levels are reached, we start all the recomputation events/treelets at the leaf level. These treelets ride along with the other treelets but do not have a real effect until their levels are reached. This turns out to have the same asymptotic complexity as if we were to start them at their true levels, but without the additional code complexity.

**Bulk insertion Step 3: Pass down.** Like in the bulkEvict algorithm, the final step repairs right aggregates on the right spine and potentially left aggregates on the left spine if it also touches the left spine. For both spines, the aggregate of a node is computed using the value from its parent, so this computation is a pass on the spine towards the finger (i.e., the rightmost and leftmost leaf).
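To make the interleave step of the pass up concrete, here is a minimal C++ sketch of the merge-style interleaving of a node's existing entries with the incoming treelet entries. The types are illustrative simplifications (the actual implementation interleaves lazily via iterators, as Section 6 explains), and the monoid combine is shown as plain addition:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Illustrative entry: a (timestamp, value) pair, kept in timestamp order.
struct Entry {
    int64_t time;
    int64_t value;  // stands in for any monoid value
};

// Merge-style interleave of two timestamp-ordered sequences, as in the
// merge step of merge sort: linear in the total input length, no sorting.
// Entries with equal timestamps are combined by the monoid (here: +).
std::vector<Entry> interleave(const std::vector<Entry>& node,
                              const std::vector<Entry>& treelets) {
    std::vector<Entry> out;
    out.reserve(node.size() + treelets.size());
    size_t i = 0, j = 0;
    while (i < node.size() && j < treelets.size()) {
        if (node[i].time < treelets[j].time) {
            out.push_back(node[i++]);
        } else if (treelets[j].time < node[i].time) {
            out.push_back(treelets[j++]);
        } else {  // equal timestamps: combine values, keep one entry
            out.push_back({node[i].time, node[i].value + treelets[j].value});
            ++i; ++j;
        }
    }
    while (i < node.size()) out.push_back(node[i++]);
    while (j < treelets.size()) out.push_back(treelets[j++]);
    return out;
}
```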
**Bulk insertion: Time complexity analysis.** The time complexity of bulkInsert can be broken down into (i) the search cost (Step 1), (ii) insertion and tree restructuring (Step 2), and (iii) aggregation repairs (during Steps 2 and 3). To analyze this, we begin by proving a lemma that quantifies the footprint, i.e., the worst-case number of nodes that can be affected, when there are \(m\) insertion sites. For a bulkInsert call, the _top_ node, denoted by \(\tau\), is the least common ancestor of all insertion sites and the rightmost finger. By definition, this is the node closest to the leaf level where the paths from all these sites towards the root converge.

Lemma 2.: _In a FiBA structure with MAX_ARITY \(=2\mu\), if there are \(m\) insertion sites, the paths from all the insertion sites, as well as from the node at the right finger, to the top \(\tau\) contain at most \(O(m(1+\log_{2\mu}(\frac{N_{\tau}}{m})))\) unique nodes, where \(N_{\tau}\) is the total number of nodes in the subtree rooted at \(\tau\)._

Proof.: Consider the subtree rooted at the top node \(\tau\). For level \(\ell=0,1,\dots\) away from the top, the total number of nodes \(n_{\ell}\) at that level satisfies
\[\mu^{\ell}\leq n_{\ell}\leq(2\mu)^{\ell},\tag{1}\]
which holds because the fan-out degree for non-root2 nodes is between \(\mu\) and \(2\mu\) (inclusive). Now we will assume all the insertion sites are at the leaf level. This can be arranged by projecting every insertion site onto a leaf within its own subtree, and doing so can only increase the number of nodes contributing to the bound.

Footnote 2: If \(\tau\) is the root, the bound is slightly different since the root can have as few as two children, but the statement of the lemma remains the same.

By (1), the leaves must be at level \(L\leq\log_{\mu}N_{\tau}\), and the deepest level guaranteed to have no more than \(m\) nodes is \(\ell=\log_{2\mu}m\). This means the paths from the leaf insertion sites can travel without necessarily converging for \(L-\ell\) levels. During this stretch, the number of unique nodes is at most \(m(L-\ell)=O(m\log_{2\mu}(N_{\tau}/m))\). From level \(\ell\) to the top node, the paths must converge as constrained by the shape of the tree. In a B-tree with \(m\) leaves, the number of nodes at each level decreases geometrically towards the top. Hence, there are at most \(O(m)\) unique nodes from level \(\ell\) and above, for a grand total of \(O(m(1+\log_{2\mu}(\frac{N_{\tau}}{m})))\) unique nodes.

Figure 6. Interleave and split for one level of a tree.

Next, we address the tree restructuring cost:

Lemma 3.: _Let \(\mu\geq 2\). The tree restructuring cost of inserting \(m\) entries in a bulk is amortized \(O(m)\) and worst-case \(O(m\log(\frac{m+n}{m}))\), where \(n\) is the number of entries prior to the bulk insertion operation._

The worst-case bound is easily seen: the \(m\) insertions can only change the nodes from \(m\) leaves to the root, touching at most \(O(m\log(\frac{m+n}{m}))\) nodes (Lemma 2). For the amortized bound, the proof is analogous to Lemma 9 in the FiBA paper (Zhou et al., 2018), arguing that charging 2 coins per new entry is sufficient to maintain the tree. More details appear in the appendix.
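To get a sense of the scale of the Lemma 2 footprint bound, consider an illustrative instantiation (the concrete numbers here are ours, chosen for exposition): take \(\mu=2\) (so \(2\mu=4\)), a subtree of \(N_{\tau}=2^{20}\) nodes, and \(m=2^{10}\) insertion sites. Then
\[m\Big(1+\log_{2\mu}\frac{N_{\tau}}{m}\Big)=2^{10}\Big(1+\log_{4}2^{10}\Big)=1024\cdot(1+5)=6144,\]
whereas \(m\) non-converging root-to-leaf paths would touch up to \(m\log_{2\mu}N_{\tau}=1024\cdot 10=10240\) nodes; the savings grow as \(m\) approaches \(N_{\tau}\).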
Theorem 4.: _The algorithm for \(\mathsf{bulkInsert}\) runs in amortized \(O(\log d+m(1+\log(\frac{d}{m})))\) time and \(O(\log d+m\log(\frac{m+n}{m}))\) worst-case time, where \(m\) is the number of entries in the bulk and \(d\) is the out-of-order distance of the earliest entry in the bulk._

Proof.: The running time of \(\mathsf{bulkInsert}\) is made up of (i) the search cost (Step 1), (ii) insertion and tree restructuring (Step 2), and (iii) aggregation repairs (during Steps 2 and 3). The first search for the insertion site takes \(O(\log d)\), thanks to finger searching from the right finger. Each subsequent search only traverses the path from the previous entry to their least common ancestor and down to the next entry. The whole search cost is therefore covered by Lemma 2. After that, the actual insertion takes \(O(1)\) time per entry since interleaving takes time that is linear in its input. The cost to further restructure the tree is as described in Lemma 3. Finally, it is easy to see that the cost of aggregation recomputation/repairs is subsumed by the first two costs, because the aggregation of a node has to be recomputed only if it was part of the restructuring or sits on the search path (spine or on the way to the top node). Adding up the costs yields the stated bounds.

This means that asymptotically \(\mathsf{bulkInsert}\) is never more expensive than individually inserting entries. On the contrary, bulk insertion results in cost savings as insertion-site search and restructuring work can be shared.

## 6. Implementation

We implemented our algorithm in C++ because of its strong and predictable raw performance in terms of both time and space. Using C++ avoids latency spikes from runtime services, such as garbage collection or just-in-time compilation, common in managed languages such as Java or Python. Such extraneous latency spikes would obscure the latency effects of our algorithm. The results section contains apples-to-apples comparisons with other sliding-window aggregation algorithms from prior work that were also implemented in C++. We reuse code between our new algorithm and those earlier algorithms. In particular, we use C++ templates to specialize each algorithm for each given aggregation monoid, and share the same implementation of the aggregation monoids across all algorithms. The C++ compiler then inlines both the monoid's data structure and its operator code into the sliding-window aggregation data structure and algorithm code as appropriate.

**Deferred free list.** Our implementation has to avoid reclaiming memory eagerly. If bulk eviction reclaimed memory eagerly, the promised algorithmic complexity would be spoiled: given that the arity of the tree is controlled by a constant hyperparameter MIN_ARITY, eagerly evicting a bulk of \(m\) entries would require reclaiming the memory of \(O(m)\) nodes. Those \(O(m)\) calls to delete would be worse than the amortized complexity of \(O(\log m)\) for bulk evict. Therefore, we avoid eager memory reclamation as follows. Recall that the eviction loop iterates over \(O(\log m)\) nodes on the boundary and, for each node, performs local evictions, which evict the children of that node. Instead of recursively deleting all the descendants eagerly, the local evict places their children on a deferred free list. Since at most \(O(\log m)\) nodes can be removed, the cost of adding only the children to the free list during bulk eviction is worst-case \(O(\log m)\). Later, when an insertion requires a new allocation, it first checks the free list. If that is non-empty, it pops one node, pushes that node's children, and reuses its memory for the new node. Thus, each insert only spends worst-case \(O(1)\) time on memory reuse.
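A simplified C++ sketch of this deferred reclamation scheme (the names and the free-standing list are our illustration, not the repository's exact code):

```cpp
#include <vector>

struct Node {
    std::vector<Node*> children;
    // times, values, and aggregate elided for brevity
};

// Deferred reclamation: bulk evict pushes only the O(log m) boundary
// children here; deeper descendants are discovered lazily, later on.
std::vector<Node*> freeList;

// O(1) allocation with reuse: pop a free node, expose its children to the
// free list, and hand back its memory. Full subtrees are thus reclaimed
// incrementally, one node per subsequent insertion.
Node* allocateNode() {
    if (freeList.empty()) return new Node();
    Node* n = freeList.back();
    freeList.pop_back();
    for (Node* c : n->children) freeList.push_back(c);
    n->children.clear();
    return n;
}
```

Because the tree's arity is bounded by the constant MAX_ARITY, pushing a reused node's children costs constant time per allocation.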
**Memory management during \(\mathsf{bulkInsert}\).** Conceptually, we allow a node to grow to an arbitrary size before splitting it into invariant-respecting smaller nodes. For performance, the implementation does this differently. The main goal is to minimize memory allocation and deallocation for intermediate storage. To combine keys from an existing node with keys to be inserted into that node, it employs an ordered interleaving routine from merge sort. Here the interleaving is lazy: instead of generating the combined sequence of keys upfront, our implementation offers an iterator for the interleaved sequence that computes the next element on the fly, reading directly from two sources: the existing node and the sequence of treelets for the current level. We also have an optimization where, if the node is not going to overflow after incorporating the new keys (a "small insertion"), then simple insertion is used, as there would be no memory allocation involved. Additional optimizations include (i) using alternating buffers for treelet processing and (ii) consolidating treelets. For treelet processing, the dataflow pattern is reading from the current level and writing to the next level. Each sequence is progressively smaller as the algorithm works its way up the tree. Hence, we allocate two vectors with enough capacity at the start and alternate between them as the algorithm proceeds. Furthermore, treelets that will be inserted into the same node are consolidated together. This reduces the struct size because the target node does not need to be repeated for each of these treelets.

**Miscellaneous.** As described, our algorithm already combines entries with the same timestamp at insert, thus reducing memory. Users can choose to coarsen the granularity of timestamps, thus causing more cases of equal timestamps and recovering basic batching. However, it would require additional work to take full advantage of batching, such as for energy efficiency (Han et al., 2018). Our implementation does not directly use SIMD instructions, but the C++ optimizing compiler sometimes uses them automatically. We did not implement partitioning, but it is straightforward: when the aggregate is partitioned by key, keep disjoint state, i.e., a separate tree for each key; that would enable fission (Han et al., 2018) for parallelization, either user-directed or automatic. Previous work describes an algorithm for range queries (Zhou et al., 2018), and that algorithm also works in the presence of bulk insertion and eviction. Future work could pursue a new algorithm for multi-range queries.

## 7. Results

This section explores how the theoretical algorithmic complexity from the previous sections plays out in practice. It explores how performance correlates with the number \(n\) of entries in the window, the number \(m\) of entries in the bulk insert or bulk evict, and the number \(d\) of entries between an insertion and the youngest end of the window. The experiments use multiple monoidal aggregation operators to cover a spectrum of computational cost: sum (fast), geomean (medium), and bloom (Dani et al., 2017) (slow).
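As a side note on how such operators plug in, the following minimal C++ sketch shows aggregation monoids as types that a templated implementation can compose over; the interface is an illustrative assumption, not the actual one from the paper's repository:

```cpp
#include <cmath>
#include <cstddef>

// Minimal monoid interface sketch: a neutral element and an associative
// combine. The windowing data structure is templated over such a type.
struct SumMonoid {
    using Value = double;
    static Value identity() { return 0.0; }
    static Value combine(Value a, Value b) { return a + b; }
};

// Geometric mean "lifted" into a monoid: keep (sum of logs, count) as the
// partial aggregate; the final result is only extracted at query time.
struct GeoMeanMonoid {
    struct Value { double logSum; size_t count; };
    static Value identity() { return {0.0, 0}; }
    static Value combine(Value a, Value b) {
        return {a.logSum + b.logSum, a.count + b.count};
    }
    static double lower(Value v) {  // final result extraction
        return v.count == 0 ? 1.0 : std::exp(v.logSum / v.count);
    }
};

// Generic fold over partial aggregates, as a windowing algorithm would do.
template <typename M>
typename M::Value combineAll(const typename M::Value* xs, size_t n) {
    typename M::Value acc = M::identity();
    for (size_t i = 0; i < n; ++i) acc = M::combine(acc, xs[i]);
    return acc;
}
```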
This section refers to different sliding-window aggregation algorithms as follows: The original non-bulk FiBA algorithm (Zhou et al., 2017) is nb_fiba4 and nb_fiba8, with MIN_ARITY of 4 or 8. Similarly, the new bulk FiBA algorithm introduced in this paper is b_fiba4 and b_fiba8. Both of these algorithms can handle out-of-order data. As baselines, several figures include three algorithms that only work for in-order data, i.e., when \(d=0\). The amortized monoid tree aggregator, amta, supports bulk evict but not bulk insert (Zhou et al., 2017). The twostacks_lite algorithm performs single insert or evict operations in amortized \(O(1)\) and worst-case \(O(n)\) time (Zhou et al., 2017). The daba_lite algorithm performs single insert or evict operations in worst-case \(O(1)\) time (Zhou et al., 2017). Since amta, twostacks_lite, and daba_lite require in-order data, they are absent from figures with results for out-of-order scenarios. We ran all experiments on a machine with dual Intel Xeon Silver 4310 CPUs at 2.1 GHz running Ubuntu 20.04.5 with a 5.4.0 kernel. We compiled all experiments with g++ 9.4.0 at optimization level -O3. To reduce timing noise and variance in memory allocation latencies, we use mimalloc (Kang et al., 2017) instead of the stock glibc allocator, and we pin all runs to core 0 and the corresponding NUMA group.

### Latency

In streaming applications, late results are often all but useless: with increasing latency, the value of a streaming computation drops sharply; for example, dangers can no longer be averted in time and opportunities are missed. Therefore, our algorithm is designed to support both the finest granularity of streaming (i.e., when \(m=1\)) and bursty data (i.e., when \(m\gg 1\)) with low latencies. Even in the latter case, our algorithm still retains the ability of tuple-at-a-time streaming, unlike systems with a micro-batch model. The methodology for the latency experiments is to measure how long each individual insert or evict takes, then visualize the distribution of insertion or eviction times for an entire run as a violin plot. The plots indicate the arithmetic mean as a red dot, the median as a thick blue line, and the 99.99\({}^{\text{th}}\) and 99.999\({}^{\text{th}}\) percentiles as thin blue lines. At 2.1 GHz, \(10^{4}\) processor cycles correspond to 4.8 microseconds.

**Figure 7** shows the latencies for bulk evict with in-order data. This experiment loops over evicting the oldest \(m=1,024\) entries in a single bulk, inserting 1,024 new entries one by one, and calling query, measuring only the time that the bulk evict takes. In theory, we expect bulk evict to take time \(O(\log m)\) for b_fiba4 and b_fiba8, and \(O(\log n)\) for amta. The remaining algorithms, lacking a native bulk evict, loop over single evictions, taking \(O(m)\) time. In practice, b_fiba4, b_fiba8, and amta have the best latencies for this experiment, confirming the theory.

**Figure 8** shows the latencies for bulk insert with in-order data. This experiment loops over evicting the oldest \(m=1,024\) entries in a single bulk, inserting \(m=1,024\) new entries in a single bulk, and calling query, measuring only the time that the bulk insert takes. In theory, since \(d=0\) in this in-order scenario, the complexity of bulk insert boils down to \(O(m)\) for all considered algorithms. In practice, daba_lite and twostacks_lite yield the best latencies for this scenario since they incur no extra overhead to be ready for an out-of-order case that does not occur here.
**Figure 9** shows the latencies for bulk insert with out-of-order data. This experiment differs from the previous one in that each bulk insert happens at a distance of \(d=1,024\) from the youngest end of the window. Since amta, twostacks_lite, and daba_lite only work for in-order data, they cannot participate in this experiment. In theory, we expect bulk insert to take \(O(m\log\frac{d}{m})\) for b_fiba and \(O(m\log d)\) for nb_fiba, which is worse. In practice, b_fiba has lower latency than nb_fiba, confirming the theory.

Figure 7. Latency, bulk evict only, window size \(n=\) 4,194,304, bulk size \(m=\) 1,024, in-order data \(d=0\).

Figure 8. Latency, bulk insert only, window size \(n=\) 4,194,304, bulk size \(m=\) 1,024, in-order data \(d=0\).

**Figure 10** shows an ablation experiment for memory-management-related implementation details. It compares results with mimalloc (mm) vs. the default memory allocator (libc), and with or without (nofl) the deferred free list from Section 6. Consistent with the theory, the deferred free list is indispensable: nofl performs much worse. On the other hand, mimalloc made little difference; we use it to control for events that are so rare that they did not manifest in this experiment.

### Throughput

Throughput is the number of items in a long but finite stream divided by the time it takes to process that stream. The throughput experiments thus do not time each insert or evict operation individually. While the time for each individual operation may differ, we already saw those distributions in the latency experiments, and here we focus on the gross results instead. The experiments include a memory fence before every insert to prevent the compiler from optimizing (e.g., using SIMD) across multiple stream data items, as that would be unrealistic in fine-grained streaming. All throughput charts show error bars based on repeating each run five times.

**Figure 11** shows the throughput for running with bulk evict for in-order data as a function of the bulk size \(m\). This experiment loops over a single call to bulkEvict for the oldest \(m\) entries, \(m\) calls to single insert, and a call to query. The throughput is computed from the time for the entire run, which includes all these operations. In theory, we expect the throughput of b_fiba and amta to improve with larger bulk sizes as they have native bulk eviction. In practice, while that is true, throughput also improves with larger \(m\) even for algorithms that do not natively support bulk evict. This may be because their internal loop for emulating bulk evict benefits from compiler optimization. For in-order data, twostacks_lite yields the best throughput (but not the best latency, see Section 7.1).

**Figure 12** shows the throughput for running with both bulk evict and bulk insert for in-order data as a function of the bulk size \(m\). In theory, we expect that since the data is in-order, bulk insert brings no additional advantage over looping over single inserts. In practice, all algorithms improve in throughput as \(m\) increases from \(2^{0}\) to around \(2^{12}\). This may be because fewer top-level insertions mean fewer memory fences, even for algorithms that emulate bulk insert with loops. Furthermore, throughput drops when \(m\) gets very large, because the implementation needs to allocate more temporary space to hold data items before they are inserted in bulk.

**Figure 13** shows the throughput as a function of the out-of-order degree \(d\) when running with both bulk evict and bulk insert.
The amta, twostacks_lite, and daba_lite algorithms do not work for out-of-order data and therefore cannot participate in this experiment. In theory, we expect that thanks to only doing the search once per bulk insert, higher \(d\) should not slow things down. In practice, we find that this is true and b_fiba outperforms nb_fiba.

**Figure 14** shows the throughput as a function of the out-of-order degree \(d\) when running with neither bulk evict nor bulk insert, i.e., with \(m=1\). As before, this experiment elides algorithms that require in-order data. In the absence of bulk operations, we expect b_fiba to have no advantage over nb_fiba. In practice, b_fiba does worse on sum and geomean but slightly better on bloom.

### Window Size One Billion

To understand how our algorithm behaves in more extreme scenarios, we ran b_fiba with geomean with a window size of 1 billion (\(n=10^{9}\)). In theory, FiBA is expected to grow to any window size and have good cache behavior, like a B-tree. In practice, this is the case at window size 1B: the benchmark ran uneventfully using 99% CPU on average, fully utilizing the one core that it has. Memory occupancy per window item (i.e., the maximum resident set size for the process divided by the window size) stays the same (\(64-70\) bytes), independent of window size. However, at \(n=1\)B, the benchmark has a larger overall memory footprint, putting more burden on the memory system. This directly manifests as more frequent cache misses/page faults and indirectly affects the throughput/latency profile. While no major page faults occurred, the number of minor page faults _per_ million tuples processed increased more than twentyfold (657 at 4M vs. 15,287 at 1B). Compared with the 4M-window experiments, the throughput numbers for \(n=1\)B mirror the same trends as the bulk size is varied. In absolute numbers, the throughput at \(n=1\)B is \(1\)-\(1.12\times\) lower than at \(n=4\)M. For latency, the theory promises \(\log m\) average (amortized) bulk-evict time, independent of the window size. With a larger memory footprint, however, we expect a slight increase in median latency. The \(\log n\) worst-case time should mean the rare spikes will be noticeably higher with larger window sizes. In practice, we observe that the median only goes up by \(\approx 7.5\%\). The 99.999\({}^{\text{th}}\) percentile markedly increases by around \(2\times\).

Figure 9. Latency, bulk insert only, window size \(n=\) 4,194,304, bulk size \(m=\) 1,024, out-of-order data \(d=\) 1,024.

Figure 10. Memory management ablation study (latency, bulk evict only, \(n=\) 4,194,304, \(m=\) 4,096, \(d=\) 0).

Figure 11. Throughput, bulk evict only, window size \(n=\) 4,194,304, varying bulk size \(m\), in-order data \(d=0\).

Figure 12. Throughput, bulk evict+insert, window size \(n=\) 4,194,304, varying bulk size \(m\), in-order data \(d=0\).

Figure 13. Throughput, bulk evict+insert, window size \(n=\) 4,194,304, bulk size \(m=\) 1,024, varying ooo distance \(d\).

### Real Data

The previous experiments carefully controlled the variables \(n\), \(m\), and \(d\) to explore tradeoffs and validate the theoretical results. It is also important to see how the algorithms perform on real data. Specifically, real applications tend to use time-based windows (causing both \(n\) and \(m\) to fluctuate), and real data tends to be out-of-order (with varying \(d\)). In other words, all three variables vary within a single run.

**Figure 15** shows this for the NYC Citi Bike dataset (Aug-Dec, 2018).
The figure shows a histogram of window sizes \(n\) (left) and a histogram of bulk sizes \(m\) (middle), assuming a time-based sliding window of 1 day. Depending on whether that 1 day currently contains more or fewer stream data items, \(n\) ranges broadly, as one would expect for real data whose event frequencies are uneven. Similarly, depending on the timestamp of the newest inserted window entry, a varying number \(m\) of the oldest entries can be evicted. Most single insertions cause only a single eviction, but there are a non-negligible number of bulk evicts of hundreds or thousands of entries. The figure also shows a histogram of out-of-order distances \(d\) (right). While the vast majority of insertions have a small out-of-order distance \(d\), there are also hundreds of insertions with \(d\) in the tens of thousands.

Figure 15. Histograms of (left) Citi Bike instantaneous window sizes \(n\), (middle) eviction bulk sizes \(m\) for a time-based window of 1 day, and (right) the out-of-order distance \(d\), i.e., the number of records skipped over by insertions.

**Figure 16** shows the throughput results for the Citi Bike dataset on a run that involves bulk evicts with varying \(m\) and single inserts with varying \(d\). Since amta, twostacks_lite, and daba require in-order data, we cannot use them here. In theory, we expect the bulk operations to give b_fiba an advantage over nb_fiba. In practice, we find that this is indeed the case for real-world data.

Figure 16. Throughput, Citi Bike, varying window size \(n\), bulk size \(m\), and ooo distance \(d\) from real data.

### Java and Apache Flink

To experiment with our algorithm in the context of an end-to-end system, we reimplemented it in Java inside Apache Flink 1.17 (Fink et al., 2018). We ran experiments that repeatedly perform several single inserts followed by a bulk evict and query. Using a window of size \(n=2^{22}\approx 4\)M, the FiBA algorithms perform as expected but the Flink baseline was prohibitively slow, so we report a comparison at \(n=8,192\) instead. At this size, the trends are already clear. Figure 17 shows that even without our new bulk eviction support, FiBA is much faster than Flink. Using bulk evictions further widens that gap. As expected, throughput improves with increasing bulk size \(m\), consistent with our findings with the C++ benchmarks.

Figure 17. Throughput, Flink, bulk evict only, window size \(n=8,192\), varying bulk size \(m\), in-order data \(d=0\).

## 8. Conclusion

This paper describes algorithms for bulk insertions and evictions for incremental sliding-window aggregation. Such bulk operations are necessary for real-world data streams, which tend to be bursty. Furthermore, real-world data streams tend to have out-of-order data. Hence, besides handling bulk operations, our algorithms also handle that case. Our algorithms are carefully crafted to yield the same algorithmic complexity as the best prior work for the non-bulk case while substantially improving over it for the bulk case.
2303.15581
Topological superconductors from a materials perspective
Topological superconductors (TSCs) have garnered significant research and industry attention in the past two decades. By hosting Majorana bound states which can be used as qubits that are robust against local perturbations, TSCs offer a promising platform toward (non-universal) topological quantum computation. However, there has been a scarcity of TSC candidates, and the experimental signatures that identify a TSC are often elusive. In this perspective, after a short review of the TSC basics and theories, we provide an overview of the TSC materials candidates, including natural compounds and synthetic material systems. We further introduce various experimental techniques to probe TSC, focusing on how a system is identified as a TSC candidate, and why a conclusive answer is often challenging to draw. We conclude by calling for new experimental signatures and stronger computational support to accelerate the search for new TSC candidates.
Manasi Mandal, Nathan C. Drucker, Phum Siriviboon, Thanh Nguyen, Tiya Boonkird, Tej Nath Lamichhane, Ryotaro Okabe, Abhijatmedhi Chotrattanapituk, Mingda Li
2023-03-27T20:21:16Z
http://arxiv.org/abs/2303.15581v1
# Topological superconductors from a materials perspective

###### Abstract

Topological superconductors (TSCs) have garnered significant research and industry attention in the past two decades. By hosting Majorana bound states which can be used as qubits that are robust against local perturbations, TSCs offer a promising platform toward (non-universal) topological quantum computation. However, there has been a scarcity of TSC candidates, and the experimental signatures that identify a TSC are often elusive. In this perspective, after a short review of the TSC basics and theories, we provide an overview of the TSC materials candidates, including natural compounds and synthetic material systems. We further introduce various experimental techniques to probe TSC, focusing on how a system is identified as a TSC candidate, and why a conclusive answer is often challenging to draw. We conclude by calling for new experimental signatures and stronger computational support to accelerate the search for new TSC candidates.

**Keywords:** Topological superconductors, Majorana fermions, Majorana zero modes, Topological quantum computation

###### Contents

* 1 Introduction
* 2 Theory
  * 2.1 Superconductivity Crash Course
  * 2.2 Majorana Zero Modes
* 3 Candidate Materials
  * 3.1 Natural Candidates
  * 3.2 Artificial Candidates
* 4 Experimental Signatures
  * 4.1 Tunneling Spectroscopy
  * 4.2 Photoemission Spectroscopy
  * 4.3 Transport Measurements
  * 4.4 Muon Spin Spectroscopy
* 5 Future Prospective

## 1 Introduction

The field of topological materials has garnered significant research attention over the past decade [1, 2, 3, 4, 5]. Setting aside the kaleidoscope of fundamental new phenomena emerging from various topological phases, many promising applications have been demonstrated at the lab scale. These include electronic states with no energy dissipation, such as the quantum spin Hall effect [6, 7] and the quantum anomalous Hall effect [8, 9], current-induced switching for spintronic applications [10, 11], topological dipoles for next-generation photovoltaics and photodetectors [12, 13], high-efficiency thermoelectrics [14, 15], and catalysis for water splitting and other energy conversion and storage processes [16], among others. The recognition of topological materials, or topological phases of matter (since many phases have not been materialized yet), with the 2016 Nobel Prize in Physics was a major milestone [17, 18].

Topological superconductors (TSCs) are one class of topological materials and can host Majorana bound states (MBSs, if focusing on the spatial feature), aka Majorana zero modes (MZMs, if focusing on the energy), which can be used as qubits for topological quantum computation. We provide a step-by-step guide to locate the TSC family within the rich families of topological materials (Fig. 1a). First, the (gapped) topological phases can be classified by symmetry constraint: systems hosting topology without any symmetry constraint are intrinsic topological orders with long-range entanglement, while systems with a symmetry constraint are short-range entangled [20]. Second, the short-range-entangled states can be classified as either topologically trivial or topological under the protection of a symmetry, the latter termed symmetry-protected topological (SPT) phases. Third, the SPT phases can be further divided into non-interacting and interacting ones, depending on the strength of electron correlations.
The non-interacting SPT phases include the popular topological insulators (TIs) protected by time-reversal symmetry (TRS) and topological crystalline insulators (TCIs) protected by crystal symmetry, among others. For interacting SPT phases, the effect of strong electron correlation enriches the phase diagrams [21, 22, 23]. It is noteworthy that although Weyl semimetals are non-interacting fermions with topological nodes, they are gapless topological phases that do not belong to the SPT phases discussed here. The gap between the ground state and the excited state allows well-defined collective excitations of ground states and is essential for the robustness against small perturbations. Finally, a superconductor certainly contains strong electron-electron interaction, yet at a mean-field theory level, where the electron interaction is approximated as an effective potential (to be discussed in Section 2), a TSC can be classified within the non-interacting SPT family, similar to a TI. Fig. 1a provides an overview of this hierarchical structure.

Figure 1: **Classifications of gapped topological phases of matter and the TSC topological classes.** (**a**) The TSC family in a zoo of topological materials families. At a mean-field level, TSC can be considered as one type of non-interacting SPT phase. (**b**) Four subclasses of BdG family topological materials with inherent particle-hole symmetry. The most common TSC is class D, and the related class DIII is a TRS-preserved version which can be considered as a direct product of two copies of class-D TSCs with opposite chirality. Subfigure (**b**) adapted from Ref. [19]. TRS: Time-reversal symmetry. PHS: Particle-hole symmetry. TI: Topological insulator. FQHE: Fractional quantum Hall effect. SLS: Sub-lattice symmetry.

Moreover, there are at least four sub-categories belonging to the superconductor and superfluid "Bogoliubov de Gennes (BdG)" symmetry family (Fig. 1b) [19], though it would be a misnomer to consider only the BdG family as superconductors: some TI classes can support fully gapped quasiparticles that also describe a superconductor, and, vice versa, the BdG family can also represent non-conventional TIs. Within the BdG family, class D, which breaks TRS, can host the 1D \(p\)-wave TSC and the 2D \(p+ip\) TSC, and is often considered synonymous with TSC, although many other TSC families exist (such as class DIII, which contains two copies of chiral \(p\)-wave SC with opposite chirality in 1D, a direct product of \(p\pm ip\)-wave SCs in 2D, and the \({}^{3}\)He-B phase in 3D). We refer to References [19, 24] to clarify the complexities.

Majorana fermions can emerge in a TSC (but not in a conventional metal) because a Majorana fermion is its own antiparticle. In conventional metals and insulators, a quasiparticle (such as an electron or a hole) carries an electrical charge, with its antiparticle having the opposite charge. Therefore, Majorana fermions are unlikely to emerge as quasiparticles in metals and insulators. A superconductor is therefore a better platform to search for Majorana fermions because of particle-hole symmetry. However, conventional \(s\)-wave superconductors are also unlikely to host Majorana fermions since the quasiparticle, although formed as a superposition of an electron and a hole, contains opposite spins between the electron and hole components, and thus cannot be its own antiparticle: an anti-quasiparticle will have electron and hole components with opposite spin states.
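To make this spin argument explicit, consider the textbook Bogoliubov quasiparticle of an \(s\)-wave BCS superconductor (a standard expression included here for illustration; \(u_{k}\) and \(v_{k}\) are the usual coherence factors, and the sign convention is immaterial to the argument):

\[\gamma^{\dagger}_{k\uparrow}=u_{k}c^{\dagger}_{k\uparrow}+v_{k}c_{-k\downarrow},\qquad\gamma_{k\uparrow}=u^{*}_{k}c_{k\uparrow}+v^{*}_{k}c^{\dagger}_{-k\downarrow}.\]

Since \(\gamma^{\dagger}_{k\uparrow}\) mixes a spin-up electron with a spin-down hole, \(\gamma^{\dagger}_{k\uparrow}\neq\gamma_{k\uparrow}\), and the quasiparticle cannot be its own antiparticle; a Majorana mode instead requires particle and hole components of the same, effectively spinless, fermionic state.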
This leaves the chiral \(p\)-wave, odd-pairing superconductors (in 1D) and \(p+ip\)-pairing superconductors (in 2D) as natural choices to search for Majorana fermions. Even so, the 1D \(p\)-wave and 2D \(p+ip\) TSCs, which can host Majorana fermions, are only one sub-class of the TSCs, and, vice versa, there are other classes that can host Majorana fermions (e.g., the DIII class in 3D can host 2D surface Majorana fermion modes). Moreover, it is a misnomer to state that quantum computation is based on Majorana fermions, since Majorana fermions are still fermions satisfying conventional fermionic statistics. To achieve quantum computing, a Majorana fermion normally needs to be bound to a defect (hence called MBS) with zero energy (hence also called MZM), which can carry nontrivial non-Abelian statistics. This review uses MBS and MZM interchangeably. To clarify, a Majorana fermion bound to a vortex core in a 2D \(p+ip\) superconductor is not the only way to form the zero modes; there are other approaches to form localized zero modes bound to topological defects [24]. Here, an MZM is a type of anyon termed an Ising anyon, which obeys non-Abelian statistics but by itself is not sufficient to carry out universal quantum computations [3, 25]. Fig. 2 explains the details of how MBSs are formed in a TSC and how quantum computing is achieved.

Figure 2: **Schematic illustration of TSCs and Majorana-based topological quantum computing.** (**a**) 1D topological superconductor (Kitaev chain), where each conventional fermion is the combination of two Majorana fermions. When the "intra-site" pairing between the two Majorana fermions is stronger than the "inter-site" pairing (upper), a topologically trivial SC is obtained. When the inter-site interaction is stronger (lower), a 1D TSC is obtained, with two unpaired Majorana fermions (red spheres) with zero energy at the two ends. (**b**) 2D \(p+ip\) superconductor. (top) Just like the 1D TSC can have 0D boundary modes at its two ends, the 2D TSC has 1D chiral Majorana edge modes. (middle) If we pierce one hole to create a region without superconductivity, half-integer excitation spectra are created. (bottom) If we add one magnetic flux quantum \(\Phi\) to the hole to create a superconducting vortex, the energy spectra become integers and a Majorana zero mode is generated. (**c**) Scheme for topological quantum computation. With \(2N\) superconducting vortices, the ground states will have a \(2^{N}\) degeneracy. The unitary transform U, which can be used as a quantum gate, can be realized by exchanging different pairs of Majorana zero modes within the ground states.

## 2 Theory

### Superconductivity Crash Course

A simple model for a superconductor can be described by a single-band Hamiltonian with a two-body mean-field interaction [26]
\[H=\sum_{k,\alpha,\beta}\epsilon_{\alpha,\beta}(k)c^{\dagger}_{k\alpha}c_{k\beta}+\sum_{k,\alpha,\beta}\Big{(}\Delta_{\alpha,\beta}(k)c^{\dagger}_{-k\alpha}c^{\dagger}_{k\beta}+h.c.\Big{)} \tag{1}\]
where \(c^{\dagger}_{k\alpha}\), \(c_{k\alpha}\) are the creation/annihilation operators of a particle at crystal momentum \(k\) and spin \(\alpha\), and \(\epsilon_{\alpha,\beta}(k)\) is measured from the chemical potential. The mean-field potential \(\Delta_{\alpha\beta}(k)\) acts as an attractive potential that binds electrons together into the Cooper pair state and also serves as the order parameter. Due to the fermionic statistics of electrons, the potential must obey the following constraint:
\[\Delta_{\alpha,\beta}(k)=-\Delta_{\beta,\alpha}(-k). \tag{2}\]
In other words, the pairing potential must be anti-symmetric in momentum space (triplet pairing) or spin space (singlet pairing). To understand the bandstructure picture of the superconductor, it is helpful to introduce the Bogoliubov-de Gennes (BdG) transformation to the basis \(\begin{pmatrix}c^{\dagger}_{k}&c_{-k}\end{pmatrix}\), in which we can rewrite the Hamiltonian as
\[H_{BdG}=\begin{pmatrix}\epsilon(k)&\Delta(k)\\ \Delta^{\dagger}(k)&-\epsilon(-k)\end{pmatrix} \tag{3}\]
and we can see that \(\epsilon(k)\) and \(-\epsilon(-k)\) are related by a particle-hole transformation. By treating \(\Delta(k)\) perturbatively, we can consider the bandstructure as \(\epsilon(k),-\epsilon(-k)\), which gap out at the band crossing point \(q\) with the energy gap \(\pm|\Delta(q)|\) (Fig. 3). It is worth noting that the quasiparticle excitations in this system, the BdG quasiparticles, are superpositions of the particle and hole states discussed in the Section above.
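As a concrete check of Eq. (3) (a standard textbook step added here for clarity), assume a single band with \(\epsilon(k)=\epsilon(-k)\); diagonalizing \(H_{BdG}\) then yields the BdG quasiparticle spectrum

\[E_{\pm}(k)=\pm\sqrt{\epsilon(k)^{2}+|\Delta(k)|^{2}},\]

which is fully gapped wherever \(\Delta(k)\neq 0\) on the Fermi surface and acquires nodes only where the gap function vanishes, consistent with Fig. 3.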
### Majorana Zero Modes

Here, we consider Kitaev's model [27] to show the emergence of MZMs in the 1D \(p\)-wave superconductor:
\[H=\sum_{j}-t(c^{\dagger}_{j}c_{j+1}+h.c.)-\mu c^{\dagger}_{j}c_{j}+|\Delta|(c^{\dagger}_{j}c^{\dagger}_{j+1}+h.c.) \tag{4}\]
where we note that the superconducting pairing term is momentum-dependent due to its cross-site pairing. In the Bloch basis, the Hamiltonian can be written as
\[H_{Bloch}(k)=\begin{pmatrix}-2t\cos(k)-\mu&2i|\Delta|\sin(k)\\ -2i|\Delta|\sin(k)&2t\cos(k)+\mu\end{pmatrix} \tag{5}\]
with energy spectrum \(E=\pm\sqrt{(2t\cos(k)+\mu)^{2}+4|\Delta|^{2}\sin^{2}(k)}\). We point out that at \(\mu=-2t\) and \(2t\), the spectrum becomes gapless (the gap closes at \(k=0\) and \(k=\pi\), respectively), allowing a topological transition of the band. In the limiting case \(|\mu|\rightarrow\infty\), the eigenstates of the system become either localized on the lattice sites (for \(\mu\rightarrow-\infty\)) or completely empty as vacuum (for \(\mu\rightarrow+\infty\)), i.e., the system is a trivial atomic insulator (Fig. 2a, upper). On the contrary, the scenario where \(-2t<\mu<2t\) is indeed topological (Fig. 2a, lower). Due to the bulk-boundary correspondence, the boundary between such a topological state and a topologically trivial state hosts boundary modes to reconcile the topological number discontinuity [28].

To reveal the features of the MBS, we can define the Majorana operators \(a_{j}\) from the fermion operator \(c_{j}\):
\[c_{j}=\frac{1}{2}(a_{2j-1}+a_{2j}) \tag{6}\]
\[c_{j}^{\dagger}=\frac{1}{2}(a_{2j-1}-a_{2j}) \tag{7}\]
We notice that the self-conjugating property \(a_{j}=a_{j}^{\dagger}\) and the anti-commutation relation \(\{a_{i},a_{j}\}=2\delta_{ij}\) of the Majorana operators are sufficient to preserve the fermionic anti-commutation relations \(\{c_{i}^{\dagger},c_{j}\}=\delta_{ij}\) and \(\{c_{i}^{\dagger},c_{j}^{\dagger}\}=\{c_{i},c_{j}\}=0\). The Hamiltonian can then be written as
\[H=\frac{i}{2}\sum_{j}-\mu a_{2j-1}a_{2j}+(t+|\Delta|)a_{2j}a_{2j+1}+(-t+|\Delta|)a_{2j-1}a_{2j+2}. \tag{8}\]

Figure 3: **An example of the superconductor bandstructure.** The gray lines indicate the energy bands \(\epsilon(k),-\epsilon(-k)\), while the red and blue lines are the bandstructure after turning on the interaction potential. Note that since \(\Delta\) is momentum-dependent, the perturbation of the band crossing can result in either a nodal or a gapped structure.

This leads back to the schematics in Fig. 2a, where \(\mu\) acts as the "intra-site" interaction, i.e., the hopping term between the two Majorana fermions at the same site, and \(t+|\Delta|\) and \(-t+|\Delta|\) correspond to the hopping across neighboring sites, i.e., "inter-site" hopping. In the topologically trivial state, the Majorana fermions pair up on the same site and act as a normal fermion (Fig. 2a, upper figure). In the topologically non-trivial case, e.g., \(\mu=0\), \(t=|\Delta|>0\), the Majorana pairing occurs on the neighboring sites of the lattice, resulting in leftover Majorana states at the two boundaries (red spheres in Fig. 2a, lower figure). To demonstrate computation with such states, one can consider a system of superconducting islands with tunable chemical potentials and "valves" connecting the superconducting segments [29]. Note that due to the non-Abelian statistics of the MZMs [30], the permutation of \(2N\) MZMs forms a braid group, which could be used as a basis for quantum computing that is robust against local perturbations. The topological robustness arises because information can be stored nonlocally (at the two ends of the 1D \(p\)-wave TSC, or in two MZMs spatially separated from each other in the 2D \(p+ip\) TSC). Thermal fluctuations at finite temperature could still pose a challenge, since the system could be thermally excited to an excited state (Fig. 2c).
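The end modes of Eq. (4) can be verified numerically. The following is a minimal sketch (assuming the Eigen linear-algebra library is available; all parameter values are illustrative) that builds the real BdG matrix of the open Kitaev chain in the \((c_{1}\ldots c_{N},c^{\dagger}_{1}\ldots c^{\dagger}_{N})\) basis and prints the two eigenvalues closest to zero:

```cpp
#include <Eigen/Dense>
#include <iostream>

int main() {
  const int N = 40;       // lattice sites
  const double t = 1.0;   // hopping
  const double D = 1.0;   // pairing amplitude |Delta|
  const double mu = 0.5;  // chemical potential, inside the |mu| < 2t phase
  // BdG Hamiltonian in the (c_1..c_N, c_1^dag..c_N^dag) basis: 2N x 2N.
  Eigen::MatrixXd H = Eigen::MatrixXd::Zero(2 * N, 2 * N);
  for (int j = 0; j < N; ++j) {
    H(j, j) = -mu;         // particle block
    H(N + j, N + j) = mu;  // hole block
  }
  for (int j = 0; j + 1 < N; ++j) {
    H(j, j + 1) = H(j + 1, j) = -t;                 // hopping, particle block
    H(N + j, N + j + 1) = H(N + j + 1, N + j) = t;  // hopping, hole block
    H(j, N + j + 1) = H(N + j + 1, j) = D;   // pairing c_j^dag c_{j+1}^dag
    H(j + 1, N + j) = H(N + j, j + 1) = -D;  // antisymmetric partner
  }
  Eigen::SelfAdjointEigenSolver<Eigen::MatrixXd> es(H);
  // The spectrum is symmetric about zero; the two eigenvalues nearest zero
  // correspond to the Majorana end modes.
  std::cout << "eigenvalues nearest zero: " << es.eigenvalues()(N - 1) << ", "
            << es.eigenvalues()(N) << "\n";
}
```

For \(|\mu|<2t\), the two middle eigenvalues are exponentially small in \(N\) (the Majorana end modes), while for \(|\mu|>2t\) they merge into the gapped bulk spectrum, matching the phase boundary at \(\mu=\pm 2t\) derived above.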
## 3 Candidate Materials

The experimental realization of TSC is currently limited to a relatively small pool of candidate materials. These materials can be broadly categorized into two groups: natural candidates, which may host TSC on their own, and artificial candidates, which require 2D heterostructures or 1D wires with multiple constituent materials.

### Natural Candidates

In the search for TSCs, spin-triplet pairing superconductors, which can host MBSs, have garnered significant attention (DIII family in Fig. 1b). A handful of bulk TSC candidates have been reported, which we summarize below. While we have attempted to be as comprehensive as possible, our list may not be fully exhaustive; it should nevertheless cover a substantial portion of the TSC candidates known to date.

**Sr\({}_{2}\)RuO\({}_{4}\):** Superconductivity in strontium ruthenate was discovered in 1994 and continues to draw significant research interest because of its mysterious superconducting pairing symmetry. Chiral \(p\)-wave pairing was long believed to be realized in the A-phase of this material, favored by several early experiments, such as the muon spin rotation and relaxation (\(\mu\)SR) [31] and Knight shift [32] measurements taken in 1998. In contrast, later reports by Knight shift [33], NMR [34], and polarized neutron scattering experiments [35] conducted in 2019 have raised questions regarding the presence of the chiral \(p\)-wave, or even spin-triplet pairing, in this material. The intrinsic Josephson junction [36] and the in-plane-magnetic-field-stabilized half-quantum vortex [37] experiments show that the pairing symmetry in Sr\({}_{2}\)RuO\({}_{4}\) is a triplet with its spin polarization axis aligned in an in-plane direction. A recent report of a stress-induced splitting between the onset temperatures of superconductivity and TRS breaking, obtained by zero-field \(\mu\)SR measurements, claimed qualitative agreement with the expectations for a chiral order parameter in this mysterious compound [38].
Liu _et al._ proposed the following scheme: the order parameter of Sr\({}_{2}\)RuO\({}_{4}\) is not chiral \(p\)-wave but instead one of the helical states that do not break TRS overall, yet in which each spin component does break it individually [39]. More experimental measurements and theoretical work are needed to unambiguously establish the precise pairing symmetry in this TSC candidate.

**UPt\({}_{3}\):** Another leading candidate for bulk TSC is UPt\({}_{3}\), with multiple superconducting phases [40]. A TRS-breaking superconducting state was reported many years ago by \(\mu\)SR [41], with further confirmation by a Kerr effect study [42]. The temperature dependence of the upper critical field is reported to exhibit a strong anisotropy [43], with a possible strong spin-orbit interaction locking the direction of zero spin projection [44]. These results were contradictory to the NMR results [45] that suggest an equal-spin pairing state with the spin angular momentum directed along the magnetic field, which is possible only in the presence of little or no spin-orbit coupling. Another study by a polarized neutron scattering probe predicts odd-parity, spin-triplet superconductivity in UPt\({}_{3}\) [46]. The gap symmetry in UPt\({}_{3}\) was investigated via thermal conductivity tensors, where the field-angle-resolved thermal conductivity shows spontaneous twofold symmetry breaking in the gap function for the high-field C-phase, indicating that the pairing symmetry belongs to the \(f\)-wave category. The theoretically proposed chiral \(f\)-wave state is compatible with most of the experimental results reported until now [47, 48]. However, some fundamental issues, such as the existence of tetra-critical points, have not been explained within the scenario of chiral states [48]. First-principles analysis predicted the microscopic superconducting gap structure to be an E\({}_{2u}\) state with in-plane twofold vertical line nodes on small Fermi surfaces and point nodes with linear dispersion on a large Fermi surface [49]. A recent report by small-angle neutron scattering evidenced bulk broken TRS in the heavy-fermion superconductor UPt\({}_{3}\), with anisotropy of the order parameter and current density near the vortex cores [50].

**URu\({}_{2}\)Si\({}_{2}\):** The pairing mechanism of the unconventional superconductivity in the heavy-fermion compound URu\({}_{2}\)Si\({}_{2}\) has been a longstanding mystery, despite being intensively studied by several experimental and theoretical groups. Polar Kerr effect [51], magnetic torque [52], and \(\mu\)SR [53] measurements have provided evidence for a bulk TRS-broken superconducting state. In addition, the observation of a colossal Nernst signal attributed to superconducting fluctuations has been reported, where the results were interpreted as chiral or Berry-phase fluctuations associated with the broken TRS of the superconducting order parameter [54, 55]. Furthermore, field-orientation-dependent specific heat measurements and theoretical analyses described the gap symmetry of URu\({}_{2}\)Si\({}_{2}\) as a chiral \(d\)-wave type [56].

**SrPtAs:** A spontaneous TRS-breaking state in the hexagonal honeycomb-structure superconductor SrPtAs was reported based on \(\mu\)SR experiments, suggesting possible chiral \(d\)-wave states [57]. Recent nuclear magnetic resonance measurements showed multigap superconductivity [58] and a suppressed coherence peak that supports a chiral \(d\)-wave order parameter [59].
According to a theoretical study [60], SrPtAs is a superconductor with protected Majorana-Weyl nodes in the bulk and (Majorana) Fermi arcs on the surface, along with other topological Majorana surface states. However, further experimental evidence is needed to confirm these predictions.

**UTe\({}_{2}\):** The recently discovered heavy-fermion superconductor UTe\({}_{2}\) is a prime candidate for a topological chiral spin-triplet superconductor. There are several reports on spin-triplet pairing [61, 62, 63] and a possible chiral state [64, 65], but the symmetry and nodal structure of the order parameter remain controversial [66]. The anisotropy of low-energy quasiparticle excitations indicates that the order parameter has multiple components in a complex chiral form, which provides hints of the topological properties of UTe\({}_{2}\) [67]. More intriguingly, optical Kerr effect [64] and microwave surface impedance measurements [68] suggest a TRS-broken superconducting state. A scanning tunneling microscopy (STM) study reveals signatures of chiral in-gap states, suggesting UTe\({}_{2}\) is a strong candidate for a chiral-triplet TSC [63].

**Transition Metal Dichalcogenides:** The superconductivity of 2M-WS\({}_{2}\), a transition metal dichalcogenide (TMD), was recently confirmed by transport measurements [69] and scanning tunneling microscopy/spectroscopy (STM/STS) investigations. Zero-energy peaks in the STS spectra were observed in magnetic vortex cores [70], suggesting the possible existence of MZMs. A further angle-resolved photoemission spectroscopy (ARPES) study established the TSC nature of 2M-WS\({}_{2}\) [71]. Chiral superconductivity is reported in another TMD, 4Hb-TaS\({}_{2}\) [31], with a strong signature of zero-bias states in vortex cores [72]. Other systems, such as MoTe\({}_{2}\) [73, 74, 75] and WTe\({}_{2}\) [76] (parent and doped), are also of significant research interest in exploring possible topological superconductivity. However, the direct observation of topological surface states (TSSs) and of the superconducting gap of the TSSs is yet to be achieved.

**LaPt\({}_{3}\)P:** The weakly correlated pnictide compound LaPt\({}_{3}\)P, with a centrosymmetric crystal structure, has been reported to show a TRS-broken superconducting state and low-temperature linear behavior of the superfluid density, indicating line nodes in the order parameter. LaPt\({}_{3}\)P was predicted to be a chiral \(d\)-wave singlet superconductor using symmetry analysis, first-principles bandstructure calculation, and mean-field theory [77].

**Doped Topological Insulators:** One approach to realizing TSC is through doping bulk topological insulators (TIs). This approach is attractive because it has the potential to realize the coexistence of fully gapped bulk superconductivity and topological surface states. In addition, the strong spin-orbit coupling (SOC) of topological materials may lead to unconventional pairing mechanisms [78]. Early experimental efforts on this front focused on electrochemically intercalating Cu into Bi\({}_{2}\)Se\({}_{3}\) [79, 80]. Here, superconducting transition temperatures of \(T_{c}\sim 3.8\) K were observed for Cu doping between 10% and 30%. Moreover, there have been observations of a zero-bias conduction peak on the surface of Cu\({}_{0.3}\)Bi\({}_{2}\)Se\({}_{3}\) through point contact spectroscopy [81], encouraging the possibility of TSC in this system.
However, follow-up STS measurements found no evidence of a zero-bias conductance peak intrinsic to the material and instead found evidence for conventional BCS \(s\)-wave superconducting pairing [82]. The search for TSCs in this system has also been hampered by the low percentage of bulk material that actually shows superconductivity. To overcome this challenge, there have been efforts to intercalate Sr and Ti instead of Cu, which has resulted in larger bulk superconducting fractions of \(\sim\)91% along with transport evidence for topological surface states [83, 84]. Along these lines, there have also been investigations of superconductivity resulting from In doping of the TCI SnTe [85, 86, 87]. Nevertheless, the pairing mechanism in this class of materials remains under debate.

**Non-centrosymmetric Superconductors:** Another class of materials that may host TSC is the non-centrosymmetric superconductors. Due to their broken inversion symmetry, these materials are allowed to have antisymmetric spin-orbit coupling (ASOC), which can mix singlet and triplet superconductivity at sufficiently large strength. This is a large family of materials, with thorough reviews reported elsewhere [88]. Here, we highlight some notable examples. A central challenge in this class of materials is unambiguously determining the degree of singlet-triplet mixing. For example, CePt\({}_{3}\)Si [89] is a heavy-fermion superconductor at ambient pressure with Rashba SOC. It exhibits antiferromagnetism below \(T_{N}=2.2\) K and superconductivity below \(T_{c}=0.7\) K [90], with thermal transport measurements indicating line nodes in the pairing gap [91]. However, it is possible that the line node in this system arises due to the coexistence of antiferromagnetic and SC orders [92], highlighting the difficulty of extracting the pairing structure from experiments. More recently, superconductivity has been found in topological, inversion-symmetry-breaking half-Heusler compounds RPdBi (R = Ho, Er, Tm, Lu) [93] and RPtBi (R = La, Lu, Y) [94, 95], which are attractive systems because of their chemical and magnetic tunability, as well as the possibility of higher-angular-momentum pairing.

**Fe-based Superconductors:** Fe-based superconductors are yet another promising avenue for realizing TSC. Prominently featured within this broad family is the FeSe\({}_{1-x}\)Te\({}_{x}\) (FTS) system. In this material, topological bands are driven by the SOC from Te inclusion. Unlike in other TSC candidates, the unconventional pairing mechanism in FTS [96] is not as hotly debated, and there is evidence for topological surface states coexisting with a hard SC gap [97, 98, 99, 100, 101]. There have been significant experimental efforts to realize MZMs within vortex cores on the surface of FTS [102, 103, 104, 105] and (Li\({}_{0.84}\)Fe\({}_{0.16}\))OHFeSe [106]. Nevertheless, it remains a challenge to determine whether the hallmark signature of MZMs, the zero-bias peak (ZBP), is related to trivial states instead of topological states [107]. More recently, there have been theoretical predictions [98] and experimental evidence [108] for helical hinge MZMs in FTS arising from the topological nature and \(s_{+-}\) pairing mechanism of this material.

### Artificial Candidates

Artificial structures such as engineered heterostructures and nanowire hybrids are currently attracting significant attention as an alternative to intrinsic TSCs. Several recipes have been proposed for the possible realization of TSCs and the detection of MZMs.
Here, we briefly summarize predicted systems and experimental materials of the past decade:

1. 1D TSC - hybrid superconductor-semiconductor nanowires and magnetic atom chains on superconductors
2. 2D TSC - topological insulator (or topological crystalline insulator)/superconductor heterostructures

**1D TSC:** It was proposed that MZMs can emerge at the ends of 1D TSCs. Several studies report possible 1D TSCs, mainly involving hybrid superconductor-semiconductor nanowire devices in the presence of a magnetic field applied along the nanowire axis [109, 110, 111, 112]. MZMs are expected to arise at each end of the wire in such a system. An aluminum superconductor in proximity to an InAs nanowire with strong SOC and Zeeman splitting is reported to show a distinct zero-bias conductance peak and its splitting in energy with a small magnetic field applied along the wire [109]. Similar predictions were given by another group [111, 112]. Another hybrid structure, indium antimonide nanowires contacted with superconducting niobium-titanium nitride, shows bound states at zero bias with the variation of magnetic fields and gate voltages, supporting the hypothesis of Majorana fermions in nanowires coupled to superconductors [110]. Nadj-Perge _et al._ created an alternative hybrid system by depositing iron atoms onto the surface of superconducting lead, where enhanced conductance at the ends of these chains at zero energy was observed by STM [113]. Here, the proximity-induced superconductivity was expected to be topological due to the odd number of band crossings at the Fermi level.

**2D TSC:** Research on 2D TSCs attracted increased attention after a prediction made by Fu and Kane based on an \(s\)-wave SC and TI heterostructure. The helical pairing of the Dirac fermions was realized in a Bi\({}_{2}\)Se\({}_{3}\)/Nb heterostructure [114]. Other heterostructures such as Bi\({}_{2}\)Se\({}_{3}\)/NbSe\({}_{2}\) and Bi\({}_{2}\)Te\({}_{3}\)/NbSe\({}_{2}\) were also predicted to be possible TSCs [115, 116, 117, 118]. However, the proximity-induced pairing potential is very small for the above systems because of the low pairing potential of the \(s\)-wave superconductor. Later, unconventional superconductors were used instead of \(s\)-wave superconductors in several systems [119, 120, 121], such as Bi\({}_{2}\)Te\({}_{3}\)/FeTe [120] and Bi\({}_{2}\)Te\({}_{3}\)/FeTe\({}_{0.55}\)Se\({}_{0.45}\) [121]; the question of whether the induced superconductivity can be regarded as chiral \(p\)-wave pairing is still under debate. Proximity-induced superconductivity was also investigated in atomically flat lateral and vertical heterostructures of the TCI Sn\({}_{1-x}\)Pb\({}_{x}\)Te and superconducting Pb [122, 123]. High-resolution STM measurements make it a promising candidate for TSC. A recent report on the moiré pattern between a van der Waals superconductor (NbSe\({}_{2}\)) and a monolayer ferromagnet (CrBr\({}_{3}\)) proposed periodic potential modulations arising from the moiré pattern as a powerful way to overcome the conventional constraints for realizing and controlling topological superconductivity [124, 125]. Another system, gated monolayer WTe\({}_{2}\), is reported as a higher-order topological superconducting candidate with inversion-protected Majorana corner modes without the proximity effect [126]. Further studies using STM or transport measurements are required to probe the Majorana corner modes. Natural and artificial TSC candidates are summarized in Table 1.
\begin{table}
\begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|}
\hline Possible TSC & Features & Drawback \\ \hline \hline
Sr\({}_{2}\)RuO\({}_{4}\) [31, 32, 33, 34, 35, 36, 37, 38, 39] & Chiral superconductor & Pairing symmetry is still controversial \\ \hline
UPt\({}_{3}\) [40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50] & 1. Time-reversal symmetry-breaking (TRSB) state in bulk 2. Proposed as chiral \(f\)-wave SC & Fundamental issues have not been explained within the scenario of chiral states \\ \hline
URu\({}_{2}\)Si\({}_{2}\) [51, 52, 53, 54, 55, 56] & Bulk TRSB & Pairing mechanism is still a mystery \\ \hline
SrPtAs [57, 58, 59, 60] & 1. Multigap superconductor 2. Possible chiral \(d\)-wave states & Lack of experimental evidence \\ \hline
UTe\({}_{2}\) [61, 62, 63, 64, 65, 66, 67, 68] & 1. TRSB 2. Complex chiral form in order parameter 3. Prime candidate for chiral spin-triplet superconductor & Symmetry and nodal structure of the order parameter remain controversial \\ \hline
2M-WS\({}_{2}\) [69, 70, 71] & 1. Zero-energy peaks in the STS spectra in magnetic vortex cores 2. Topological surface states acquire a nodeless superconducting gap & No direct evidence \\ \hline
4Hb-TaS\({}_{2}\) [31, 72] & Strong signature of zero-bias states in vortex cores & Requires further investigation \\ \hline
MoTe\({}_{2}\) (parent and doped) [73, 74, 75] & 1. Type-II Weyl semimetal 2. Two-gap \(s\)-wave symmetry 3. Suggested topologically non-trivial \(s_{+-}\) state 4. MoTe\({}_{2-x}\)S\({}_{x}\) two-band \(s\)-wave bulk superconductor & No direct observation of topological surface states (TSSs) and the superconducting gap of the TSSs \\ \hline
LaPt\({}_{3}\)P [77] & 1. TRSB 2. Low-temperature linear behavior in the superfluid density 3. Predicted as chiral \(d\)-wave SC & No direct evidence \\ \hline
Cu\({}_{x}\)Bi\({}_{2}\)Se\({}_{3}\) [79, 80, 81] & Zero-bias conduction peak by point contact spectroscopy & 1. Low superconducting volume fraction 2. STM measurements found no evidence of a zero-bias conductance peak intrinsic to the material 3. Evidence for conventional BCS \(s\)-wave superconducting pairing \\ \hline
CePt\({}_{3}\)Si [89, 90, 91, 92] & Line nodes in the pairing gap & Line node may arise due to the coexistence of AFM and SC orders \\ \hline
Fe-based superconductors [96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108] & Topological surface states coexisting with a hard superconducting gap & Challenge to determine whether the hallmark signature of MZMs, the zero-bias peak (ZBP), is related to trivial states instead of topological states \\ \hline
1D TSC: Al/InAs or InSb nanowire, Fe atom chain/Pb [110, 111, 112, 113] & Distinct zero-bias conductance peak & Such hybrid systems do not provide definite proof of a Majorana state \\ \hline
Bi\({}_{2}\)Se\({}_{3}\)/Nb, Bi\({}_{2}\)Se\({}_{3}\)/NbSe\({}_{2}\) and Bi\({}_{2}\)Te\({}_{3}\)/NbSe\({}_{2}\) heterostructures [114, 115, 116, 117, 118] & Helical pairing of the Dirac fermions & Proximity-induced pairing potential is very small \\ \hline
Bi\({}_{2}\)Te\({}_{3}\)/FeTe\({}_{0.55}\)Se\({}_{0.45}\) [119, 120, 121] & & \\ \hline
\end{tabular}
\end{table}
Table 1: A summary of TSC candidates

## 4 Experimental Signatures

### Tunneling Spectroscopy

One of the crucial tools for identifying TSCs and exploring their properties is STM/STS. This powerful method is well suited to the study of TSCs because it offers high spatial and energy resolution of electron spectra and atomic topography (schematic in Fig. 4a). One of the main utilities of this technique is in investigating electronic bandstructures through quasiparticle interference (QPI). QPI measures the quasiparticle spectrum by imaging electronic standing waves through differential conductance maps. This method has been used to identify pairing symmetries [96] and topological surface states [100] in TSC candidates, especially the Fe-based superconducting materials (Fig. 4b-d). QPI can be advantageous over other bandstructure-reconstructing techniques like ARPES because it provides atomic-scale sensitivity to different surfaces and can be operated in a finite magnetic field. STM/STS is also prominently used to search for MZMs localized within vortex cores and at other defects such as step edges and chain boundaries. In Fe-based systems, STM has been extensively used to characterize zero-bias peaks at the center of superconducting vortices [102, 103, 104, 105, 106].

Figure 4: **Scanning tunneling spectroscopy for TSC studies.** (**a**) By varying the bias voltage, the differential tunneling current becomes a measure of the local density of states for electrons. (**b**) A zero-bias conductance map under 2.0 T on a sample surface of FeSe\({}_{0.45}\)Te\({}_{0.55}\), with the dI/dV spectrum measured at the center of the vortex core. (**c**) A line-cut intensity plot from the vortex shows a stable MZM across the vortex core. (**d**) An overlapping plot of dI/dV spectra under different tunnel coupling values. Figures recreated from [105]. (**e**) Zero-bias mapping of a vortex at 0.1 T with a spin-nonpolarized tip on the topological superconductor Bi\({}_{2}\)Te\({}_{3}\)/NbSe\({}_{2}\). (**f**) dI/dV away from the center of a vortex measured with a fully spin-polarized tip, where the tunneling is found to be independent of the spin polarization. Figures recreated from [115]. (**g**) STM image of a monolayer-thick CrBr\({}_{3}\) island grown on NbSe\({}_{2}\).
(**h**) Experimental dI/dV spectroscopy on the NbSe\({}_{2}\) substrate (blue), the middle of the CrBr\({}_{3}\) island (red), and at the edge of the CrBr\({}_{3}\) island (green). Figures recreated from [125].

Crucial for these investigations is the ability of the STM to identify vortex cores with atomic resolution and to measure their differential conductance within the center of the vortex at a finite magnetic field. In this way, the zero-bias conductance can be measured as a function of distance from the center, enabling a direct comparison to theoretically predicted MBS profiles (for example, Fe(Te, Se) in Fig. 4b). STM/STS will also likely play a key role in investigating Majorana braiding operations, which are the foundation for topological quantum computing schemes, by manipulating MBSs directly with the tip [104]. It is also possible to arrange magnetic atoms on the surface of conventional superconductors to directly realize the 1D Kitaev chain model, which prominently features MBSs at each end of the chain [113, 127, 128]. In these experiments, the STM tip writes ferromagnetic atoms into a chain along the surface of a superconductor. Next, it can directly probe the tunneling density of states at either end of the atomic chain and characterize the ZBPs as a function of distance from the chain boundary and tip spin polarization [107]. Beyond investigating the possibility of MZMs and TSC in vortex cores and magnetic chains on superconducting substrates, there have also been STM explorations of 1D topological edge states in 2D heterostructures (such as Fig. 4e-h). These heterostructures aim to proximitize superconductivity into topological materials, which have 1D helical edge modes [129, 130]. In one study, the higher-order topological insulator Bi was grown on superconducting Nb, and the STM was used to arrange Fe atoms in a chain along the chiral hinge mode, leading to the observation of MZMs within this chain [129]. In another study, van der Waals heterostructures consisting of the atomically thin quantum spin Hall insulator WTe\({}_{2}\) and the superconductor NbSe\({}_{2}\) were prepared, and a superconducting gap was observed along the 1D edge states of the WTe\({}_{2}\) flakes [130]. These experiments highlight the utility of STM in probing TSCs and MZMs with atomic resolution in a diverse set of experimentally realizable materials systems.

### Photoemission Spectroscopy

ARPES has served as an indispensable technique to measure phenomena related to the collective behavior of electrons and their interactions, notably over the course of different eras of superconductivity research. For example, extensive research on iron-based superconductors has been greatly aided by the refined orbital information of the electronic states and the momentum-resolved electron dynamics provided by ARPES. In that regard, significant improvements to energy and momentum resolution in ARPES in recent decades, made possible with the latest laser- and synchrotron-based light sources, have enabled the measurement of quantities such as the superconducting energy gap and bandstructures with unprecedented precision [131, 132]. More recently, precise ARPES measurements of the electronic bandstructure of topological materials have unveiled topological Dirac and Weyl band crossings in the bulk of the materials, in addition to the corresponding topological surface states. In light of these topological states, the quest for TSC evidently follows two routes, as mentioned previously, and ARPES has served as a key probe in both cases.
One method of choice, pioneered during the earlier discoveries of topological matter, is to induce superconductivity either by doping TIs or at the interface of fabricated heterostructures of TIs with superconductors via the proximity effect. In this case, driving the non-spin-degenerate surface states of the three-dimensional TI towards superconductivity would realize a spinless \(p_{x}+ip_{y}\) model with preservation of TRS, as described by the Fu-Kane model [133]. The earliest ARPES measurements of the TI Bi\({}_{2}\)Se\({}_{3}\) doped with copper revealed spin-polarized topological surface states preserved at the Fermi level in the superconducting regime [134]. This was followed by works on heterostructures of Bi\({}_{2}\)Se\({}_{3}\) thin films on NbSe\({}_{2}\) [116, 135] or on Bi\({}_{2}\)Sr\({}_{2}\)CaCu\({}_{2}\)O\({}_{8+\delta}\) [136, 137]. While the ARPES measurements on the NbSe\({}_{2}\)-substrate samples (Fig. 5a) reveal surface-state Dirac cones with an appreciable hybridization-related energy gap, there are disagreeing views on whether the proximity effect is suppressed for the samples on the cuprate superconductor substrate due to short coherence lengths, among other reasons. Follow-up ARPES studies of Dirac-cone surface states in Bi\({}_{2}\)Se\({}_{3}\) doped with Tl [138] or Sr [139], of the emergence of isotropic superconducting gaps in samples of Pb(111) grown on similar Tl-doped Bi\({}_{2}\)Se\({}_{3}\) thin films [140], and of helical Cooper pairing through measurements of the superconducting gaps in heterostructures of Bi\({}_{2}\)Se\({}_{3}\) on Nb [114] have revealed how this momentum-resolved probe can be used to establish these systems as potential platforms for two-dimensional TSC (Fig. 5c and d).

Concurrently, there are tremendous efforts to uncover materials that are fully gapped bulk superconductors and that inherently possess strong non-trivial topological surface states, the amalgamation of which may serve as a potential platform to induce topological superconductivity through the Fu-Kane paradigm. ARPES has undoubtedly participated at the forefront of the discovery of these materials, serving simultaneously as a probe of the induced superconducting gap and of the topological surface states by virtue of modern upgrades in energy and momentum resolution. To date, the strongest evidence for topological superconductivity has originated from the observation of topological spin-helical surface Dirac cones on the (001) surface of FeTe\({}_{0.55}\)Se\({}_{0.45}\) [100] (Fig. 5b). Supported by calculations of the topological order manifesting the spin-orbit-coupling-induced band inversion from Se substitution and by previous work [97], ARPES measurements reveal the superconductivity induced in the topological surface states as the system enters the superconducting state, through superconducting gaps of 1.8 meV that are isotropic in momentum. Beyond FeTe\({}_{0.55}\)Se\({}_{0.45}\), there are other candidate systems, such as binary Pd-Bi systems [141, 142], TaSe\({}_{3}\) [143], 2M-WS\({}_{2}\) (Fig. 5c) [71], Li(Fe, Co)As [98], and CaKFe\({}_{4}\)As\({}_{4}\) [144], whose topological surface states have been measured, yet these typically suffer from overlapping bulk bands with small gaps that are difficult to resolve in ARPES data in comparison to the Bi\({}_{2}\)Se\({}_{3}\)-based systems.
They remain to be validated with other probes to establish their topological states and determine whether they can serve as platforms for realizing Majorana zero modes.

### Transport Measurements

The chiral Majorana edge modes in a \(p+ip\) TSC are expected to provide direct thermal transport evidence of the presence of Majorana fermions. However, such a measurement has only been performed in a quantum spin liquid candidate, a system with long-range entanglement (Fig. 1a) that can also host Majorana edge modes [145]. Even so, other electrical transport measurements can still provide insights. Some of the transport measurements in natural TSC candidates include Shubnikov-de Haas oscillations (SdHOs) that show a non-trivial Berry phase shift. In T\({}_{d}\)-MoTe\({}_{2}\), the Landau level index plot shows a \(\pi\) Berry phase shift [73]. In another candidate, Sr\({}_{x}\)Bi\({}_{2}\)Se\({}_{3}\), SdHOs confirm the shift expected from a Dirac spectrum, giving transport evidence of surface states [83]. Magnetotransport measurements show striking SdHOs in other putative TSC candidates TaSe\({}_{3}\) [146], LuPdBi [147], and YPtBi [148]. According to Abrikosov's theory of quantum magnetoresistance, linear magnetoresistance in zero-gap band systems with a linear energy dispersion is a result of the system being in the extreme quantum limit, thereby confining all the electrons to the lowest Landau level [147, 149]. In both LuPdBi and YPtBi, the measured zero-field resistivities were fitted with a sum of theoretical metallic surface-state and semiconducting bulk components and found to agree very well, providing a signature of the existence of nontrivial surface states [147, 148].

Figure 5: **ARPES studies on TSC.** (**a**) A schematic diagram of ultrathin Bi\({}_{2}\)Se\({}_{3}\) films epitaxially grown on the (001) surface of the \(s\)-wave superconductor 2H-NbSe\({}_{2}\) (top). High-resolution ARPES dispersion map of a Bi\({}_{2}\)Se\({}_{3}\) film on NbSe\({}_{2}\), where the white circle and cross schematically show the measured direction of the spin texture on the top surface of the Bi\({}_{2}\)Se\({}_{3}\) film (bottom). Figure recreated from Ref. [135]. (**b**) Band dispersion of FeTe\({}_{0.5}\)Se\({}_{0.5}\) (top). The momentum distribution curvature plot shows the Dirac-cone-type band. The Dirac-cone-type band (blue lines) is the topological surface band, and the parabolic band (white curve) is the bulk valence band. In the low-temperature (2.4 K) data, the spectral features are narrower. The extracted bands overlap well with the curvature intensity plot, confirming the existence of the parabolic bulk band and the Dirac-cone-type surface band (bottom). Figure recreated from Ref. [100]. (**c**) Photoemission spectral intensity plots of the band dispersions in the superconducting (top left) and normal (top right) states show clear superconducting gaps from both the TSS (red dashed lines) and the bulk state (BS) (blue dashed lines) in 2M-WS\({}_{2}\). The temperature dependence of the band dispersions of the TSS and BS shows the clear superconducting gap below \(T_{c}\) (bottom). Figure recreated from Ref. [71].
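The Berry-phase assignment from SdHOs mentioned above rests on the Lifshitz-Onsager quantization condition (a standard relation, reproduced here for clarity):

\[S_{F}\,\frac{\hbar}{eB}=2\pi\Big{(}n+\frac{1}{2}-\frac{\Phi_{B}}{2\pi}\Big{)},\]

where \(S_{F}\) is the extremal Fermi-surface cross-section and \(\Phi_{B}\) is the Berry phase. A linear fit of the Landau index \(n\) versus \(1/B\) therefore has an intercept that shifts by \(1/2\) between a trivial band (\(\Phi_{B}=0\)) and a Dirac band (\(\Phi_{B}=\pi\)), which is the \(\pi\) Berry phase shift reported for T\({}_{d}\)-MoTe\({}_{2}\) and Sr\({}_{x}\)Bi\({}_{2}\)Se\({}_{3}\).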
The thermal conductivity measurements conducted at low temperatures indicate that 2M-WS\({}_{2}\) may possess either an anisotropic superconducting gap or multiple nodeless superconducting gaps, consistent with features of TSC candidates [150]. In addition to bulk transport, an alternative local probe to STS for probing TSC is quantum point contact spectroscopy (PCS). Similar to STS, PCS can be employed to acquire the transport signature of MZMs in ultrathin interfaces [151], as well as in bulk topological crystalline interfaces such as the Dirac semimetal Cd\({}_{3}\)As\({}_{2}\) [152], the Weyl semimetal TaAs [153], and the TSC candidates Sr\({}_{2}\)RuO\({}_{4}\) [154] and Au\({}_{2}\)Pb [155]. In single-crystalline samples, a needle-anvil PCS configuration is found to be convenient. In PCS, two major types of induced superconductivity are observed at the interfaces: tip-enhanced superconductivity (TESC) is observed when a superconductor is in contact with a normal-metal tip at the interface, whereas tip-induced superconductivity (TISC) is observed when a nonsuperconducting tip contacts a nonsuperconducting material at the interface [156]. PCS can also be employed to study multiband superconductivity [157].

Another alternative transport characterization technique for TSC is the Josephson junction. Josephson junctions can be used to probe the phase coherence of a superconductor. By measuring the Josephson current as a function of the applied voltage or magnetic field, one can obtain information about the superconducting properties of the material [158, 159]. The presence of an MBS may lead to a \(4\pi\)-periodic supercurrent through a Josephson junction. A systematic study of the radio-frequency response for various temperatures and frequencies conducted by de Ronde _et al._ has resulted in the observation of a \(4\pi\)-periodic contribution to the supercurrent in Josephson junctions based on BiSbTeSe\({}_{2}\) [160]. Such measurements can provide evidence for the existence of topological properties in the material, even if they do not directly confirm topological superconductivity.

### Muon Spin Spectroscopy

Muon spin spectroscopy (\(\mu\)SR) is an extremely sensitive local probe to microscopically resolve the pairing symmetry in superconductors [161, 162, 163]. In this experimental technique, 100% spin-polarized positive muons are implanted in the material and are used to detect the corresponding muon spin evolution with time at the atomic scale (Fig. 6a). The precession of the muon spin is due to the magnetic field of its local environment, similar to other magnetic resonance techniques such as nuclear magnetic resonance [164] and electron spin resonance [165]. In the mixed or vortex state of type-II superconductors, \(\mu\)SR probes the spatial distribution of local magnetic fields, which manifests itself through a relaxation of the muon polarization [161, 162]. Transverse-field \(\mu\)SR measurements reveal one of the most important superconducting parameters, the London penetration depth, which is inversely related to the density of Cooper pairs \(n_{s}\) [166]. The temperature and magnetic field variation of \(n_{s}\) directly indicates the symmetry of the superconducting gap. In the zero-field (ZF) configuration, \(\mu\)SR is a powerful tool to reveal broken TRS in the superconducting state, which has significant consequences for the symmetry of the pairing and the quasi-particle spectrum [161, 162, 163]. Because TSCs are classified with respect to TRS, as TRS-preserved TSCs (or helical superconductors) and TRS-broken TSCs (or chiral superconductors) [167], \(\mu\)SR is frequently used to identify TSC candidates.
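The link between the transverse-field \(\mu\)SR observable and the gap structure noted above can be made explicit through two textbook relations (included here for clarity, not from the original text):

\[\sigma_{sc}\propto\frac{1}{\lambda^{2}},\qquad\frac{1}{\lambda^{2}}=\frac{\mu_{0}e^{2}n_{s}}{m^{*}},\]

where \(\sigma_{sc}\) is the superconducting contribution to the Gaussian muon-spin relaxation rate, \(\lambda\) is the London penetration depth, and \(m^{*}\) is the effective mass. Measuring \(\sigma_{sc}(T)\) therefore tracks \(n_{s}(T)\), whose low-temperature behavior (exponentially flat for a full gap, power-law for a nodal gap) constrains the pairing symmetry.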
In non-centrosymmetric superconductors (such as CePt\({}_{3}\)Si [89] and Li\({}_{2}\)(Pd\({}_{1-x}\)Pt\({}_{x}\))\({}_{3}\)B [169]), broken inversion symmetry allows for Rashba and Dresselhaus SOC that lift the spin degeneracy and split the Fermi surface. In such a system, parity is ill-defined, and hence mixing of spin-singlet (\(s\)-wave) and spin-triplet (\(p\)-wave) states is allowed [170, 171]. If the \(p\)-wave gap is larger than the \(s\)-wave gap in a 2D non-centrosymmetric superconductor, then topological properties may appear [170, 171]. If such a topological state preserves TRS, helical Majorana fermions show up at the edge [169].

In a chiral superconductor, the phase of the complex superconducting gap function winds in a clockwise or anti-clockwise sense as the momentum vector \(\vec{k}\) moves about some axis on the Fermi surface [167]. The gap function breaks TRS spontaneously and is degenerate with its time-reversed partner. It is a type of topological state and carries certain signatures of its non-trivial topology. The vortex core of a chiral \(p\)-wave superconductor exhibits a single MZM for the case of spinless fermions [167]. Many materials such as Sr\({}_{2}\)RuO\({}_{4}\) (full gap, chiral \(p\)) (Fig. 6b and c) [168], SrPtAs (full gap, chiral \(d\)) (Fig. 6d) [57, 60], 4Hb-TaS\({}_{2}\) (Fig. 6e) [31], UTe\({}_{2}\) (chiral \(p\)) [61, 62], UPt\({}_{3}\) (nodal gap, chiral \(f\)) [50], and LaPt\({}_{3}\)P (chiral \(s\)) [77] are predicted to be chiral superconductors, but remain under debate due to the lack of direct evidence of an MZM. The heavy-fermion compound URu\({}_{2}\)Si\({}_{2}\) [52, 172] displays a mysterious "hidden order" phase with broken TRS, indicating a possible chiral \(d\)-wave state. Other interesting systems such as the water-doped cobaltate Na\({}_{x}\)CoO\({}_{2}\cdot\)yH\({}_{2}\)O (x = 0.3, y = 1.3) [173], twisted double-layer copper oxides [174], and doped graphene [175] are proposed as chiral superconductors, where \(\mu\)SR and polar Kerr experiments would be of great interest to look for possible broken TRS.

Figure 6: **Signature of TSC from \(\mu\)SR.** (**a**) Schematic diagram of positron emission and the muon spin direction. (**b**) Time evolution of the spin polarization of muons above and below the superconducting transition temperature under zero-field (ZF) conditions indicates TRS breaking for Sr\({}_{2}\)RuO\({}_{4}\). (**c**) ZF muon relaxation rate for the initial muon spin polarization for Sr\({}_{2}\)RuO\({}_{4}\). Figures recreated from Ref. [168]. Time evolution of the muon spin polarization under ZF conditions suggests broken TRS for (**d**) SrPtAs (figure recreated from Ref. [57]) and (**e**) 4Hb-TaS\({}_{2}\) (figure recreated from Ref. [31]).

## 5 Future Prospective

Despite a decade of extensive searching, only a few TSC candidates have been identified (Table 1). Given the great promise of MZM-based topological quantum computation in a TSC, there is an urgent need for fundamentally new approaches to accelerate the search for and identification of new TSC materials. On the one hand, the experimental identification of TSC is challenging due to the elusive experimental fingerprints it presents. For instance, both MBSs and spurious signals such as Andreev bound states can create the zero-bias peak measured with tunneling spectroscopy. One approach to address this issue is to improve the current experiments by enlarging the measurement parameter space. Recently, Ziesen _et al._
proposed a new approach that involves replacing single-shot STS with a sequence of shots at varied system parameters, which has the potential to significantly improve the identification of MZM [176]. This strategy could have general implications, as it involves redesigning the probe without the need for drastic changes to the existing experimental apparatus. Beyond improving existing experimental configurations, completely new experimental configurations may be needed in the future to provide more conclusive evidence and to efficiently and reliably screen TSC candidates. On the other hand, modern computational methods have revolutionized many branches of materials science and have accelerated the search for new materials. However, a computation-aided TSC search is challenging. The popular Density Functional Theory (DFT) calculations, which have been highly successful in searching for materials with desired band structures and topology [177, 178], are single-particle in nature and rely on local or extended single-particle basis sets. As a result, they are not inherently designed to account for paired states in superconductors. Recently, the use of symmetry indicators has shown great promise in identifying band topology [179, 180], and corresponding symmetry indicators for TSC have also been developed [181]. However, the pairing symmetry, the crucial input needed to apply the symmetry indicators to TSC, is not readily accessible by any means, which impedes the use of symmetry indicators to search for TSC candidates computationally. Machine learning (ML) has emerged as a powerful tool for materials discovery, including the search for quantum materials [182]. However, due to the data-driven nature of ML and the absence of confirmed TSC materials [71] as well as of reliable simulation methods, popular ML methods such as classification, clustering, and generative models may not be suitable for searching for TSC, mainly due to the out-of-distribution (OOD) problem [183]. Nevertheless, ML can still be useful in certain scenarios. Recent studies have shown that ML can optimize Majorana wire gate arrays towards improved topological signatures with reduced disorder effects [184]. Despite the OOD problem, we can leverage various OOD detection models through ensemble learning [185] as a confidence predictor for absent, null phases. This predictor can be used to screen the existing material databases or be incorporated into a generative adversarial model, which has been highly successful in the field of image generation, to generate new candidates for materials not present in the training phases [186, 187, 188]. In the longer term, as TSC materials become confirmed, popular ML methods can further accelerate predictions for even more TSC candidates, forming a positive feedback loop toward accelerated discovery. Last but not least, integrating ML with experiments offers an alternative pathway to augment the capability of experimental techniques in throughput and accuracy [189, 190]. For instance, it has been shown that the experimental spatial resolution to identify the proximity effect, one key approach to realizing TSC, can be enhanced by a factor of two through ML-based analysis [191]. By increasing the throughput and resolution of experiments in conjunction with ML techniques, the TSC search can also be expected to accelerate in the near future. MM and NCD acknowledge the support from the U.S. Department of Energy (DOE), Office of Science (SC), Basic Energy Sciences (BES) Award No. DE-SC0020148.
RO and AC acknowledge support from DOE BES Award No. DE-SC0021940. TN and TB acknowledge the National Science Foundation (NSF) Designing Materials to Revolutionize and Engineer our Future (DMREF) Program with Award No. DMR-2118448. TB and ML are partially supported by NSF Convergence Accelerator Award No. 2235945. ML acknowledges the support from the Class of 1947 Career Development Professor Chair and support from R Wachnik. Declarations. The authors declare no competing interests.
2304.00895
A computation of the ninth Dedekind Number
In this article, we present an algorithm to compute the 9th Dedekind Number. The key aspects are the use of matrix multiplication and symmetries in the free distributive lattice, which are detected with techniques from Formal Concept Analysis.
Christian Jäkel
2023-04-03T11:32:44Z
http://arxiv.org/abs/2304.00895v2
# A computation of the ninth Dedekind Number ###### Abstract In this article, we present an algorithm to compute the 9th Dedekind Number. The key aspects are the use of matrix multiplication and symmetries in the free distributive lattice, which are detected with techniques from Formal Concept Analysis. **Keywords:** dedekind numbers, free distributive lattice, formal concept analysis, intervals. ## 1 Introduction The _Dedekind numbers_ are a fast-growing sequence of integers, named after Richard Dedekind. Their determination is known as _Dedekind's Problem_. Let \(\mathrm{d}(n)\) denote the \(n\)-th Dedekind number. A survey of past achievements can be found in [11], which also contains the following table:

Figure 1: Dedekind numbers up to \(n=8\), integer sequence A000372: \(\mathrm{d}(0)=2\), \(\mathrm{d}(1)=3\), \(\mathrm{d}(2)=6\), \(\mathrm{d}(3)=20\), \(\mathrm{d}(4)=168\), \(\mathrm{d}(5)=7581\), \(\mathrm{d}(6)=7828354\), \(\mathrm{d}(7)=2414682040998\), \(\mathrm{d}(8)=56130437228687557907788\).

Several interpretations of \(\mathrm{d}(n)\) exist. For example, the value \(\mathrm{d}(n)\) is equal to the number of antichains of the powerset lattice \(\mathbb{2}^{n}:=(2^{\{0,\ldots,n-1\}},\subseteq)\), or the number of monotone Boolean functions on \(n\) variables, or the number of elements of the free distributive lattice with \(n\) generators. Therefore, the next section explains how the free distributive lattice can be numerically represented. After that, Section 3 treats (anti) isomorphic lattice intervals, a tool that is utilized for the computation. Sections 4 and 5 deal with theoretical, and Section 6 with practical aspects of our algorithm to compute the ninth Dedekind number. ## 2 The Free Distributive Lattice Let \(\mathbb{D}_{n}=(\mathrm{D}(n),\leq)\) denote the free distributive lattice with \(n\) generators. The following theorem can be found in [5]: **Theorem 1**.: _There is a one to one correspondence between elements of \(\mathbb{D}_{n}\) and monotone mappings from \(\mathbb{2}^{k}\) into \(\mathbb{D}_{n-k}\)._ **Corollary 1**.: _There is a one to one correspondence between elements of \(\mathbb{D}_{n+1}\) and pairs \((x,y)\) of elements \(x,y\) from \(\mathbb{D}_{n}\), such that \(x\leq y\)._ _Proof_. Let \(x\) and \(y\) be elements of \(\mathbb{D}_{n}\). We have \(\emptyset\) and \(\{0\}\) as the elements of \(\mathbb{2}\). Mapping \(\emptyset\) to \(x\) allows us to monotonically map \(\{0\}\) to every \(y\) for which \(x\leq y\) holds. \(\sqcap\)\(\sqcup\) Corollary 1 provides a method to generate the elements of \(\mathbb{D}_{n}\) recursively. Even more, this leads to a numerical representation of \(\mathrm{D}(n)\) which facilitates the efficient implementation of algorithms to compute \(\mathrm{d}(n)\). For that, let the two elements of \(\mathbb{D}_{0}\) be represented by the binary numbers \(0\) and \(1\). The underlying order is \(0\leq 1\), applied bitwise. In the next step, we form valid pairs w.r.t. Corollary 1 and concatenate them to \(2\)-bit numbers, which are \(00\), \(01\) and \(11\). This process can be iterated further. **Example 1**.: _This example illustrates the generation of \(\mathrm{D}(3)\), starting from \(\mathrm{D}(0)\):_ \[\mathrm{D}(0)=\{0,1\},\qquad\mathrm{D}(1)=\{00,01,11\}=\{0,1,3\},\] \[\mathrm{D}(2)=\{0000,0001,0011,0101,0111,1111\}=\{0,1,3,5,7,15\},\] \[\mathrm{D}(3)=\{0,1,3,5,7,15,17,19,21,23,31,51,55,63,85,87,95,119,127,255\}.\]
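To make this construction concrete, the following minimal Python sketch (the helper name `generate_D` is ours, introduced only for illustration) implements the pair-concatenation rule of Corollary 1 and reproduces the first Dedekind numbers as the sizes of the generated sets.

```python
def generate_D(n):
    """Elements of the free distributive lattice D(n), represented as
    2**n-bit numbers via the pair-concatenation rule of Corollary 1."""
    D = [0, 1]                   # D(0): the two 1-bit elements
    width = 1                    # current bit width 2**k
    for _ in range(n):
        # keep pairs (x, y) with x <= y bitwise, concatenated as x then y
        D = [(x << width) | y for x in D for y in D if x & ~y == 0]
        width *= 2
    return sorted(D)

print([len(generate_D(n)) for n in range(5)])   # [2, 3, 6, 20, 168]
```

In this representation the bitwise order, join, and meet of \(\mathrm{D}(n)\) are exactly the bitwise comparison, OR, and AND of the numbers, which the later sketches rely on.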
## 3 Lattices, Definitions and Notation Throughout this section, we consider a finite lattice \(\mathbb{L}=(L,\vee,\wedge,\bot,\top)=(L,\leq)\) with _bottom_ \(\bot\) and _top_ \(\top\). An _isomorphism_ between lattices is a bijective map that preserves join and meet operations, and consequently order as well. Dually, an _anti isomorphism_ \(\varphi\) is a bijective map which reverses order, and hence join and meet too: \[x\leq y\Leftrightarrow\varphi(y)\leq\varphi(x),\qquad\varphi(x\lor y)=\varphi(x)\wedge\varphi(y),\qquad\varphi(x\wedge y)=\varphi(x)\lor\varphi(y).\] Definition 1: For \(a,b\in L\) with \(a\leq b\), we define the _interval_\([a,b]:=\{x\mid x\in L,\;a\leq x\leq b\}\). An interval \(I\) is _isomorphic_ to an interval \(J\), denoted by \(I\cong J\), if an isomorphism exists that maps \(I\) onto \(J\). Similarly, _anti isomorphic_ intervals \(I\cong_{\mathrm{a}}J\) are defined through the existence of a respective anti isomorphism. Two intervals \(I\) and \(J\) are _equivalent_, denoted by \(I\equiv J\), if they are isomorphic or anti isomorphic. Furthermore, \(\mathrm{Int}(\mathbb{L})\) denotes the set of all intervals of \(\mathbb{L}\) and \(\#I\) the cardinality of \(I\). Proposition 1: _The equivalence of intervals is an equivalence relation on \(\mathrm{Int}(\mathbb{L})\). Consequently, the set of all intervals \(\mathrm{Int}(\mathbb{L})\) can be factorized by \(\equiv\)._ _Proof._ Reflexivity is induced by \(\cong\) and the fact that every interval is isomorphic to itself. Furthermore, symmetry is given due to the symmetry of \(\cong\) and \(\cong_{\mathrm{a}}\). To deduce transitivity, we consider three cases: \[I\cong J\wedge J\cong K\Rightarrow I\cong K,\quad I\cong_{\mathrm{a}}J\wedge J\cong_{\mathrm{a}}K\Rightarrow I\cong K,\quad I\cong J\wedge J\cong_{\mathrm{a}}K\Rightarrow I\cong_{\mathrm{a}}K.\] \(\sqcap\)\(\sqcup\) For \([I]\in\mathrm{Int}(\mathbb{L})/_{\equiv}\) we denote the equivalence class \([I]\)'s cardinality by \(\#[I]\). Lastly, for every \(x\in I\), we introduce two operators that encode the interval lengths w.r.t. the bottom and top of \(I\): \[\bot_{\mathrm{I}}(x):=\#[\bot_{\mathrm{I}},x]\text{ and }\top_{\mathrm{I}}(x):=\#[x,\top_{\mathrm{I}}],\] where \(\bot_{\mathrm{I}}\) denotes the bottom and \(\top_{\mathrm{I}}\) the top element of \(I\). Since \(I\) can be deduced from the outer context (the domain containing the operators' argument), we will omit the subscript \(\mathrm{I}\) in the rest of this paper. Example 2: _These are the equivalence classes of \(\mathrm{Int}(\mathbb{D}_{2})/_{\equiv}\) (compare Fig. 2)._ \[\{[0,0],[1,1],[3,3],[5,5],[7,7],[15,15]\},\{[0,1],[1,3],[1,5],[3,7],[5,7],[7,15]\},\] \[\{[0,3],[0,5],[3,15],[5,15]\},\{[1,7]\},\{[0,7],[1,15]\},\{[0,15]\}.\] ## 4 Enumeration of \(\mathbb{D}_{n+1}\), \(\mathbb{D}_{n+2}\) and \(\mathbb{D}_{n+3}\) In this section, we apply Theorem 1 to derive the values of \(\mathrm{d}(n+1),\mathrm{d}(n+2),\mathrm{d}(n+3)\), by respectively counting all monotone maps from \(\mathbb{2}^{1},\mathbb{2}^{2}\) and \(\mathbb{2}^{3}\) to \(\mathbb{D}_{n}\). This is a warm-up for the next section and provides an overview of formulas to compute some Dedekind numbers. **Theorem 2**.: _The following formulas compute \(\mathbb{D}_{n+1}\) by means of \(\mathbb{D}_{n}\):_ \[\mathrm{d}(n+1)=\sum_{a\in\mathrm{D}(n)}\top(a)=\sum_{I\in\mathrm{Int}(\mathbb{D}_{n})}1=\#\,\mathrm{Int}(\mathbb{D}_{n})=\sum_{[I]\in\mathrm{Int}(\mathbb{D}_{n})/_{\equiv}}\#[I].\] _Proof._ If \(\mathbb{2}\)'s bottom is mapped to \(a\in\mathrm{D}(n)\), every \(b\in[a,\top]\) is a valid image for the top element. This implies that every \(a,b\in\mathrm{D}(n)\), such that \([a,b]\in\mathrm{Int}(\mathbb{D}_{n})\), is a valid choice too (see Figure 3, left image). Instead of summing up all intervals, we can restrict the summation to equivalent ones. \(\sqcap\)\(\sqcup\) The first equation from Theorem 2 was presented in [1].
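The interval-counting interpretation of Theorem 2 can be checked directly on the numerical representation from Section 2; the following small sketch reuses `generate_D` from above (the helper name `count_intervals` is ours).

```python
# assumes generate_D from the earlier sketch
def count_intervals(n):
    """d(n+1) = # Int(D(n)), the interval count from Theorem 2."""
    D = generate_D(n)
    return sum(1 for x in D for y in D if x & ~y == 0)   # x <= y bitwise

print([count_intervals(n) for n in range(4)])   # [3, 6, 20, 168] = d(1)..d(4)
```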
**Theorem 3**.: _The following formulas enumerate \(\mathbb{D}_{n+2}\) by means of \(\mathbb{D}_{n}\):_ \[\mathrm{d}(n+2)=\sum_{a,b\in\mathrm{D}(n)}\bot(a\wedge b)\cdot\top(a\lor b)=\sum_{I\in\mathrm{Int}(\mathbb{D}_{n})}(\#I)^{2}=\sum_{[I]\in\mathrm{Int}(\mathbb{D}_{n})/_{\equiv}}(\#I)^{2}\cdot\#[I].\] _Proof._ If \(a,b\in\mathrm{D}(n)\) are chosen as in Figure 3's center, every element in \([a\lor b,\top]\) and each in \([\bot,a\wedge b]\) are possibilities for top and bottom respectively. On the other hand, Figure 3's right image shows that bottom and top can be mapped to every interval \([a,b]\), which implies the second-to-last equation. Since isomorphic and anti isomorphic intervals have equal cardinality, the last equality holds. \(\sqcap\)\(\sqcup\) The next theorem shows how to express the computation of \(\mathrm{d}(n+2)\) with matrix calculus. For that, we use the trace operator \(\mathrm{Tr}\) of square matrices, which is defined as the sum of all diagonal elements. **Theorem 4**.: _We define two matrices \(\alpha(a,b):=\bot(a\wedge b)\) and \(\beta(a,b):=\top(a\lor b)\), with \(a,b\in\mathrm{D}(n)\). It holds that:_ \[\mathrm{d}(n+2)=\mathrm{Tr}(\alpha\cdot\beta).\] _Proof._ Let \(\gamma=\alpha\cdot\beta\) be the matrix product. We use the first formula from Theorem 3 and insert the matrix expressions. Note that \(\alpha\) and \(\beta\) are symmetric. \[\mathrm{d}(n+2)=\sum_{a\in\mathrm{D}(n)}\sum_{b\in\mathrm{D}(n)}\alpha(a,b)\cdot\beta(b,a)=\sum_{a\in\mathrm{D}(n)}\gamma(a,a)=\mathrm{Tr}(\gamma).\] \(\sqcup\)
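As a plausibility check of Theorem 4, the following sketch builds \(\alpha\) and \(\beta\) for \(\mathbb{D}_{2}\) with numpy and recovers \(\mathrm{d}(4)=168\) from the trace of their product; it reuses `generate_D` from above and is only an illustration, not the batched GPU implementation described later.

```python
import numpy as np

# assumes generate_D from the earlier sketch
D = generate_D(2)                               # the 6 elements of D(2)
leq = lambda x, y: x & ~y == 0                  # bitwise order of D(n)
bot = lambda x: sum(leq(z, x) for z in D)       # bot(x) = #[bottom, x]
top = lambda x: sum(leq(x, z) for z in D)       # top(x) = #[x, top]

# meet and join are bitwise AND and OR in this representation
alpha = np.array([[bot(a & b) for b in D] for a in D])
beta  = np.array([[top(a | b) for b in D] for a in D])
print(np.trace(alpha @ beta))                   # 168 = d(2+2), cf. Theorem 4
```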
The next theorem's first formula is similar to the one stated in [7]. **Theorem 5**.: _Let \(X\) denote the set of all intervals \(\{[\bot,y]\mid y\in\mathrm{D}(n)\}\). The following formulas enumerate \(\mathbb{D}_{n+3}\) by means of \(\mathbb{D}_{n}\):_ \[\mathrm{d}(n+3) =\sum_{y\in\mathrm{D}(n)}\sum_{a,b,c\in[\bot,y]}\bot(a\wedge b\wedge c)\cdot\top(a\lor b)\cdot\top(a\lor c)\cdot\top(b\lor c)\] \[=\sum_{[I]\in X/_{\cong}}\#[I]\cdot\sum_{a,b,c\in I}\bot(a\wedge b\wedge c)\cdot\top(a\lor b)\cdot\top(a\lor c)\cdot\top(b\lor c)\] \[=\sum_{I\in\mathrm{Int}(\mathbb{D}_{n})}\sum_{a,b,c\in I}\top(a\lor b)\cdot\top(a\lor c)\cdot\top(b\lor c)\] \[=\sum_{[I]\in\mathrm{Int}(\mathbb{D}_{n})/_{\equiv}}\#[I]\cdot\sum_{a,b,c\in I}\top(a\lor b)\cdot\top(a\lor c)\cdot\top(b\lor c)\] \[=\sum_{[I]\in\mathrm{Int}(\mathbb{D}_{n})/_{\equiv}}\#[I]\cdot\sum_{a,b\in I}\Big(\sum_{c\in[a,\top_{\mathrm{I}}]}\bot(b\wedge c)\Big)^{2}.\] _Proof._ Choosing \(y\) and then \(a,b,c\in[\bot,y]\) as in Figure 4's left image, the first formula can be deduced. By factoring out isomorphic elements from \(X\), we derive the second equation. It is important to notice that after choosing \(a,b,c\in[\bot,y]\), there is still one degree of freedom for the bottom element. If we map bottom and top to an interval \([x,y]\)'s boundaries, we remove this degree of freedom and get the third equation. Obviously, isomorphic intervals can then be factored out. Equation number four follows from the fact that anti isomorphic intervals can be factored out as well. To see why this holds, we dualize the third identity (footnote: this implies \(a,b,c\) have to be placed one "level" higher than depicted in Figure 4): \[\mathrm{d}(n+3)=\sum_{I\in\mathrm{Int}(\mathbb{D}_{n})}\sum_{a,b,c\in I}\bot(a\wedge b)\cdot\bot(a\wedge c)\cdot\bot(b\wedge c).\] Since an anti isomorphism reverses order and thereby join and meet operations, for \(I\cong_{\mathrm{a}}J\) it holds that: \[\sum_{a,b,c\in I}\bot(a\wedge b)\cdot\bot(a\wedge c)\cdot\bot(b\wedge c)=\sum_{a,b,c\in J}\top(a\lor b)\cdot\top(a\lor c)\cdot\top(b\lor c).\] To conclude the last equation, we utilize that \(\mathbb{2}^{3}\cong\mathbb{2}^{2}\times\mathbb{2}\). The goal is to apply the vertical symmetry of \(\mathbb{2}^{3}\) (see Figure 4, right image). If an interval \(I\) is chosen, every \(a,b\in I=[x,y]\) can be independently placed as depicted. After computing \(\sum_{c\in[a,y]}\#[x,b\wedge c]\), the result can be squared due to the symmetry induced by \(\mathbb{2}^{2}\). \(\sqcap\)\(\sqcup\)

Figure 3: Figures for the proof of Theorems 2 and 3.

Like in Theorem 4, we express the computation of \(\mathrm{d}(n+3)\) via matrix calculus. **Theorem 6**.: _We define the matrix \(\alpha(a,b):=\top(a\lor b)\), with \(a,b\in I\in\mathrm{Int}(\mathbb{D}_{n})\). It holds that:_ \[\mathrm{d}(n+3)=\sum_{[I]\in\mathrm{Int}(\mathbb{D}_{n})/_{\equiv}}\#[I]\cdot\mathrm{Tr}(\alpha^{3}).\] _Proof._ We use Theorem 5's fourth formula and insert the matrix definition. \[\mathrm{d}(n+3)=\sum_{[I]\in\mathrm{Int}(\mathbb{D}_{n})/_{\equiv}}\#[I]\cdot\sum_{a\in I}\sum_{b\in I}\alpha(a,b)\sum_{c\in I}\alpha(b,c)\alpha(c,a).\] The innermost sum expresses \(\alpha^{2}\), the sum over \(b\) leads to \(\alpha^{3}\) and the sum over \(a\) gives the trace operator. \(\sqcap\)\(\sqcup\) Our implementation of Theorem 6 with GPU support, run on an Nvidia A100 GPU, computes \(\mathrm{d}(8)\) in \(17.56\) seconds. However, this algorithm cannot compute \(\mathrm{d}(9)\), since the involved matrices would have a maximal dimension of \(7828354\times 7828354\). ## 5 Enumeration of \(\mathbb{D}_{n+4}\) In this section, we apply Theorem 1 to derive values of \(\mathrm{d}(n+4)\), by counting all monotone maps from \(\mathbb{2}^{4}\) to \(\mathbb{D}_{n}\).

Figure 4: Figures for the proof of Theorem 5.

**Theorem 7**.: _The following formulas enumerate \(\mathbb{D}_{n+4}\) by means of \(\mathbb{D}_{n}\):_ \[\mathrm{d}(n+4)=\sum_{I\in\mathrm{Int}(\mathbb{D}_{n})}X=\sum_{[I]\in\mathrm{Int}(\mathbb{D}_{n})/_{\equiv}}\#[I]\cdot X,\text{ with:}\]
\[X=\sum_{a,b,c,d\in I}\ \sum_{\begin{subarray}{c}e\in[a\lor b,\top],\ f\in[a\lor c,\top],\ g\in[a\lor d,\top]\\ h\in[b\lor c,\top],\ i\in[b\lor d,\top],\ j\in[c\lor d,\top]\end{subarray}}\top(e\lor f\lor h)\cdot\top(e\lor g\lor i)\cdot\top(f\lor g\lor j)\cdot\top(h\lor i\lor j)\] \[=\sum_{a,b,c,d\in I}\ \sum_{\begin{subarray}{c}e\in[a\lor b\lor c,\top],\ f\in[a\lor b\lor d,\top]\\ g\in[a\lor c\lor d,\top],\ h\in[b\lor c\lor d,\top]\end{subarray}}\#[a\lor b,e\wedge f]\cdot\#[a\lor c,e\wedge g]\cdot\#[a\lor d,f\wedge g]\cdot\#[b\lor c,e\wedge h]\cdot\#[b\lor d,f\wedge h]\cdot\#[c\lor d,g\wedge h]\] \[=\sum_{a,b\in I}\ \sum_{c,d,e,f\in I}\bot(a\wedge c\wedge d)\cdot\top(b\lor c\lor d)\cdot\bot(b\wedge c\wedge e)\cdot\top(a\lor c\lor e)\cdot\bot(a\wedge e\wedge f)\cdot\top(b\lor e\lor f)\cdot\bot(b\wedge d\wedge f)\cdot\top(a\lor d\lor f).\] The derivation of these formulas parallels the proof of Theorem 5: bottom and top of \(\mathbb{2}^{4}\) are mapped to the boundaries of the interval \(I\), and the images of the remaining vertices of \(\mathbb{2}^{4}\) are counted level by level (cf. Figure 5).

Figure 5: Figure for the proof of Theorem 7.

Theorem 7's last formula can be expressed via matrix calculus. **Theorem 8**.: _For \(I\in\operatorname{Int}(\mathbb{D}_{n})\) and \(a,b,c,d\in I\), we define matrices \(\alpha\) and \(\beta\):_ \[\alpha_{ab}(c,d):=\bot(a\wedge c\wedge d)\cdot\top(b\lor c\lor d)\text{ and }\beta_{ab}(c,d):=\bot(b\wedge c\wedge d)\cdot\top(a\lor c\lor d).\] _Let \(\gamma_{ab}\) be the matrix product of \(\alpha_{ab}\) and \(\beta_{ab}\). It holds that:_ \[\mathrm{d}(n+4)=\sum_{[I]\in\operatorname{Int}(\mathbb{D}_{n})/_{\equiv}}\#[I]\cdot\sum_{a,b\in I}\operatorname{Tr}(\gamma_{ab}^{2}).\] _Proof._ Inserting the matrix expressions into the formula from Theorem 7, we get for \(\mathrm{d}(n+4)\): \[\sum_{[I]\in\operatorname{Int}(\mathbb{D}_{n})/_{\equiv}}\#[I]\cdot\sum_{a,b\in I}\sum_{c,d\in I}\sum_{e,f\in I}\alpha(c,d)\beta(c,e)\alpha(e,f)\beta(d,f).\] Since \(\alpha\) and \(\beta\) are symmetric, we can identify two matrix multiplications: \[\gamma(d,e)=\sum_{c\in I}\alpha(d,c)\beta(c,e)\text{ and }\gamma(e,d)=\sum_{f\in I}\alpha(e,f)\beta(f,d).\] All indices are from the same range (the interval \(I\)), which means that the matrices above are equal. Proceeding from this matrix point of view, we compute the diagonal of the product of \(\gamma\) with itself and sum it up: \(\sum_{d\in I}\sum_{e\in I}\gamma(d,e)\gamma(e,d)\). \(\sqcap\)\(\sqcup\) Looking at Figure 5, we see that \(a\) and \(b\) are independent of each other. Together with the formula from Theorem 7, this implies a rotation symmetry between \(a\) and \(b\). It is enough to consider pairs such that \(a\leq b\). Values for \(a\) that are strictly smaller than \(b\) are weighted with a factor of two, and for \(a=b\) the weight is one. This reduces the computations by almost one half. Still, further reductions are possible. Therefore, for every \(I\in\operatorname{Int}(\mathbb{D}_{n})\), let \(\varphi:I\to I\) be an (anti) isomorphism. We introduce a relation on \(I\times I\). That is, \((a,b)\sim(\tilde{a},\tilde{b}):\Longleftrightarrow\) \[\exists\varphi:I\to I,\ \varphi(a)=\tilde{a}\text{ and }\varphi(b)=\tilde{b}.\] Let \(I\times I\mid_{\leq}\) denote all pairs \((a,b)\) such that \(a\leq b\).
Furthermore, the weight operator \(\omega:I\times I\mid_{\leq}\to\{1,2\}\) equals \(2\) for \(a\neq b\) and \(1\) otherwise. **Theorem 9**.: _For \(I\in\operatorname{Int}(\mathbb{D}_{n})\) and \(a,b,c,d\in I\), we define matrices \(\alpha\) and \(\beta\):_ \[\alpha_{ab}(c,d):=\bot(a\wedge c\wedge d)\cdot\top(b\lor c\lor d)\text{ and }\beta_{ab}(c,d):=\bot(b\wedge c\wedge d)\cdot\top(a\lor c\lor d).\] _Let \(\gamma_{ab}\) be the matrix product of \(\alpha_{ab}\) and \(\beta_{ab}\). It holds that:_ \[\mathrm{d}(n+4)=\sum_{[I]\in\operatorname{Int}(\mathbb{D}_{n})/_{\equiv}}\#[I]\cdot\sum_{[(a,b)]\in(I\times I\mid_{\leq})/_{\sim}}\omega(a,b)\cdot\#[(a,b)]\cdot\operatorname{Tr}(\gamma_{ab}^{2}).\] _Proof._ The relation \(\sim\) is an equivalence relation, and all pairs within one equivalence class result in an equal summand for the enumeration formula. We illustrate this with the simpler example \(\sum_{z\in I}\bot(a\wedge z)\cdot\top(a\lor z)\). Under an isomorphism \(\varphi\), with \(\varphi(a)=\tilde{a}\) (also note that top and bottom are preserved), we get: \[\sum_{\varphi(z)\in I}\bot(\varphi(a\wedge z))\cdot\top(\varphi(a\lor z))=\sum_{\varphi(z)\in I}\bot(\tilde{a}\wedge\varphi(z))\cdot\top(\tilde{a}\lor\varphi(z))=\sum_{\tilde{z}\in I}\bot(\tilde{a}\wedge\tilde{z})\cdot\top(\tilde{a}\vee\tilde{z}).\] Since \(\varphi\) is bijective and preserves join, meet and order, both sums (with and without applying \(\varphi\)) are equal. An anti isomorphism would reverse order, but this is offset by the symmetry w.r.t. top and bottom. These arguments can be extended to the actual formula from Theorem 7. \(\sqcap\)\(\sqcup\) Our implementation of Theorem 9 with GPU support, as described in the following section, can compute \(\mathrm{d}(8)\) in about \(3\) seconds. ## 6 Algorithm To Compute The Ninth Dedekind Number We list all steps necessary to compute \(\mathrm{d}(n+4)\) via Theorem 9 and address them individually. 1. Generate \(\mathbb{D}_{n}\) as described at the end of Section 2. 2. Compute the equivalence classes \(\mathrm{Int}(\mathbb{D}_{n})/_{\equiv}\) and save one representative for each class together with the class's cardinality. 3. For every equivalence class representative \(I\) of \(\mathrm{Int}(\mathbb{D}_{n})/_{\equiv}\), compute the equivalence classes of \((I\times I\mid_{\leq})/_{\sim}\). Again, save one representative for each class together with the respective cardinality. 4.
Run the computation according to Theorem 9:

```
Data: Int(D_n)/≡ and, for every [I] ∈ Int(D_n)/≡, the classes (I × I |≤)/∼
Result: d(n+4)
for [I] ∈ Int(D_n)/≡ do
    for [(a,b)] ∈ (I × I |≤)/∼ do
        - generate the matrices α_ab and β_ab;
        - compute the matrix product γ_ab = α_ab · β_ab;
        - compute the trace of γ_ab²;
        - multiply the trace with ω(a,b) and #[(a,b)];
    end for
    - sum up all values from above;
    - multiply the sum with #[I];
end for
```
**Algorithm 1**: Computation of \(\mathrm{d}(n+4)\) according to Theorem 9.

### Step 2

The identification of (anti) isomorphic formal contexts can be translated to an isomorphism problem of colored bipartite graphs. That is because every formal context represents a bipartite graph (see Figure 6). If we want to determine context isomorphisms only, each of the two vertex sets gets a different color label assigned to it, and we compute graph isomorphisms that respect them. Since we want to determine anti isomorphisms too, we have to allow for a complete switch of the vertex sets, but still no mapping from one set into the other. This can be achieved by adding two additional, equally colored vertices and connecting them with every vertex of the bipartite set that they represent.

Figure 6: A formal context interpreted as a bipartite graph.

To practically determine isomorphic graphs, we use the software "nauty" (see [8]), which can compute a canonical string representation of a given colored graph. This translates the graph isomorphism problem into a string comparison, since two graphs are isomorphic if and only if their canonical string representations are equal.

```
Data: Int(D_n)
Result: Int(D_n)/≡
for [x,y] ∈ Int(D_n) do
    - compute a formal context that represents [x,y];
    - transform the context to a colored bipartite graph as depicted in Figure 6;
    - use "nauty" to compute a canonical string;
    - count occurrences of each string;
end for
```
**Algorithm 2**: The steps to calculate \(\mathrm{Int}(\mathbb{D}_{n})/_{\equiv}\).

For \(\mathbb{D}_{2}\) we compute 6, for \(\mathbb{D}_{3}\) 18, for \(\mathbb{D}_{4}\) 134 and for \(\mathbb{D}_{5}\) 9919 equivalence classes. Due to nauty's efficiency, the last computation can be performed within just a couple of seconds on a 2.4 GHz desktop PC. Note that according to Theorem 2 there are 7828354 intervals of \(\mathbb{D}_{5}\). Hence, we reduce them to just 0.127%, which has a huge runtime impact on the computation of \(\mathrm{d}(5+4)\).
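For very small cases, the equivalence classes can also be computed without nauty by brute force, which makes the reduction tangible. The following sketch, reusing `generate_D` from above, tests all bijections between two intervals for an order isomorphism or anti isomorphism and reproduces the 6 classes of \(\mathrm{Int}(\mathbb{D}_{2})\) from Example 2; this is feasible only for tiny intervals and serves merely as a cross-check of the graph-based approach.

```python
from itertools import permutations

# assumes generate_D and the bitwise order from the earlier sketches
D = generate_D(2)
leq = lambda x, y: x & ~y == 0

intervals = [[z for z in D if leq(x, z) and leq(z, y)]
             for x in D for y in D if leq(x, y)]   # all 20 intervals of D(2)

def equivalent(I, J):
    """Search all bijections for an order isomorphism or anti isomorphism."""
    if len(I) != len(J):
        return False
    idx = range(len(I))
    for p in permutations(J):
        if all(leq(I[i], I[j]) == leq(p[i], p[j]) for i in idx for j in idx):
            return True                            # order isomorphic
        if all(leq(I[i], I[j]) == leq(p[j], p[i]) for i in idx for j in idx):
            return True                            # anti isomorphic
    return False

reps = []
for I in intervals:
    if not any(equivalent(I, J) for J in reps):
        reps.append(I)
print(len(reps))   # 6 classes of Int(D(2)), matching Example 2
```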
### Step 3

Considering an interval \(I\in\mathrm{Int}(\mathbb{D}_{n})\), we can generate all pairs \((a,b)\in I\times I\mid_{\leq}\). In order to determine the existence of an (anti) isomorphism \(\varphi:I\to I\), with \(\varphi(a)=\tilde{a}\) and \(\varphi(b)=\tilde{b}\), for given pairs \((a,b)\) and \((\tilde{a},\tilde{b})\), we use a refined version of the approach from Step 2, which is illustrated in Figure 7. Firstly, the bipartite graph that encodes the (anti) isomorphism problem on \(I\) (as in Step 2) has to be generated. Next, we impose further restrictions by adjoining 4 additional vertices that are connected to all join and meet irreducibles w.r.t. \(a\) and \(b\) respectively. The red vertices in Figure 7 connect the vertices related to \(a\) and \(b\) separately. This configuration assures that, if we compute equivalence classes w.r.t. nauty's canonical string representation, an (anti) isomorphism on \(I\) respects the pair \((a,b)\), and can additionally swap \(a\) and \(b\). Example 3 shows the equivalence classes of \(([0,15]\times[0,15])\mid_{\leq}\). **Example 3**.: _Equivalence classes of \(([0,15]\times[0,15])\mid_{\leq}\) w.r.t. \(\sim\)._ \[\{(0,0),(15,15)\},\{(1,1),(7,7)\},\{(3,3),(5,5)\},\{(0,1),(7,15)\},\{(3,5)\},\{(0,3),\] \[(0,5),(3,15),(5,15)\},\{(1,3),(1,5),(3,7),(5,7)\},\{(1,7)\},\{(0,7),(1,15)\},\{(0,15)\}.\] If we perform the computation for every \([I]\in\mathrm{Int}(\mathbb{D}_{\mathrm{n}})/_{\equiv}\), we treat a certain number of pairs \((a,b)\) and reduce them to equivalence classes. We get the reductions \(\mathbb{D}_{2}:\,56\to 33\), \(\mathbb{D}_{3}:\,1127\to 446\), \(\mathbb{D}_{4}:\,274409\to 80741\) and \(\mathbb{D}_{5}:\,8646896880\to 4257682565\). The computation for \(\mathbb{D}_{5}\) took about one hour on an AWS 'c6i.32xlarge' cluster with \(128\) cores. One might think that it is not worth the effort, since the reduction for \(\mathbb{D}_{5}\) is only slightly more than a factor of two. But it can be observed that large intervals usually have a higher reduction rate. That is because they leave "more room" for symmetry than small intervals. For instance, the largest interval of \(\mathbb{D}_{5}\) is \([0,4294967295]\).
For this interval we have a reduction \(57471561\to 140736\). This reduction has a big runtime impact, since the involved matrices have dimension \(7581\times 7581\). ### Step 4 For a given \(I\in\mathrm{Int}(\mathbb{D}_{\mathrm{n}})\), we precompute all values of \(\bot(\cdot)\) and \(\top(\cdot)\) on a CPU host and transfer them to a GPU device. There, the matrices \(\alpha\) and \(\beta\) are generated and multiplied. A batch of matrices is processed in parallel via CUDA's "cublasDgemmStridedBatched" kernel. Since a shift from integer values to double precision occurs, we have to ensure that all values are not bigger than \(2^{53}\) to rule out a loss of precision. Actually, we use the smaller bound \(2^{51}\), since this is beneficial for another estimation. An upper bound for the entries of \(\alpha\) and \(\beta\) is given by \(\max_{\alpha}=\bot(a)\cdot\top(b)\) and \(\max_{\beta}=\bot(b)\cdot\top(a)\). This leads to \(\max_{\gamma}=\max_{\alpha}\cdot\max_{\beta}\cdot\#I\). We compute \(\max_{\gamma}\) for all pairs \((a,b)\) w.r.t. \(\mathbb{D}_{5}\). In \(118084\) cases this estimation exceeds \(2^{51}\). For these, we perform the exact matrix multiplication with \(64\)-bit integer precision using the C++ Eigen library and confirm that the actual maximum of \(\gamma\) is smaller than \(2^{51}\). After multiplying the matrices, a handwritten CUDA kernel computes the trace of \(\gamma^{2}\). For that, \(128\)-bit unsigned integer values are used. Since the largest entry in \(\gamma\) is smaller than \(2^{51}\) and \(7581\leq 2^{13}\), we can estimate a maximal value for the trace of \(\gamma^{2}\) via \((2^{51}\cdot 2^{51}\cdot 2^{13})\cdot 2^{13}=2^{128}\). This ensures that \(128\)-bit unsigned integer precision is enough to compute the trace. Finally, the summation of all trace values and the multiplication with the equivalence classes' cardinality is done via \(1024\)-bit unsigned integers on the host system. ## 7 Conclusion We gave an overview of formulas to compute Dedekind numbers in Sections 4 and 5. Some of these formulas can be interpreted in terms of matrix multiplication, which allows for fast 'number crunching' on a GPU device. In Section 6, the steps of an algorithm to compute the ninth Dedekind number are described. For that, Formal Concept Analysis is used to detect symmetries on lattice intervals. These symmetries could be efficiently computed with the graph isomorphism software nauty.

Figure 7: The graph that encodes the (anti) isomorphism problem w.r.t. \(\sim\).

The only thing left to say is that we ran the algorithm on Nvidia A100 GPUs. 5311 GPU hours and 4257682565 matrix multiplications later, we got the following value for the ninth Dedekind number: \[286386577668298411128469151667598498812366.\]
2307.08741
Characterization of Coherent Errors in Noisy Quantum Devices
Characterization of quantum devices generates insights into their sources of disturbances. State-of-the-art characterization protocols often focus on incoherent noise and eliminate coherent errors when using Pauli or Clifford twirling techniques. This approach biases the structure of the effective noise and adds a circuit and sampling overhead. We motivate the extension of an incoherent local Pauli noise model to coherent errors and present a practical characterization protocol for an arbitrary gate layer. We demonstrate our protocol on a superconducting hardware platform and identify the leading coherent errors. To verify the characterized noise structure, we mitigate its coherent and incoherent components using a gate-level coherent noise mitigation scheme in conjunction with probabilistic error cancellation. The proposed characterization procedure opens up possibilities for device calibration, hardware development, and improvement of error mitigation and correction techniques.
Noah Kaufmann, Ivan Rojkov, Florentin Reiter
2023-07-17T18:00:02Z
http://arxiv.org/abs/2307.08741v1
# Characterization of Coherent Errors in Noisy Quantum Devices ###### Abstract Characterization of quantum devices generates insights into their sources of disturbances. State-of-the-art characterization protocols often focus on incoherent noise and eliminate coherent errors when using Pauli or Clifford twirling techniques. This approach biases the structure of the effective noise and adds a circuit and sampling overhead. We motivate the extension of an incoherent local Pauli noise model to coherent errors and present a practical characterization protocol for an arbitrary gate layer. We demonstrate our protocol on a superconducting hardware platform and identify the leading coherent errors. To verify the characterized noise structure, we mitigate its coherent and incoherent components using a gate-level coherent noise mitigation scheme in conjunction with probabilistic error cancellation. The proposed characterization procedure opens up possibilities for device calibration, hardware development, and improvement of error mitigation and correction techniques. ## I Introduction Current quantum computers are highly affected by noise [1]. The errors originate from interactions of the devices with their environment [2], unwanted dynamics between the qubits [3], or imperfect control signals [4]. Characterization aims to find error sources and quantify their effect on the performance of a computing processor [5]. The information obtained from the characterization of a device facilitates the development of a less error-prone hardware platform [6], the implementation of schemes for correcting specific errors [7], and the application of noise mitigation protocols [8]. Therefore, noise characterization stands at the core of future technological progress. Demarcating noise along the line of purity conservation leads to the distinction between coherent and incoherent noise [9]. _Coherent_ noise occurs due to systematic, reversible perturbations such as imperfect calibrations, imprecise control signals, or couplings to low-frequency external fields [10; 11; 12; 13]. Conversely, _incoherent_ noise, e.g., bit-flip errors, is stochastic and typically arises from insufficient isolation of the system from its environment [12], causing a non-reversible loss of information. Since controllability and isolation of a system are generally opposing goals in hardware design [4; 14], distinguishing the structure of the limiting noise is critical to advance quantum computing platforms. Numerous characterization techniques have been developed to identify and quantify coherent and incoherent noise in quantum processes. Full process tomography [15; 16] and gate-set tomography [17; 18] are common techniques for obtaining a general representation of a physical operation. Although they provide a full representation, they do not discriminate the noise processes from the actual operation. Furthermore, as they involve the reconstruction of density matrices, they are experimentally intractable on devices with more than a few qubits due to the exponential scaling of the number of operations with the size of the system's Hilbert space and the resource-intensive post-processing [19]. While some protocols reach scalable noise characterization using principles of randomized benchmarking [20; 21; 22], they are often limited to a few heuristic metrics that do not provide enough insights for existing noise calibration, mitigation, or correction techniques [23].
Characterization approaches that exceed this limitation while still being scalable exist for incoherent noise [7; 24]. These _noise reconstruction_ protocols [25] introduce a model and estimate its parameters by fitting it to the real process. They aim to quantify the noise of a gate layer, where a restricted noise correlation length is the central structural assumption. However, those models do not portray the actual noise process completely, as the protocols require the application of probabilistic _twirling_ schemes [20; 26; 27; 28; 29; 30] that convert coherent errors into incoherent ones. Currently, no approach similar to the noise reconstruction protocols exists that is sensitive to coherent errors. A useful benchmark to evaluate the effectiveness of a noise characterization protocol is to apply its results to _error mitigation_ techniques, i.e., methods aiming to remove, on average, noise-induced bias from experimental results. Unitary errors can, for instance, be suppressed with dynamical decoupling [31; 32; 33], dynamically corrected operations [34; 35], or hidden inverses [36]. Despite their effectiveness in certain situations [37; 38; 39; 40; 41; 42], coherent error mitigation techniques are often tailored to specific systems or require precise knowledge of the unitary errors. Incoherent noise can be mitigated using probabilistic error cancellation or zero-noise extrapolation techniques [8; 42]. These have been proven to be efficient against stochastic errors using the information from noise reconstruction methods [43; 44; 24; 45], but only twirling of the intrinsic noise processes allows one to apply those techniques to coherent noise mitigation. Consequently, it is essential to develop scalable noise characterization protocols that provide information simultaneously for coherent and incoherent noise mitigation techniques. In this article, we propose a characterization technique that accounts for unitary errors and show how it can be used to estimate the model parameters. To characterize the coherent part of the model, we devise a protocol that is compatible with the noise reconstruction method. The proposed model and protocol are hardware-agnostic and scalable to larger systems, assuming a low noise-correlation length. To demonstrate our method, we apply it to characterize a gate layer on a 7-qubit superconducting circuit device provided by IBM Quantum (IBMq) [46]. This allows us to determine the most significant coherent single- and two-qubit errors and their behavior over time. Finally, we leverage the information gained from the protocol and mitigate the coherent errors through a gate-level approach. We thereby show the validity of our model and the accuracy of our protocol. Our work paves the way for advancements in hardware platform development, quantum error correction and mitigation schemes, and the design of noise-aware algorithms. ## II Model An ideal noiseless operation on a quantum computer can be expressed by a _unitary channel_\(\mathcal{U}_{I}(\rho)=U_{I}\rho U_{I}^{\dagger}\), with \(U_{I}\) being some unitary matrix.
A simple model for the noisy implementation of this operation on real hardware \(\mathcal{E}_{\mathcal{P}}\) consists of the concatenation of a noise modeling _Pauli channel_\(\mathcal{P}\) and the ideal unitary, \[\mathcal{E}_{\mathcal{P}}\left(\rho\right)=\mathcal{P}(\mathcal{U}_{I}(\rho))=\sum_{i}p(i)P_{i}U_{I}\rho U_{I}^{\dagger}P_{i}, \tag{1}\] where the Pauli error rates \(p(i)\) form a probability distribution over the \(n\)-qubit Pauli operators \(P_{i}\in\mathbb{P}^{\otimes n}=\{I,X,Y,Z\}^{\otimes n}\). Compared to general completely positive trace-preserving maps, Pauli channels offer a concise description of noisy operations by restricting the model to only Pauli operators. Despite those substantial expressibility restrictions of the model, a wide range of incoherent noise, including dephasing and depolarization, can be described by Pauli channels. In this framework, an \(n\)-qubit noisy operation is modeled by at most \(4^{n}\) Pauli error rates and operators. To reduce this scaling from exponential to polynomial, we follow the widely used and experimentally motivated approach of only regarding _locally_ correlated noise, i.e., considering \(n\)-qubit Pauli operators \(P_{i}\) that act nontrivially on \(l\) qubits only [24, 7]. These \(l\) qubits are selected with respect to the qubits' connectivity in the quantum computing architecture. This work focuses on a correlation length \(l=2\). Pauli noise models are appealing for different physical and practical reasons. Not only is the interaction of a qubit system with its environment often dominated by depolarization and dephasing [47, 48], but quantum error correction also leads to noise better approximated by Pauli channels [49, 50, 51]. Additionally, existing mitigation schemes applicable to Pauli noise [24, 8] are generally implementable with low-depth circuits on near-term devices. Lastly, Pauli channels are interesting for theoretical work, as they can be efficiently simulated classically [52]. To illustrate the limitations of the Pauli noise model and motivate its extension, we study the noise introduced by identity circuits on IBMq processors. We perform an _echo-experiment_ (similar to Ref. [54]) where we prepare a two-qubit system in the Pauli basis state \(\left|0+\right\rangle\), apply an even number of \(\mathit{CX}\) gates, and measure the evolved state in the same Pauli basis as it was prepared. This measurement allows us to estimate the expectation value of all two-qubit Pauli operators of which the prepared state is an eigenstate, specifically \(P_{IX},\,P_{ZI},\,\) and \(P_{ZX}\). In Fig. 1, we plot these expectation values against the number of noisy identities \(m\) obtained from the ibm_lagos processor. The results for different input states and superconducting hardware platforms available on IBMq are similar. In the noiseless case, all of those expectation values are independent of the number of repetitions of the identity and equal to 1. In the exclusive presence of Pauli noise, a purely exponential decay of the expectation values, characteristic of incoherent error processes, is expected [27, 20, 21, 22, 55]. Consequently, the Pauli noise model of Eq. (1) cannot explain the oscillations observed in the experiment of Fig. 1 (blue dashed curves). Figure 1: Noise examination on an identity circuit. The data is obtained on ibm_lagos and plots the expectation value of the three two-qubit Pauli operators \(P_{IX},P_{ZI},P_{ZX}\) against the number of noisy identities \(m\) implemented by two \(\mathit{CX}\) gates.
In the experiment, we prepare and measure the system in the state \(\left|0+\right\rangle\) using one Hadamard gate \(H\). The blue dashed curves correspond to the unaltered noise. The dotted red curves are obtained using a Pauli twirling scheme that projects the noise onto a Pauli channel (cf. Eq. (1)). Each circuit is measured 4096 times, and the twirled results are averaged over 32 different twirling gates \(T\). A readout mitigation scheme [53] is applied in post-processing based on the reported readout error rates of IBM Quantum.
However, those models are limited to specific types of Pauli and coherent noise, and the works do not focus on the characterization of a noisy process but on analyzing the properties of a process with the respective noise model. ## III Characterization We now present our novel protocol for characterizing all the parameters of the coherently rotated Pauli noise model introduced in Eq. (2). The protocol relies on the assumption that the amplitude of both coherent and incoherent noise is small, allowing us to utilize the small angle approximation in Eq. (3). We express the coherent noise characterization protocol in the _Pauli transfer matrix_ (PTM) formalism [60]. As the Pauli matrices build a complete basis of the operator space, a \(n\)-qubit state \(\rho\) can be vectorized \(\left|\rho\right\rangle\in\mathbb{R}^{4^{n}}\) in the Pauli basis with the components being their expectation values \(\left|\rho\right\rangle_{i}=\frac{1}{2^{n}}\mathrm{Tr}\left[P_{i}\rho\right]\). These \(4^{n}\) expectation values can be estimated from \(3^{n}\) measurements of \(\rho\) in different Pauli bases [61; 62; 7; 63]. For a single qubit, this state representation is known as the Bloch vector [64]. In this formalism, a channel \(\mathcal{E}\) is represented by a matrix \(T\in\mathbb{R}^{4^{n}}\times\mathbb{R}^{4^{n}}\) that acts on the vectorized states according to \(\left|\mathcal{E}(\rho)\right\rangle=T\left|\rho\right\rangle\). The matrix \(T\) is called the PTM, with elements \(T_{ij}=\mathrm{Tr}\left[P_{i}\,\mathcal{E}(P_{j})\right]\). Before addressing the characterization protocol applicable to a quantum process modeled by the coherently rotated Pauli noise model of Eq. (2), we present the case of estimating \(U_{\theta}\) for a 2-qubit process described by \(\mathcal{E}_{\theta}(\rho)=U_{\theta}\rho U_{\theta}^{\dagger}\), meaning that we disregard both the Pauli noise \(\mathcal{N}\) and the ideal unitary \(\mathcal{U}_{I}\) channels. The Pauli transfer matrix \(T_{\theta}\) corresponding to \(\mathcal{E}_{\theta}\) is a square matrix of order 16, parameterized by 15 single- and two-qubit rotation angles. While the first row and column are trivial and independent of \(\theta\), in the small angle approximation (cf. Eq. (3)), the other elements of this matrix are given by \[\left(T_{\theta}\right)_{ij}\approx\delta_{ij}-\frac{1}{2}\theta_{k}\mathrm{ Tr}\left[P_{i}P_{k}P_{j}\right]=\delta_{ij}+\theta_{k}\,C_{kij}, \tag{4}\] where we use Einstein's notation for the summation over \(k\). Applying the channel to an arbitrary state \(\left|\rho\right\rangle\) leads to the equation \(\left|\mathcal{E}_{\theta}(\rho)\right\rangle_{j}=\left|\rho\right\rangle_{j} +\theta_{k}\,C_{kij}\left|\rho\right\rangle_{j}\). Defining the matrix \(\left(B_{\left|\rho\right\rangle}\right)_{ik}=C_{kij}\left|\rho\right\rangle_{j}\), the estimation problem boils down to a linear inverse problem of form \(B_{\left|\rho\right\rangle}\,\theta=\left|\mathcal{E}_{\theta}(\rho)\right\rangle -\left|\rho\right\rangle\) where the two-dimensional matrix \(B_{\left|\rho\right\rangle}\) and the vectors \(\left|\rho\right\rangle\) and \(\left|\mathcal{E}_{\theta}(\rho)\right\rangle\) can be obtained using state tomography. \(B_{\left|\rho\right\rangle}\) is singular and cannot be inverted. Therefore, experimentally estimating \(\left|\rho\right\rangle\) and Figure 2: Bloch sphere transformations. 
Image of the Bloch sphere on the left under the action of a Pauli channel \(\mathcal{P}\) (middle), and an additional unitary transformation \(\mathcal{U}_{\theta}\) (right). \(|\mathcal{E}_{\theta}(\rho))\) for a single state \(\rho\) does not allow to estimate \(\theta\). However, by estimating the vector representation of \(k\) different states before and after the evolution by \(\mathcal{E}_{\theta}\), we can create an over-determined system of \(14N\) independent equations for the 15 unknown parameters. We concatenate the matrices \(B_{|\rho\rangle}\) for \(N\) different states into a single matrix \(\tilde{B}\) of size \(15N\times 15\) and combine \(N\) vectors \(|\mathcal{E}_{\theta}(\rho))-|\rho\rangle\) to one vector \(y\) of length \(15N\). Finally, we estimate \(\theta\) using the least square method, \[\hat{\theta}=\left(\tilde{B}^{T}\tilde{B}\right)^{-1}\tilde{B}^{T}y\,. \tag{5}\] The strategy of the characterization protocol for the entire \(n\)-qubit system is to apply this estimation procedure to every involved two-qubit subsystem according to their connectivity in the hardware. Three points must be considered when moving from the discussed simple case to the model given in Eq. (2): first, the action of the ideal unitary \(\mathcal{U}_{I}\); second, the presence of the Pauli noise \(\mathcal{N}\); and lastly, the impact of two-qubit noise on qubits that are outside of the considered subsystem. The effect of the ideal unitary channel \(\mathcal{U}_{I}\) can be incorporated in post-processing after estimating the state \(|\rho\rangle\). For an arbitrary \(U_{I}\), this procedure requires a full-state tomography of the system [52]. To preserve scalability, we restrict to operations, separable into multiple single- and two-qubit unitaries \(U_{I}=U_{1}\otimes U_{2}\otimes...\otimes U_{l}\). This requirement is fulfilled, e.g., in the characterization of so-called _gate layers_, i.e., \(n\)-qubit operations consisting of single- and two-qubit gates that can be executed simultaneously in experiments. We address the presence of the Pauli noise channel \(\mathcal{N}\) by assuming that both the incoherent and coherent noises are relatively small. In the scenario that \(\max_{ijk}|p(i)-p(j)|\left|\theta_{k}\right|\ll 1\) where \(p(i)\) and \(\theta_{k}\) are the Pauli, respective coherent-error rates of Eq. (2), we can consider the Pauli channel to approximately commute with the coherent rotations. Then we can show that the presented estimation procedure is not disturbed by the presence of an additional Pauli channel (cf. App. A). Intuitively this is visible when illustrating the action of the two types of noise for a single qubit on a Bloch sphere. While coherent noise corresponds to a rotation, Pauli noise leads to a shrinkage of the sphere (cf. Fig. 2) which does not alter the orientation of the sphere poles and does, therefore, not interfere with the estimation of unitary rotations. Lastly, when focusing exclusively on two-qubit subsystems, one has to address the correlated multi-qubit noise between a qubit in the regarded subsystem and the external ones. To handle this problem in the estimation of \(|\rho\rangle\) and \(|\mathcal{E}_{\theta}(\rho))\), we prepare the surrounding qubits in different basis states for every single experiment run. This technique isolates the two desired qubits and randomizes the effect of the noise introduced outside the regarded subsystem, similar to twirling schemes. 
We now use the presented protocol to characterize the coherent noise induced by an arbitrary gate layer executed on the IBMq Falcon Processor ibm_lagos. The result is summarized in Fig. 3, where we show the absolute values of the angles of the characterized single- and two-qubit coherent errors for an arbitrary gate layer. The layer is composed of two \(C\!X\) gates and three single-qubit gates, namely \(H\), \(\sqrt{X}\), and \(X\). Considering the connectivity of the 7 qubits in the hardware and our assumption that the noise is only locally correlated, we prepare 216 initial states, all being product states of \(|0\rangle\), \(|1\rangle\), \(|+\rangle=(|0\rangle+|1\rangle)/\sqrt{2}\), \(|-\rangle=(|0\rangle-|1\rangle)/\sqrt{2}\), \(|i\rangle=(|0\rangle+i\,|1\rangle)/\sqrt{2}\) and \(|-i\rangle=(|0\rangle-i\,|1\rangle)/\sqrt{2}\). Each two-qubit subsystem is prepared in 36 states, and for each of those states, the environment is randomized with 6 additional states. The dimension of the \(y\)-vector in the estimation problem given in Eq. (5) is thus equal to 36 for each subsystem. To further increase the value of \(N\), we execute 0 to 3 repetitions of the layer, amounting to 4 circuits per preparation. Finally, bypassing the effect of the unitary gate layer \(\mathcal{U}_{I}\) in the post-processing necessitates performing state tomography on up to three-qubit subsystems, leading to \(3^{3}\) Pauli measurements per circuit. Overall, we run \(216\cdot 4\cdot 27=23^{\prime}328\) different circuits with 128 shots each, leading to a total running time in the system of about 30 minutes. Figure 3: Characterization of the coherent noise introduced by executing a gate layer on the 7-qubit IBMq Processor ibm_lagos[46]. The gate layer is drawn on top of the connectivity tree (qubit connectivity in the quantum computing architecture), where each big circles marks a different qubit. The gate layer consists of two \(C\!X\) gates between qubits 1 and 2, and qubits 6 and 7, a Hadamard (\(H\)) gate on qubit 3, a \(\sqrt{X}\) (\(S\!X\)) gate on qubit 4 and a \(X\) gate on qubit 5. The three pieces inside the qubit representing circles represent the single-qubit coherent errors, and the nine small circles between two qubits display the two-qubit coherent cross-talk errors. All errors are measured in radians and the absolute value is plotted. The used IBMq platforms report readout errors on the order of 1%, caused by relaxation, imperfect coupling to the readout resonator, and signal amplification errors [65]. To counter the influence of these errors, we apply a post-processing readout mitigation scheme in all executed experiments [53]. From Fig. 3, we conclude that the most significant two-qubit coherent errors occur between qubits 1 and 2 and qubits 6 and 7. Those are associated with the Pauli \(P_{YZ}\) error. This result is expected, as those are the pairs of qubits on which the \(C\!X\) gates are acting. This Pauli error has already been noted in the effective analysis of the echoed cross-resonance gate used to realize the \(C\!X\) gate [66]. The predominant coherent errors are the single-qubit ones. We note no strong bias towards a specific single-qubit rotation error, and those errors could be related to the hardware implementation of each single-qubit gate. We conduct the characterization of the same device and gate layer as in Fig. 3 three times, with time differences between the runs being 6 hours and 19 days. The results available in App. 
The results available in App. B show that the leading coherent errors in the system do not change over time. The most significant single-qubit coherent errors change by no more than 10%, and the two-qubit coherent rotation errors shift by at most 0.018 radians. This result suggests that the platform suffers from systematic coherent errors that are not influenced by the daily re-calibration of the system.

## IV Mitigation

In principle, it is possible that the characterization results in Fig. 3 originate from overfitting the non-ideal process to the introduced model. Indeed, it is unknown whether the coherently rotated Pauli noise model of Eq. (2) is expressive enough to resemble the main characteristics of the probed IBMq process. To verify that the model captures the dominant noise contributions of a process on the probed device, we examine whether the output of the presented protocol facilitates significant noise mitigation. Returning to the unmitigated circuits in Fig. 1, our goal is to suppress the observed oscillatory behavior in the echo-experiment by correcting the characterized coherent errors at the circuit level. In the case of a successful correction, according to the introduced model, one would expect a purely exponential decay of the expectation values with the number of process repetitions due to the Pauli noise. From this exponential decay, we can then characterize the Pauli error rates [7; 24] and mitigate these noise components using the probabilistic error cancellation method (PEC) [8]. If the final measurement statistics after coherent error mitigation and PEC closely follow the ideal statistics, this demonstrates that the proposed model is physical and that the characterization protocol provides valuable insights. Any unitary channel is _reversible_. Therefore, in our model, the effect of the coherent noise can be canceled by single- and two-qubit rotations with rotation angles opposite to the characterized ones. Single-qubit rotations are readily available in most of the leading quantum computing architectures. To invert the two-qubit rotations, an entangling gate is necessary. We use the \(C\!X\) gate, as it is a native gate of the chosen platform. By interleaving 15 single-qubit rotations into a structure of four \(C\!X\) gates and four fixed single-qubit rotations, as shown in Fig. 4, a circuit element with 15 parameters is created. These parameters, which we denote as \(\vec{\theta}\), are directly obtained from the characterization procedure presented in the previous section. The element corresponds to the identity if all parameters are set to zero. We run the echo-experiments for three different situations: unmitigated; coherent-error mitigated; and coherent-error mitigated combined with PEC. The results are presented in Fig. 5. Fig. 5(a) and (b) were obtained with 2048 shots per circuit, whereas for Fig. 5(c), an ensemble of 280 altered circuits with 512 shots each was executed per experiment. For the unmitigated case in Fig. 5(a), we see two main characteristics: firstly, the expectation values of the 12 Pauli operators that should ideally be 0 spread out; secondly, the expectation values that ideally equal 1, meaning that the states are eigenstates of the corresponding Pauli, decay. Note that the expectation value of the identity in the plot is not based on actual experiments and is always 1 due to the trace constraint on density matrices. Applying the presented coherent noise characterization protocol to the data of Fig. 5(a) leads to a first estimate of the rotation angles of the coherent errors, \(\hat{\theta}_{0}\).
To further improve the characterization, we run the characterization scheme with \(\vec{\theta}=\hat{\theta}_{0}\) and obtain \(\hat{\theta}_{1}\). This iterative approach is expected to improve the characterization because it compensates for violations of the assumed commutation of single- and two-qubit rotation errors. Fig. 5(b) shows the result of the final mitigation iteration with the estimate \(\vec{\theta}=\hat{\theta}_{1}\), using the gate-based mitigation circuit from Fig. 4. The observed changes in the angles between the first and the last iteration were about an order of magnitude smaller than the biggest coherent errors, thus supporting the validity of the approximation made.

Figure 4: Coherent noise mitigating circuit element. The circuit element comprises 4 \(C\!X\) gates, 4 fixed \(z\)-rotations by \(\pi/2\) and \(-\pi/2\), respectively, and 15 parameterized single-qubit rotations. The first 9 rotations (blue) account for the different two-qubit coherent errors, and the final 6 rotations (red) mitigate single-qubit errors. Without the first \(C\!X\) gate, the circuit element represents a corrected \(C\!X\) gate. We represent the element by the symbol on the right-hand side, where \(\vec{\theta}\) corresponds to the angles of the 15 parameterized rotations.

In Fig. 5(b), the mitigation effect is clearly visible, as the spreading behavior of the expectation values that are ideally 0 is largely suppressed. The expectation values of the Pauli operators, of which the prepared state is a +1 eigenstate, are higher in the mitigated case compared to the unmitigated estimation. However, the coherent noise mitigation does not retrieve their ideal-case expectation value of 1, since, according to our model, the statistics are still affected by Pauli noise. To show that the examined noise follows the coherently rotated Pauli noise model in Eq. (2), we estimate the Pauli error rates from the mitigated curves of Fig. 5(b) and apply PEC to mitigate the Pauli noise. Note that, unlike the coherent noise correction presented above, PEC is a mitigation technique that does not deterministically correct individual circuits (as Pauli channels are not reversible) but allows noise-free estimation of expectation values from circuit ensembles. It is important to emphasize that, compared to previous works demonstrating PEC [24, 43], we do not use Pauli twirling and thus avoid the associated circuit and sampling overhead. The Pauli eigenvalues of the Pauli noise channel are estimated by fitting exponential curves to the expectation values that are supposed to be one in the ideal case. The Pauli error rates can then be found by a Walsh-Hadamard transform [7]. We follow the PEC protocol of van den Berg _et al._ [24] to mitigate the incoherent noise. It consists of applying the non-physical inverse of a Pauli channel by running an ensemble of unitary processes composing this inverse. Those results are weighted according to the Pauli error rates and combined in post-processing. The results of the echo-experiments with coherent error mitigation and PEC are displayed in Fig. 5(c). Compared to Fig. 5(b), the data are noisier, which we suspect is due to the PEC sampling overhead. The obtained measurement statistics clearly resemble the main features of the ideal measurement statistics.
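The Walsh-Hadamard step mentioned above admits a compact implementation. A minimal NumPy sketch (the fitted eigenvalues below are placeholder numbers; the sign matrix follows from the Pauli (anti)commutation relations and factorizes over qubits):

```python
import numpy as np
from functools import reduce

# Single-qubit sign matrix in the Pauli order (I, X, Y, Z):
# M1[a, b] = +1 if Paulis a and b commute, -1 if they anticommute.
M1 = np.array([[1,  1,  1,  1],
               [1,  1, -1, -1],
               [1, -1,  1, -1],
               [1, -1, -1,  1]])

def pauli_rates_from_eigenvalues(lam, n_qubits):
    """Invert lambda = M p via the Walsh-Hadamard-type transform p = M lam / 4^n."""
    M = reduce(np.kron, [M1] * n_qubits)  # sign matrix factorizes over qubits
    return M @ lam / 4**n_qubits

# Example: single-qubit eigenvalues fitted from the exponential decays
lam = np.array([1.0, 0.98, 0.97, 0.99])      # placeholder values, lambda_I = 1
print(pauli_rates_from_eigenvalues(lam, 1))  # Pauli error rates p_I, ..., p_Z
```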
Therefore, we can conclude that the introduced noise model explains the leading noise terms in the examined system and that the presented characterization protocol generates valid physical insights into the noise. Similar characterization and mitigation results were obtained on ibm_jakarta. Based on the data from Fig. 5(a) and (b), we can reconstruct the density matrices for the 9 prepared initial states at every repetition of the noisy identity. We then estimate their fidelity with respect to their ideal initial state, using the standard state fidelity formula \(F(\sigma,\rho)=\mathrm{Tr}\left[\sqrt{\sqrt{\rho}\,\sigma\sqrt{\rho}}\right]^{2}\). We observe in Fig. 6 that the mitigation decreases the average infidelity by about a factor of two.

Figure 5: Noise mitigation according to the coherently rotated Pauli noise model in Eq. (2) on ibm_lagos. The plot shows the estimated expectation values of all 9 two-qubit Pauli operators for the initial state \(\ket{++}\) after \(m\) repetitions of a noisy identity (a) without mitigation, (b) with coherent error mitigation based on the data of (a), and (c) with probabilistic error cancellation based on the data of (b) and coherent error mitigation. \(\ket{++}\) is an eigenstate of the operators \(P_{IX}\), \(P_{XI}\), and \(P_{XX}\) (blue), as well as of the identity \(P_{II}\) (purple); in the noiseless case, their expectation values are 1. The expectation values of the other 12 Pauli operators (red) are 0. The circuit diagrams contain the preparation of each qubit in one of the states \(\ket{0},\ket{+},\ket{i}\), by applying Hadamard (\(H\)) and/or phase (\(S\)) gates. The evolved states are measured in 9 bases to estimate all 16 Pauli expectation values. In (c), specific Pauli operators (\(P\)) are inserted between the circuit elements according to the PEC method.

## V Generalization of the protocol

We motivated our coherently rotated Pauli noise model and devised the protocol for its characterization in a manner agnostic toward the choice of the qubit-based quantum computing architecture. The same procedure can be employed to characterize coherent errors in superconducting devices, as we demonstrated in this work, but also in trapped ions, neutral atoms, or any other platform, as it only requires the preparation of elementary Pauli states without the need for entanglement. Furthermore, we can readily extend the proposed model to qudit-based quantum processors, which are also susceptible to coherent errors [67; 68; 69; 70; 71]. In these systems, the Pauli \(X\) and \(Z\) operators are generalized as \(X_{d}\ket{s}=\ket{s+1\text{ mod }d}\) and \(Z_{d}\ket{s}=\omega^{s}\ket{s}\), respectively. Here, \(\ket{s}\) represents a qudit computational state with \(s\in\{0,1,\dots,d-1\}\) and \(\omega=\exp(i2\pi/d)\). Consequently, the Pauli noise model for an \(n\)-qudit system can be expressed similarly to Eq. (1), but with \(P_{i}\in\{(X_{d})^{p}(Z_{d})^{q}\mid q,p\in\{0,1,\dots,d-1\}\}^{\otimes n}\) representing the generalized Pauli operators. Since the generalized Pauli operators exhibit similar properties to qubit Pauli gates (such as being unitary 1-designs and being normalized by the qudit Clifford group), the Pauli transfer matrix formalism remains valid in the qudit framework. It thus enables the same twirling and characterization techniques employed in the qubit case to be utilized [67; 68; 69; 70; 71].
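A minimal NumPy sketch of the generalized Pauli operators just defined, including a check of the qudit Weyl commutation relation \(Z_{d}X_{d}=\omega X_{d}Z_{d}\) (the dimension \(d=3\) is an arbitrary example value):

```python
import numpy as np

def qudit_paulis(d):
    """Generalized Pauli X_d (cyclic shift) and Z_d (phase) in dimension d."""
    omega = np.exp(2j * np.pi / d)
    X = np.zeros((d, d), dtype=complex)
    for s in range(d):
        X[(s + 1) % d, s] = 1.0         # X_d |s> = |s+1 mod d>
    Z = np.diag(omega ** np.arange(d))  # Z_d |s> = omega^s |s>
    return X, Z

d = 3
X, Z = qudit_paulis(d)
omega = np.exp(2j * np.pi / d)
assert np.allclose(Z @ X, omega * X @ Z)  # Weyl relation Z X = omega X Z

# The d*d products X^p Z^q form the generalized Pauli basis of one qudit.
basis = [np.linalg.matrix_power(X, p) @ np.linalg.matrix_power(Z, q)
         for p in range(d) for q in range(d)]
print(len(basis))  # 9 operators for d = 3
```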
Moreover, this implies that the unitary errors can be effectively modeled by the coherently rotated Pauli noise model described in Eq. (2) and that the characterization protocol presented in Sec. III can be utilized. Similarly, our characterization protocol can be expanded to encompass the logical level, as logical qubits can be susceptible to unitary noise often caused by coherent errors at the physical level [72; 73]. To extend our protocol, it becomes necessary to prepare and measure logical Pauli basis states. These operations are inherent to various error correcting codes, thereby enabling the generalization of our protocol.

## VI Conclusion and outlook

In this paper, we introduced a characterization technique to estimate coherent errors in noisy quantum devices. An existing Pauli noise model is extended to incorporate coherent errors by adding unitary rotations to the model. We devise a procedure to learn the unknown parameters of this coherently rotated Pauli noise model. The scalability of our approach is ensured by assuming a limited correlation length of the coherent noise. As the characterization of a physical quantum processor showed, the protocol is feasible to execute on an actual device. For the tested device, the characterization revealed the leading coherent errors introduced by the probed gate layer. Notably, these characterized coherent errors remained stable over multiple re-calibration cycles, suggesting a systematic nature. To verify the validity of our noise model, we mitigated the coherent errors and the characterized Pauli noise. The coherent error mitigation was achieved with a gate-level correction scheme for two-qubit coherent errors. To counter the Pauli noise, we applied probabilistic error cancellation. The combination of the two techniques resulted in almost ideal measurement statistics. This finding shows that the model is sufficiently expressive to explain the main noise contributions of a specific process on an actual processor. Moreover, the successful mitigation of the characterized errors confirms the applicability of the developed protocol. Our protocol is hardware agnostic and thus expected to be executable on a variety of hardware platforms. Our work offers several promising avenues for future research. The method can enhance mitigation schemes that rely on detailed knowledge of the noise structure. One such approach is optimizing the resilience to coherent noise during circuit transpilation, based on the noise insights for each gate or gate layer [36; 40; 41; 74; 75]. A clear example of this is the hidden inverses method [36; 41], which mitigates the impact of coherent errors by compiling circuits in a way that induces destructive interference among the coherent errors of the gates. However, implementing hidden inverses requires knowledge of the physical nature of these errors, which in Ref. [41] was obtained through gate-set tomography. Integrating our approach into the calibration process can provide valuable information to the compiler, enabling it to leverage the hidden inverses technique while actively inverting the non-corrected coherent errors. Similarly, our technique opens the door to studying and developing noise-aware versions of advanced mitigation techniques such as dynamical decoupling or composite pulses. Moreover, information on coherent errors facilitates Pauli conjugation [76], an efficient noise twirling method that limits the sampling overhead compared to common Pauli twirling and therefore diminishes the reduction of the threshold of error-correcting codes.
Finally, identifying the nature and strength of the errors is also crucial for the fault-tolerant operation of a quantum processor unit at the logical level [51; 72; 77; 78].

Figure 6: Influence of the coherent error mitigation on the fidelity between the ideal state and the states evolved by the noisy identity. From the data of Fig. 8, the density matrix of the evolved state is estimated for each number of repetitions of the noisy identity and each of the 9 prepared states. The plot shows the fidelity between these state estimates and the ideally prepared state. The thin lines represent the evolution of the fidelity with the number of repetitions of the noisy identity for each state, and the thick lines are the average of the 9 thin lines.

While quantum error correction protocols transform local unitary errors into correctable probabilistic ones [49, 50], coherent noise can lead to significantly larger worst-case logical errors than incoherent noise alone [9, 72, 73, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90]. The knowledge gained from coherent errors, which effectively represents soft information, can then be leveraged to enhance the performance of quantum error-correcting codes at both the decoding and encoding stages. Indeed, while optimal decoders are notoriously hard to find [81], recent studies have demonstrated the benefit of soft information in improving existing decoders and overall quantum error correction performance [82, 83, 84]. At the encoding level, it is possible to design codes to be naturally robust against specific coherent noises, as demonstrated in Refs. [85, 86]. Thus, our protocol has the potential to be a diagnostic tool that could help to decide on the next higher-level encoding. In the long run, the presented characterization protocol paves the way for the development of improved hardware platforms by identifying the limiting imperfections. Yet, already in the near term, it can serve as a tool for various applications that rely on knowledge of the existing noise structure, such as quantum error correction or hardware-aware algorithm design.

###### Acknowledgements.

The authors thank Jonathan Home, Elias Zapusek, Roland Matt, Jeremy Flannery, and Luca Huber for helpful comments and discussions throughout the project. This work was supported by the Swiss National Science Foundation (SNSF) through the National Centre of Competence in Research - Quantum Science and Technology (NCCR QSIT) grant 51NF40-160591. I.R. and F.R. acknowledge financial support by the Swiss National Science Foundation (Ambizione grant no. PZ00P2_186040). We acknowledge the use of IBM Quantum services for this work. The views expressed are those of the authors, and do not reflect the official policy or position of IBM or the IBM Quantum team.

## Appendix A Addressing Pauli noise neglect during coherent noise characterization

This section complements the intuitive reasoning of Sec. III with a mathematically more rigorous explanation of why we can neglect the influence of the Pauli noise channel when characterizing the present coherent noise. The action of a noisy channel modeled by \(\mathcal{E}_{\mathcal{P},\theta}(\rho)\) as defined in Eq. (2) can be represented in terms of the vectorized form of the density matrix \(\rho\) and the PTM of the channel.
It reads \(\left|\mathcal{E}_{\mathcal{P},\theta}(\rho)\right\rangle=T_{\mathcal{E}_{\mathcal{P},\theta}}\left|\rho\right\rangle=T_{U_{\theta}}T_{\mathcal{P}}T_{U_{I}}\left|\rho\right\rangle\), where \(T_{U_{\theta}}\), \(T_{\mathcal{P}}\) and \(T_{U_{I}}\) are the transfer matrices of the coherent, Pauli, and ideal unitary channels, respectively. As stated in the main text, we discuss the case of \(T_{U_{I}}\) being the identity. We moreover assume that \(\left|\mathcal{E}_{\mathcal{P},\theta}(\rho)\right\rangle\) and \(\left|\rho\right\rangle\) can be experimentally estimated. The estimation of the 15 parameters of \(T_{U_{\theta}}\) can be written as a minimization problem over the estimator \(\hat{T}_{U_{\theta}}\).

**Statement 1**.: _Assume that \(T_{\mathcal{P}}\) and \(T_{U_{\theta}}\) are close to the identity. Then the minimization of the function_ \[\left\|\left|\mathcal{E}_{\mathcal{P},\theta}(\rho)\right\rangle-\hat{T}_{U_{\theta}}T_{\mathcal{P}}\left|\rho\right\rangle\right\|^{2} \tag{10}\] _over \(\hat{T}_{U_{\theta}}\) is equivalent to the minimization of_ \[\left\|\left|\mathcal{E}_{\mathcal{P},\theta}(\rho)\right\rangle-\hat{T}_{U_{\theta}}\left|\rho\right\rangle\right\|^{2}. \tag{11}\]

Proof.: For a noisy identity process described by the coherent noise model, \(\left|\mathcal{E}_{\mathcal{P},\theta}(\rho)\right\rangle\) equals \(T_{U_{\theta}}T_{\mathcal{P}}\left|\rho\right\rangle\). We show that \(\theta^{\prime}=\theta\) minimizes the expression \(\left\|T_{U_{\theta}}T_{\mathcal{P}}\left|\rho\right\rangle-T_{U_{\theta^{\prime}}}\left|\rho\right\rangle\right\|\) independent of the choice of \(\rho\):

\[\begin{split}&\min_{\theta^{\prime}}\left\|\left|\mathcal{E}_{\mathcal{P},\theta}(\rho)\right\rangle-T_{U_{\theta^{\prime}}}\left|\rho\right\rangle\right\|^{2}\\ &=\min_{\theta^{\prime}}\left\|T_{U_{\theta}}T_{\mathcal{P}}\left|\rho\right\rangle-T_{U_{\theta^{\prime}}}\left|\rho\right\rangle\right\|^{2}\\ &=\min_{\theta^{\prime}}\left\langle\rho\right|\Big{(}T_{\mathcal{P}}^{\top}T_{U_{\theta}}^{\top}-T_{U_{\theta^{\prime}}}^{\top}\Big{)}\Big{(}T_{U_{\theta}}T_{\mathcal{P}}-T_{U_{\theta^{\prime}}}\Big{)}\left|\rho\right\rangle\\ &=\min_{\theta^{\prime}}\left\langle\rho\right|\Big{(}T_{\mathcal{P}}^{\top}T_{U_{\theta}}^{\top}T_{U_{\theta}}T_{\mathcal{P}}-T_{U_{\theta^{\prime}}}^{\top}T_{U_{\theta}}T_{\mathcal{P}}-T_{\mathcal{P}}^{\top}T_{U_{\theta}}^{\top}T_{U_{\theta^{\prime}}}+T_{U_{\theta^{\prime}}}^{\top}T_{U_{\theta^{\prime}}}\Big{)}\left|\rho\right\rangle\,.\end{split} \tag{12}\]

Given that the noise channel is close to the identity, we take the approximation that the diagonal matrix \(T_{\mathcal{P}}\) commutes with the matrices \(T_{U_{\theta}}\) and \(T_{U_{\theta^{\prime}}}\). Furthermore, because of the unitarity of the coherent rotation channel, \(T_{U_{\theta}}^{\top}T_{U_{\theta}}=\mathbb{I}\) holds. Then, the minimization problem boils down to \[\min_{\theta^{\prime}}\left\langle\rho\right|\Big{(}\mathbb{I}+T_{\mathcal{P}}^{2}-T_{\mathcal{P}}\Big{(}T_{U_{\theta}}^{\top}T_{U_{\theta^{\prime}}}+\big{(}T_{U_{\theta}}^{\top}T_{U_{\theta^{\prime}}}\big{)}^{\top}\Big{)}\Big{)}\left|\rho\right\rangle\,. \tag{13}\] The off-diagonal part of \(T_{U_{\theta}}^{\top}T_{U_{\theta^{\prime}}}\) is anti-symmetric.
Consequently, \(T_{U_{\theta}}^{\top}T_{U_{\theta^{\prime}}}+\big{(}T_{U_{\theta}}^{\top}T_{U_{\theta^{\prime}}}\big{)}^{\top}\) is diagonal and smaller than or equal to \(2\,\mathbb{I}\). Thereby, all involved matrices are diagonal. From \(\mathbb{I}+T_{\mathcal{P}}^{2}\geq 2\,T_{\mathcal{P}}\), it follows that the expression in Eq. (13) is minimal when \(T_{U_{\theta}}^{\top}T_{U_{\theta^{\prime}}}+\big{(}T_{U_{\theta}}^{\top}T_{U_{\theta^{\prime}}}\big{)}^{\top}=2\,\mathbb{I}\). This equation has a unique solution, corresponding to \(T_{U_{\theta}}=T_{U_{\theta^{\prime}}}\), which finally implies \(\theta=\theta^{\prime}\). This statement implies that we can optimize for the coherent noise term without considering the present Pauli noise, allowing us to apply the closed-form solution of Eq. (5) or to conduct a least-squares optimization of \(\left\|\left|\mathcal{E}_{\mathcal{P},\theta}(\rho)\right\rangle-\hat{T}_{U_{\theta}}\left|\rho\right\rangle\right\|^{2}\). The approximation of the diagonal matrix \(T_{\mathcal{P}}\) commuting with the matrix \(T_{U_{\theta}}\) is limited by the term \(\max_{ijk}\left|p(i)-p(j)\right|\left|\theta_{k}\right|\ll 1\). The first part of this expression quantifies how isotropic the action of the Pauli noise is. For the depolarizing channel, which for a single qubit is illustrated as a uniformly contracting Bloch sphere, \(\max_{ij}\lvert p(i)-p(j)\rvert\) is zero. In that case, \(T_{\mathcal{P}}\) can be written as \(f\cdot\mathbb{I}\), with \(f\) being the depolarizing factor, and thus commutes with all other matrices. The second term of the expression, namely \(\max_{k}\lvert\theta_{k}\rvert\), is a measure of the maximal off-diagonal terms in the PTM modeling the coherent noise. As the matrix modeling the Pauli noise is diagonal, the violation of the commutation is only facilitated by the off-diagonal elements of \(T_{U_{\theta}}\). The geometric pictures of scaling and rotation help to understand this commutation relation better.

## Appendix B Coherent noise drift

We conducted the characterization of the gate layer represented in Fig. 3 again after 6 hours as well as after 19 days. The shifts in the characterization results between those three experiments are shown in Fig. 7. First, we observe that the leading coherent errors in the system did not change over the 19 days. However, as expected, the changes observed for the two experiments within 6 hours are significantly smaller than the differences over the complete time span. After 6 hours, the maximum deviation for single-qubit coherent errors was 0.009 radians, and 0.004 radians on average. The two-qubit errors changed by at most 0.008 radians, and on average by 0.003 radians. Over the total period (i.e. after 19 days), the most significant single-qubit coherent errors changed by at most 10% (corresponding to about 0.03 radians). The two-qubit coherent rotation errors changed by at most 0.018 radians. The average deviation was 0.007 radians for the single-qubit errors and 0.003 radians for the two-qubit errors. These results suggest that the platform is affected by systematic coherent errors which are not influenced by the hourly and daily re-calibrations of the system, during which the qubits' frequency and readout angle, as well as the pulse amplitudes and phases of the basic single- and two-qubit gates, are calibrated [46].

## Appendix C Complementary mitigation data

Fig. 8 displays the complete data set used to create Fig. 6.
The influence on the expectation values that are ideally 0 is clearly visible for all states and all numbers of repetitions, as they disperse much less in the mitigated case. Furthermore, the expectation values of the operators of which the prepared state is an eigenstate are closer to the ideal value of 1 in the mitigated case. While the mitigation did not work equally well for all 9 states, the whole data set gives a clear indication of the positive effect of the presented coherent noise mitigation scheme.

Figure 7: Coherent noise drift. We repeated the characterization of Fig. 3 at 3 different times. Subplot (a) shows the baseline data of Fig. 3. Subplot (b) displays the observed changes in the coherent error characterization over a time span of 19 days, and subplot (c) exhibits the differences over 6 hours. The experiments were run on the 7-qubit IBM Quantum Processor ibm_lagos [46] on September 22, 2022, and October 11, 2022.
2304.11740
A Neuro-Symbolic Approach for Enhanced Human Motion Prediction
Reasoning on the context of human beings is crucial for many real-world applications, especially for those deploying autonomous systems (e.g. robots). In this paper, we present a new approach for context reasoning to further advance the field of human motion prediction. We therefore propose a neuro-symbolic approach for human motion prediction (NeuroSyM), which weights the neighbourhood interactions differently by leveraging an intuitive technique for spatial representation called Qualitative Trajectory Calculus (QTC). The proposed approach is experimentally tested on medium and long term time horizons using two architectures from the state of the art, one of which is a baseline for human motion prediction and the other a baseline for generic multivariate time-series prediction. Six datasets of challenging crowded scenarios, collected from both fixed and mobile cameras, were used for testing. Experimental results show that the NeuroSyM approach outperforms the baseline architectures in most cases in terms of prediction accuracy.
Sariah Mghames, Luca Castri, Marc Hanheide, Nicola Bellotto
2023-04-23T20:11:40Z
http://arxiv.org/abs/2304.11740v1
# A Neuro-Symbolic Approach for Enhanced Human Motion Prediction ###### Abstract Reasoning on the context of human beings is crucial for many real-world applications, especially for those deploying autonomous systems (e.g. robots). In this paper, we present a new approach for context reasoning to further advance the field of human motion prediction. We therefore propose a neuro-symbolic approach for human motion prediction (NeuroSyM), which weights the neighbourhood interactions differently by leveraging an intuitive technique for spatial representation called Qualitative Trajectory Calculus (QTC). The proposed approach is experimentally tested on medium and long term time horizons using two architectures from the state of the art, one of which is a baseline for human motion prediction and the other a baseline for generic multivariate time-series prediction. Six datasets of challenging crowded scenarios, collected from both fixed and mobile cameras, were used for testing. Experimental results show that the NeuroSyM approach outperforms the baseline architectures in most cases in terms of prediction accuracy. ## I Introduction Human motion prediction has been the focus of many researchers to date, ranging from single human motion prediction (i.e. with no context) to the most developed frameworks for context-aware (dynamic and static context) human motion prediction. The importance given to this area of study traces back to the crucial impact it has on many real-world applications, including but not limited to video surveillance, anomaly detection, action and intention recognition, autonomous driving, and robot navigation. While many studies on human motion prediction have relied on datasets collected from a fixed camera to enhance the accuracy and time complexity of their frameworks, very few have studied the field from a mobile camera perspective, where the problem becomes more challenging due to restrictions on the global observability of the scene context and interactions. Hence, in this work we study the field from both fixed and mobile camera perspectives, targeting autonomous systems (e.g. robotics) applications. Reasoning on the context (e.g. multi-agent interactions, static key objects) of humans is crucial primarily for safe autonomous system navigation, where human-robot coexistence, for example, is increasingly taking part in domestic, healthcare, warehouse, and transportation domains. A robot tasked to deliver an order to a table in a restaurant, bring a medicine to a patient in a hospital, or clean a road's sidewalks needs to update on-the-fly its internal state representation of the dynamic agents in the scene and, therefore, update its target plan. In addition to safety in navigation, reasoning on the motion of multiple agents also presents the advantage of implicit intent communication. For example, a social robot detecting a conversational group in the environment and predicting that the group will hold on to its current interaction for a while makes sure not to unnecessarily interfere with the group. On the other hand, if the robot predicts that a person is coming towards it, e.g. to handle a box in a warehouse, it can select an action that prioritizes the responsiveness to the human's intent. Context reasoning is presented in the literature in the form of human-human and human-object interaction reasoning. Fig. 1: Conceptual illustration of the social cafe-bar scenario used for reasoning on the context and interactions of humans in dense environments.
The authors in [1] have jointly modeled human-robot and human-human interactions in a deep reinforcement learning framework to drive robot navigation. In [2], instead, the authors learn an optimal local trajectory from a global plan by fusing human trajectories, Lidar features, global path, and odometry features in an attention layer. Context-awareness methods have also been proposed to deal with the challenges faced by the long-term prediction of single human motion [3, 4, 5, 6, 7, 8]. In the previous works, interactions are processed either in a grid-based pooling approach or in a global pooling mechanism to deal with the problem of dynamic neighbourhood size. Though context-awareness has yielded better accuracy in predicting human motion, one problem remains unaddressed. In the area of context-aware human motion prediction, a research gap remains: all neighbourhood interactions are embedded in the learning process without reasoning on which ones are more or less stable (i.e. reliable) than others, and hence more or less important in affecting the future states of a single agent. Indeed, when interactions are defined spatially, and hence retrieved from the relative motion between pairs of agents, not all neighbourhood interactions are of equal reliability and hence of equal importance to the prediction of the future states of a single human being, as can be the case for the spatial interactions interconnecting the queue line and the table in Fig. 1. In this work, we address the problem of context-aware human motion prediction by injecting a-priori information on the interactions in a neuro-symbolic approach. Among spatial interaction representations, the qualitative approach offers an intuitive way to describe interactions. Qualitative spatial interactions are defined as symbolic representations of interactions between a pair of agents in the spatial domain, i.e. 2D navigation. One way to model qualitative spatial interactions in multi-agent scenarios is by using the qualitative trajectory calculus (QTC) [9, 10]. QTC-based models of moving agent pairs can be described by different combinations of QTC symbols that represent spatial relations between pairs of interacting agents, such as relative distance (i.e. moving towards/away), velocity (i.e. moving faster/slower), and orientation (i.e. moving to the left/right). The contribution of this paper is therefore three-fold: (i) proposing a novel neuro-symbolic approach for enhancing human motion prediction (denoted NeuroSyM) using a-priori information on the spatial interactions between pairs of agents, (ii) experimentally evaluating the proposed framework on two architectures from the state of the art, one of which is a baseline for human motion prediction and the other a baseline for generic multivariate time-series prediction, and on different datasets collected from both fixed and mobile camera perspectives, (iii) releasing the source code as a _Github_ repository1 with some qualitative results to help in the testing and integration of NeuroSyM on other baselines for human motion prediction. Footnote 1: [https://github.com/sariahmghames/NeuroSyM-prediction](https://github.com/sariahmghames/NeuroSyM-prediction) The remainder of the paper is as follows: Sec. II presents an overview of the related works;
Sec. III explains the approach adopted to reason on the context of humans in dense scenes; Sec. IV illustrates and discusses the results from experiments conducted on open-source datasets for human motion prediction and social navigation; finally, Sec. V concludes by summarising the main outcomes and suggesting future research work. ## II Related Works **Context-aware human motion prediction:** The state of the art shows extensive work in the area of context-aware human motion prediction. Among those works, some incorporate spatio-temporal dependencies of interactions [11, 12, 13], while others are limited to spatial ones only [3, 4, 5, 6, 7, 8]. Another sub-categorisation differentiates related works into those dealing with both dynamic and static context [11, 14, 15, 4, 7], those neglecting the dynamic context [8], and those focusing on the dynamic context of interactions only [12, 13, 5, 6]. The Dynamic and Static Context-aware Motion Predictor (DSCMP) in [11] integrates dynamic interactions between agents in a Social-aware Context Module (SCM), whereas the static context is incorporated in a latent space with a semantic scene mapping. The two most common baseline architectures used in the literature for human motion prediction are the Social-LSTM (S-LSTM) [3] and the Social Generative Adversarial Network (SGAN) [5]. They use a spatially-aware pooling mechanism for incorporating the hidden states of proximal dynamic agents as a way to overcome the problem of a variable and (potentially) large number of people in a scene. SGAN, however, has outperformed S-LSTM in terms of both accuracy and time complexity by avoiding the grid-based pooling technique. In parallel, SGAN outperformed Stgat [13] in terms of time complexity and parameter consumption. In this work, the fundamental SGAN architecture from the literature is used to evaluate our NeuroSyM approach for motion prediction, leaving room for potential integration of other architectures with static context awareness (e.g. the image-driven static context of [11]). Here we consider only raw trajectories (or metric coordinates) of the context (dynamic and/or static) as possible input to the deployed model architectures. **Human-human interactions modeling:** The methods for modeling interactions with nearby dynamic agents can be classified into two types of problem: (a) one-to-one modeling, and (b) crowd modeling [16, 17, 18]. One-to-one interaction modeling has been presented in the literature in the form of quantitative or qualitative representations. Quantitative representations of interactions leverage a multi-layer perceptron to embed the relative pose (positions or velocities) between pairs of agents, as in [1, 3, 5]. Qualitative representations of interactions were used in [19] and [20] to model human-robot spatial interactions using QTC. In [19], the use of qualitative rather than quantitative representations for analysing human-robot spatial interactions (HRSI) was motivated by the need for a more intuitive understanding of the observed interactions. In [21] and [22], similar models are used to implement human-aware robot navigation strategies. The prediction of interactions in [21] is based on a Bayesian temporal model limited to single human-robot pairs, without considering nearby static or dynamic objects, which limits the prediction performance. In our study of single human motion, we rely on one-to-one (i.e. pairwise) interaction modeling due to the different nature of the interactions that may occur in the neighbourhood of a single agent.
Hence, we build on previous works on the qualitative representation of interactions [19] to weight the quantitative embedding of neighbourhood interactions. ## III NeuroSyM Prediction Approach ### _Problem Definition_ While most, if not all, works in the area of context-aware human motion prediction embed equally all kinds of interactions in the pre-defined neighbourhood of a single agent, in this work we formulate the problem of context-aware human motion prediction in terms of weighted interaction embeddings between pairs of agents. We show that a-priori information on the kind of interactions helps the network predict motion with better accuracy. In the following, we present the formulation of spatial interactions, which will later be used to label (or weight) the interactions as symbolic knowledge called by the neural model. ### _Spatial Interactions: a Qualitative Formulation_ A qualitative spatial interaction is defined by a vector of \(m\) QTC relations [9], which consist of qualitative symbols (\(q_{i}\), \(i\in\mathbb{Z}\)) in the domain \(U=\{-,0,+\}\). We can distinguish between four types of QTC: (a) \(QTC_{B}\) basic, (b) \(QTC_{C}\) double-cross, (c) \(QTC_{N}\) network, and (d) \(QTC_{S}\) shape. Here, we focus on the use of \(QTC_{C}\), since it better represents the dynamics of the agents in our application scenario. Two types of \(QTC_{C}\) exist in the literature: \(QTC_{C_{1}}\), with four symbols \(\{q_{1},q_{2},q_{3},q_{4}\}\), and \(QTC_{C_{2}}\), with six symbols \(\{q_{1},q_{2},q_{3},q_{4},q_{5},q_{6}\}\). The symbols \(q_{1}\) and \(q_{2}\) represent the towards/away (relative) motion between a pair of agents; \(q_{3}\) and \(q_{4}\) represent the left/right relation; \(q_{5}\) indicates the relative speed, faster or slower; finally, \(q_{6}\) depends on the (absolute) angle with respect to the reference line joining a pair of agents. The \(QTC_{C_{1}}\) type is illustrated in Fig. 2 for a case of interaction between three body points. Given the time series of two moving points, \(P_{k}\) and \(P_{l}\), the qualitative interaction between them is expressed by the symbols \(q_{i}\) as follows: \[\begin{split}(q_{1})\quad&-:d(P_{k}|t^{-},P_{l}|t)>d(P_{k}|t,P_{l}|t)\\ &0:d(P_{k}|t^{-},P_{l}|t)=d(P_{k}|t,P_{l}|t)\\ &+:d(P_{k}|t^{-},P_{l}|t)<d(P_{k}|t,P_{l}|t)\\ (q_{2})\quad&\text{same as }q_{1}\text{, but swapping }P_{k}\text{ and }P_{l}\\ (q_{3})\quad&-:\vec{P_{k}^{t^{-}}P_{k}^{t}}\wedge\vec{P_{l}^{t}P_{k}^{t}}<0\\ &0:\vec{P_{k}^{t^{-}}P_{k}^{t}}\wedge\vec{P_{l}^{t}P_{k}^{t}}=0\\ &+:\text{all other cases}\\ (q_{4})\quad&\text{same as }q_{3}\text{, but swapping }P_{k}\text{ and }P_{l}\\ (q_{5})\quad&-:\|\vec{V_{k}^{t}}\|<\|\vec{V_{l}^{t}}\|\\ &0:\|\vec{V_{k}^{t}}\|=\|\vec{V_{l}^{t}}\|\\ &+:\text{all other cases}\\ (q_{6})\quad&-:\theta(\vec{V_{k}^{t}},\vec{P_{k}^{t}P_{l}^{t}})<\theta(\vec{V_{l}^{t}},\vec{P_{l}^{t}P_{k}^{t}})\\ &0:\theta(\vec{V_{k}^{t}},\vec{P_{k}^{t}P_{l}^{t}})=\theta(\vec{V_{l}^{t}},\vec{P_{l}^{t}P_{k}^{t}})\\ &+:\text{all other cases}.\end{split}\] where \(d(\cdot,\cdot)\) is the Euclidean distance between two positions, \(\vec{V^{t}}\) is the velocity vector of a body point at time \(t\), \(\theta(\cdot,\cdot)\) is the absolute angle between two vectors, \(\wedge\) denotes the cross product between two vectors (a scalar in the 2D plane), \(t^{-}\) denotes the previous time step, and \(\vec{P^{a}P^{b}}\) is the vector from position \(P^{a}\) to position \(P^{b}\). In this paper, we propose a neuro-symbolic approach for motion prediction (NeuroSyM) that can be applied to any related work in the field to enhance the prediction accuracy. In order to narrow down the study, we take advantage of \(QTC_{C_{1}}\) to label the interactions as described in the following section. We therefore leave the investigation of the additional information provided by \(QTC_{C_{2}}\) to future work. ### _Data Labeling_ We base our labeling technique for pairwise spatial interactions on the concept of the Conceptual Neighbourhood Diagram (CND) presented in [23] and in the original work on qualitative spatial interactions [9]. As per [9], the construction of a CND (as in Fig. 3) for QTC is based on the notion of conceptual distance (**d**), which defines the closeness of two QTC states at times t and t', respectively, and can be calculated as follows: \[\mathbf{d}_{QTC}^{QTC^{\prime}}=\sum_{q_{i}}|\;q_{i}^{QTC}-q_{i}^{QTC^{\prime}}\;|, \tag{1}\] where, for practical reasons, the symbols "+" and "-" are associated with the numerical values "+1" and "-1", as in [9]. In Fig. 3 (left), for each link (i.e. edge) between conceptual neighbours (the nodes), the conceptual distance between the adjacent relations is indicated. In a CND, and due to the laws of continuity, the conceptual neighbours of each particular relation constitute only a subset of the base relations. For example, \(QTC_{C_{1}}\) has 81 basic states or relations (each of the four symbols \(q_{i}\) can take 3 different values from the domain \(U\)), but the conceptual neighbours of the \(\{-,-,-,-\}\) relation, as illustrated in Fig. 3 (right), reduce from 80 to 15 for the following reasons [23]: * Transition from "+" to "-" (and vice versa) is impossible without passing through 0; hence, a transition from \(\{-,-,+,+\}\) to \(\{-,+,+,+\}\) is impossible without passing through \(\{-,0,+,+\}\). * "0" dominates "+" and "-"; hence, a transition from \(\{+,-,-,0\}\) to \(\{+,-,0,+\}\) is impossible without passing through \(\{+,-,-,+\}\) or \(\{+,-,0,0\}\). * The combination of both former rules. For the sake of labeling, we omit the need for information on the conceptual distance between states, and we focus on the possible transitional states for each QTC relation given the state at time t. The CND for \(QTC_{C_{1}}\) is not completely shown in Fig. 3 (right) as it is too complex to visualise on a two-dimensional medium. Fig. 2: A case of \(QTC_{C_{1}}\) representation of interactions between three body points \(P_{k}\), \(P_{l}\), and \(P_{q}\). The label (\(\alpha_{cnd}\)) for each of the 81 states of a \(QTC_{C_{1}}\) type of qualitative calculus is formulated as follows:
In this paper, we propose a neuro-symbolic approach for motion prediction (NeuroSyM) that can be implemented to every related work in the field to enhance the accuracy of the motion. In order to narrow down the study, we take advantage of \(QTC_{C_{1}}\) to label the interactions as described in the following section. We leave therefore the investigation into the additional information provided by \(QTC_{C_{2}}\) to our future work. ### _Data Labeling_ We leverage our labeling technique for pairwise spatial interactions on the concept of Conceptual Neighbourhood Diagram (CND) presented in [23] and in the original work of qualitative spatial interactions in [9]. As per [9], the construction of a CND (as in Fig. 3) for QTC is based on the notion of conceptual distance (**d**), which is used to define the closeness of two QTC states at time t and t', respectively, and can be calculated as follows: \[\textbf{d}_{QTC}^{QTC^{\prime}}=\sum_{q_{i}}|\;q_{i}^{QTC^{\prime}}-q_{i}^{ QTC^{\prime}}\;|, \tag{1}\] where, for practical reasons, the symbols "+" and "-" are associated to the numerical values "+1" and "-1", as in [9]. In Fig. 3 (left), for each link (i.e. edge) between conceptual neighbours (the nodes) the conceptual distance between the adjacent relations is indicated. In a CND, and due to the laws of continuity, the conceptual neighbours of each particular relation constitute only a subset of the base relations. For example, \(QTC_{C_{1}}\) has 81 basic states or relations (each symbol \(q_{i}\) has 3 different possible types of transitions from domain \(U\)) but the conceptual neighbours of \(\{-,-,-,-\}\) relation as illustrated in Fig. 3 (right) reduce from 80 to 15 for the following reasons [23]: * Transition from "+" to "-" (and vice versa) is impossible without passing through 0, hence transition from \(\{-,-,+,+\}\) to \(\{-,+,+,+\}\) is impossible without passing through \(\{-,0,+,+\}\). * "0" Dominates "+" and "-", hence a transition from \(\{+,-,-,0\}\) to \(\{+,-,0,+\}\) is impossible without passing through \(\{+,-,-,+\}\) or \(\{+,-,0,0\}\). * The combination of both former rules. For the sake of labeling, we omit the need for information on the conceptual distance between states, and we focus on the possible transitional states for each QTC relation given the state at time t. The CND for \(QTC_{C_{1}}\) is not completely shown in Fig. 3 (right) as it is too complex to visualise on a two-dimensional medium. The label (\(\alpha_{end}\)) for each of the 81 states of a \(QTC_{C1}\) type of qualitative calculus is formulated as follows: Fig. 2: A case of \(QTC_{C_{1}}\) representation of interactions between three body points \(P_{k}\), \(P_{l}\), and \(P_{q}\). \[\alpha_{cnd}=\Pr(QTC^{{}^{\prime}}|QTC^{{}^{\prime}})=\frac{1}{N_{Tr}} \tag{2}\] where \(N_{Tr}\) represents the number of transitional states. \(\alpha_{cnd}\) represents the level of stability or reliability of a transitional state. The higher the number of possible transitional states, the lower the likelihood to transition into a single state and vice versa. In Fig. 3, the likelihood to transition from \(\{-,-,-,-\}\) into \(\{0,0,0,0\}\) is 0.067, however the likelihood increases to 0.2 if the 15 possible transitional states reduces to 5, rendering the \(\{0,0,0,0\}\) state more reliable in the learning process. Given an interaction at time t, we associate its label to the interaction (observed or predicted) at \(t+1\). 
In most related works, an interaction between agents A and B is calculated as an embedding of the relative pose between them, as follows: \[Inter_{AB}=Dense(X_{B}-X_{A}) \tag{3}\] where \(Dense(\cdot)\) is the embedding layer. Given the pose \(X\) of each agent at time t, a QTC state can be formulated and the corresponding label \(\alpha_{cnd}\) is loaded from a dictionary. Hence, the symbolic reasoning transforms Eq. 3 into the form \(\alpha_{cnd}\,Inter_{AB}\). From a practical point of view, the symbolic knowledge of interactions between two moving body points can be readily exploited by any neural architecture for context-aware motion prediction, since the CND dictionary (associating QTC states with their corresponding \(\alpha_{cnd}\)), generated for a specific QTC configuration, remains the same regardless of the data distribution domain. ## IV Experiments As anticipated in Sec. II, we evaluated our approach for enhancing human motion prediction on two architectures from the state of the art, using raw trajectories as input. The first one is a well-known baseline architecture, the socially-acceptable trajectories with generative adversarial networks (SGAN [5]), used in the literature to enhance the accuracy and speed of human motion prediction in crowds. It relies on datasets collected from a fixed top-down camera in public spaces, capturing the entire scene dynamics (ETH dataset [24] - sequences ETH and Hotel), and on datasets collected from a mobile stereo rig mounted on a car (UCY dataset [25] - sequences Zara01-02 and Univ). In order to generalise our evaluation to robotics applications, we chose another well-known dataset, the JackRabbot (JRDB [26]), which provides multi-sensor data of human behaviours from a mobile robot perspective in populated indoor and outdoor environments (Fig. 3(c)). JackRabbot was never exploited in the literature for human motion prediction, although it clearly benefits applications of social robot navigation, where local interactions can be extracted from the on-board 360\({}^{\circ}\) Lidar (Velodyne) and Fisheye camera sensors. To this end, we chose to use JackRabbot on a generic network architecture for time series prediction where the following features can be incorporated: (a) the ability to integrate a dynamic context; (b) the ability to integrate key static objects of potential interactions (e.g. door, table, bar), differently from S-LSTM and SGAN; (c) the ability to test our neuro-symbolic approach on prediction architectures that, instead of using a pooling mechanism to overcome the size problem of dynamic input series (representing the neighbourhood in social scenarios), weight every single input (i.e. neighbour) by giving special attention to each one separately. One of the recent architectures that satisfies these features is the dual-stage attention-based recurrent neural network (DA-RNN) developed for time-series forecasting in [27]. ### _Neuro-Symbolic SGAN_ **NeuroSyM SGAN Architecture:** The original SGAN architecture (SGAN-20VP-20 in [5]) has shown good performance in terms of accuracy, collision avoidance, and time complexity with respect to preceding baselines, such as the Social-LSTM [3]. The core of SGAN is a generator and a discriminator trained adversarially. The generator model \(G\) has the role of generating candidate trajectories, while the discriminator model \(D\) estimates the probability that a sample comes from the training data (i.e. real) rather than from the generator output samples.
Fig. 4: Examples from the UCY and JackRabbot datasets. Fig. 3: (left) The complete CND for \(QTC_{B_{1}}\) in n-dimensional space. The \(QTC_{B_{1}}\) has only the \(q_{1}\) and \(q_{2}\) symbols. Straight edges represent a conceptual distance of 1, while dashed edges represent a conceptual distance of 2. (right) One part of the \(QTC_{C_{1}}\) CND illustrating the possible transitions of the QTC relation \(\{-,-,-,-\}\), resulting in 15 possible transitions and therefore in an \(\alpha_{cnd}\) of 0.067 weighting that same QTC relation at the next time step. The generator consists of an encoder and a decoder, separated by a pooling mechanism, while the discriminator is mainly an encoder. In SGAN, a variety loss is introduced on top of the adversarial (min-max) loss in order to encourage the generator to output diverse samples, thanks to a noise distribution injected into the pooling mechanism output. For details on the SGAN architecture, the reader is advised to refer to the original work [5]. The performance measures used in SGAN for the evaluation process are the average displacement error (ADE) and the final displacement error (FDE) of the predicted trajectory (\(\bar{X}\)). The measures are calculated as follows: \[ADE=\frac{\sum_{i=1}^{N}\sum_{t=1}^{T_{pred}}\|\bar{X}_{t}^{i}-X_{t}^{i}\|_{2}}{N\cdot T_{pred}} \tag{4}\] \[FDE=\frac{\sum_{i=1}^{N}\|\bar{X}_{T_{pred}}^{i}-X_{T_{pred}}^{i}\|_{2}}{N} \tag{5}\] where \(N\) is the total number of training trajectories. The neuro-symbolic version of SGAN proposed in this paper is illustrated in Fig. 5, highlighting the difference with respect to the original pooling mechanism of SGAN [5]. NeuroSyM acts mainly on the pooling mechanism of the predictive model, where it represents human-human interactions by (a) first embedding their relative poses over all the observed states of each agent through a dense layer, then (b) weighting the embedded relative poses based on the CND-inspired label (\(\alpha_{cnd}\)) associated with the interaction at the previous time step, and finally (c) max-pooling the weighted embeddings across neighbours in the global scene. On the contrary, the original SGAN considers relative poses at the final observed state only, with no attention given to the reliability or stability level the interactions might have in helping to infer the future states of the agent under consideration.
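Before turning to the results, a minimal NumPy sketch of the two evaluation metrics of Eqs. (4)-(5); the array shapes and the toy trajectories are illustrative assumptions:

```python
import numpy as np

def ade_fde(pred, true):
    """ADE and FDE of Eqs. (4)-(5).

    pred, true: arrays of shape (N, T_pred, 2) holding the predicted and
    ground-truth (x, y) positions of N trajectories over T_pred steps.
    """
    dists = np.linalg.norm(pred - true, axis=-1)  # (N, T_pred) step errors
    ade = dists.mean()            # average over all steps and trajectories
    fde = dists[:, -1].mean()     # error at the final predicted step only
    return ade, fde

# Toy usage with random trajectories
rng = np.random.default_rng(0)
true = rng.normal(size=(5, 12, 2))
pred = true + 0.1 * rng.normal(size=true.shape)
print(ade_fde(pred, true))
```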
_Results:_ For a reliable comparison between SGAN and NeuroSyM SGAN, we retrained the former on our computing system (11th Gen Intel Core i7-11800H processor and NVIDIA GeForce RTX 3080 16GB GPU), which allowed us to replicate almost the same hyper-parameters of the original SGAN work, except for the batch size, limited in our case to 10 instead of 64. A comprehensive list of the hyper-parameters used to train and validate all model architectures is reported in the appendix (Sec. V). The ADE and FDE results for both architectures are reported together with their standard deviations (DE-STD and FDE-STD) in Table I for \(T_{pred}=8\) steps (i.e. 3.2 seconds) and 12 steps (i.e. 4.8 seconds), on the five sequences from the publicly available ETH and UCY datasets. The results show a better ADE, FDE, DE-STD, and FDE-STD for the NeuroSyM approach compared to the original SGAN. The relative gain in terms of error drop is represented in Table I by a positive percentage for all four measures with NeuroSyM with respect to SGAN on each dataset. The average relative gain for ADE, FDE, DE-STD, and FDE-STD over the 5 datasets is 60.84%, 58.4%, 28%, and 33.68%, respectively, for \(T_{pred}=8\); and 78.58%, 76.97%, 43.5%, and 46.3%, respectively, for \(T_{pred}=12\). _NeuroSyM DA-RNN Architecture:_ The original DA-RNN architecture [27] implements a dual-stage attention mechanism for time-series forecasting. The dual-stage network consists of an encoder with an input attention module weighting the \(n^{*}\) time-series inputs spatially, each of length \(T_{h}\), where \(T_{h}\) is the observed time history. The encoder is then followed by a decoder with a temporal attention layer, capturing the temporal dependencies in the input series. The encoder and decoder are based on an LSTM recurrent neural network. The network outputs the prediction of one time series of length \(T_{f}\), where \(T_{f}\) is the predictive time horizon. The reader can refer to [27] for a detailed explanation of the network components, where \(T_{f}\) was limited to 1. The NeuroSyM version of DA-RNN we propose in this paper takes advantage of the symbolic knowledge of the spatial interactions between pairs of agents. In DA-RNN, the encoder attention weights ("\(\alpha\)" in Fig. 6) highlight the importance of each input series at time \(t\) on the output prediction at \(t+1\). The input attention weights in DA-RNN are calculated as follows: \[\alpha_{t}^{k}=\frac{\exp(e_{t}^{k})}{\sum_{i=1}^{n}\exp(e_{t}^{i})} \tag{6}\] where \(e_{t}^{k}\) is the embedding of the \(k^{th}\) input series at time \(t\). It is implemented as: \[e_{t}^{k}=\mathrm{dense}[\tanh(\mathrm{dense}(\mathbf{h}_{t-1};\mathbf{s}_{t-1})+\mathrm{dense}(\mathbf{x}_{1:T_{h}}^{k}))] \tag{7}\] where \(\mathbf{h}_{t-1}\) and \(\mathbf{s}_{t-1}\) are the hidden and cell states of the encoder LSTM at the previous time step. The NeuroSyM DA-RNN acts on the input series embedding \(e_{t}^{k}\) before the softmax function (Eq. 6) is applied to it. Hence, the NeuroSyM approach transforms Eq. 7 into \(\alpha_{cnd,t}^{k}\,e_{t}^{k}\), updating the encoder attention weights with a-priori knowledge of the reliability or stability of each input series. For applications of human motion prediction in crowds (i.e. with context), \(\alpha_{cnd,t}^{k}\) is generated from Eq. 2. Each input series represents the motion history of a neighbour agent, whereas the first time series is the motion history of the considered person, and the output is the predicted motion of that specific agent. Fig. 6 illustrates schematically where the NeuroSyM module intervenes on the original DA-RNN architecture, with the injection of a CND layer at the interface between the embedding and the softmax layers.
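A minimal NumPy sketch of this weighting step; the embedding values and the a-priori weights below are placeholders, standing in for the learned embeddings of Eq. 7 and the CND labels of Eq. 2:

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Placeholder embeddings e_t^k of n = 4 input series (Eq. 7)
e_t = np.array([0.8, 0.3, -0.2, 0.5])

# A-priori CND labels of the corresponding pairwise interactions (Eq. 2)
alpha_cnd_t = np.array([0.2, 0.067, 0.067, 0.125])

attention_darnn = softmax(e_t)                   # original DA-RNN (Eq. 6)
attention_neurosym = softmax(alpha_cnd_t * e_t)  # NeuroSyM: weighted embedding

print(attention_darnn, attention_neurosym)
```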
**Data Processing:** Dense social scenarios such as the ones presented in the JackRabbot dataset often have an unpredictable number of people entering (\(P_{e}\)) and leaving (\(P_{l}\)) the environment, possibly leading to a combinatorial explosion in the input size of the predictive model and in its number of training parameters (i.e. when \(P_{e}\gg P_{l}\) and \(P_{e}\) is very large). Indeed, \(n\) individuals in a scene result in \(n(n-1)/2\) pairwise data points. This is in addition to the difficulty of deploying an online model with a variable input size. As a consequence, we implement a crowd clustering approach on JRDB for local interaction embedding (as shown in Figs. 1 and 6). For each agent \(i\) in a given scene, we generate a cluster with a fixed interaction radius \(R=3.7\,m\). The latter is selected based on the proxemics literature [28], where the social distance for interactions among acquaintances is indeed between \(2.1\,m\) and \(3.7\,m\). Each cluster includes \(n\) input series, with \(n\) being the maximum number of agents entering the cluster of agent \(i\) in a time interval \(T\). The maximum number of input series among all clusters, \(n^{*}\), is fixed for practical (training) purposes. Each cluster is then post-processed to include \((n^{*}-n)\) additional input series with complementary "fake" values. We make use of the open-source annotated 3D point clouds from JRDB, provided as metric coordinates of the centroids of human (dynamic context) bounding boxes, extracted from the upper Velodyne sensor, as raw data and ground truth for our network architecture. The raw data are further processed to extract QTC representations of the spatial interactions between pairs of agents in a cluster. We then make use of the CND dictionary to associate each local QTC representation with its corresponding weight \(\alpha_{cnd,t}^{k}\), which is then used by the NeuroSyM architecture as a-priori information on the reliability of each input series and hence their importance for the predicted output. The a-priori information is then used to weight the embedding of the inputs. The environments considered in JRDB are fairly crowded. Among them, we selected a cafe shop (_bytes-cafe-2019-02-07_0_). DA-RNN embeds input series using dense layers, facilitating the integration of static context as series of constant metric coordinates. In the cafe scenario, the static context includes objects such as the bar order and check-out points, the exit door, and the drinking water station, as illustrated in Fig. 1. These objects were manually selected based on a previous investigation to identify the most common ones used by people in the scenario, although in the future we plan to learn them automatically in order to adapt to different environments. The spatial coordinates of the selected key objects are incorporated in the network architecture like those of any other dynamic agent. _Results:_ We evaluated the DA-RNN and the NeuroSyM DA-RNN on the cafe scenario over medium (i.e. 48 time steps, or 3.2 seconds) and long (i.e. 80 time steps, or 5.33 seconds) term horizons.
_Results:_ We evaluated the DA-RNN and the NeuroSyM DA-RNN on the cafe scenario over medium (i.e. 48 time steps, or 3.2 seconds) and long (i.e. 80 time steps, or 5.33 seconds) term horizons. The parameters for medium- and long-term horizon prediction were chosen based on the relevant literature on human motion prediction [5, 11]. The results of the DA-RNN architecture on the JRDB dataset, and of the NeuroSyM DA-RNN, are reported in Table II, showing the root mean square error (RMSE) and mean absolute error (MAE) between the predicted \((x^{\prime},y^{\prime})\) coordinates and their true labels \((x,y)\). We can clearly see that the NeuroSyM version of the architecture decreases the RMSE by 22% on the 48-step prediction horizon, while degrading performance by 4% on the longer 80-step horizon. At the same time, the neuro-symbolic approach decreases the MAE by 21% on the 80-step horizon, while degrading the 48-step prediction by 3%. Although the improvement of NeuroSyM DA-RNN is superior, by a large extent, to its counterpart on one of the two horizon windows, the unequal performance suggests that the prediction is affected by some outliers on the long term. This issue will be addressed in our future work by exploring the influence of the cluster radius selection on the prediction, a factor that we expect to affect the hidden-state embedding of the relative context. For a complete performance evaluation, another fundamental point that we will address in the future is the difference in computational cost between neuro-symbolic architectures and their neural counterparts.

## V Conclusion

In this work, we presented a neuro-symbolic approach for context-aware human motion prediction (NeuroSyM) in dense scenarios, leveraging a qualitative representation of interactions between dynamic agents to assess the type of neighbourhood interactions and weight them accordingly. We formulate the spatial interactions in terms of a qualitative trajectory calculus (QTC) and we use the conceptual neighbourhood diagram (CND) to anticipate possible interactions that might influence the future state of an individual agent. The likelihood of the next-step interaction state is used to label it given the current state of each agent (position and interaction). We tested the NeuroSyM approach on a fundamental baseline architecture for context-aware human motion prediction, i.e. SGAN, and on another baseline architecture for multivariate time-series prediction, i.e. DA-RNN.
Differently from SGAN, where interactions are pooled altogether, DA-RNN includes an attention mechanism on the input time series. We show that, in most cases, our neuro-symbolic approach outperforms the baseline architectures in terms of prediction accuracy on the medium- and long-term horizons. We plan in our future work to test the NeuroSyM approach for motion prediction on other architectures, such as S-LSTM and those incorporating static context in addition to the dynamic one. Also, we will exploit the proposed neuro-symbolic approach for human motion prediction in social robot navigation environments, incorporating causal (symbolic) models from the literature, such as [29, 30]. \begin{table} \begin{tabular}{l|l|l} Architecture & RMSE & MAE \\ \hline \hline DA-RNN (Baseline) & 3.61 / **3.572** & **2.097** / 2.753 \\ \hline NeuroSyM DA-RNN & **2.815** / 3.728 & 2.162 / **2.166** \\ \hline Relative Gain (\%) & +22 / -4.37 & -3.1 / +21.32 \\ \hline \end{tabular} \end{table} TABLE II: Performance comparison between the baseline architecture DA-RNN and the NeuroSyM approach on the JackRabbot dataset. The results' format refers to the 48/80 prediction time steps. RMSE and MAE values are in meters, and the best results are highlighted in bold (i.e. the lower the error, the better).

Fig. 6: A neuro-symbolic approach for attention-based time-series prediction models. Differently from SGAN-like architectures, attention-based mechanisms have no pooling modules. The diagram is extended from [27] and modified for multi-step attention-based context-aware human motion prediction in crowded environments. The inputs are \(n^{*}\) time series of agents, within a cluster centered at the first time series, while the output is the prediction of the cluster center's agent. The vector \(\mathbf{e}\) denotes the input embeddings normalised to \(\alpha\) after passing through the CND layer, which adds a-priori knowledge to them in the form of \(\alpha_{cnd,t}^{k}\). The CND layer weights differently the spatial relations (represented by mixed arrow colours) of the neighbour agents with the central one. The vector \(\mathbf{l}\) denotes the temporal attention weights of the encoder's hidden states output, normalised to \(\beta\), while \(\mathbf{c}\) represents the context. \(\mathbf{X}=\{x,y\}\) is the input driving vector; \(\mathbf{Y}=\{x^{\prime},y^{\prime}\}\) is the label vector; \(\mathbf{h}\) and \(\mathbf{d}\) are the encoder and decoder hidden states, respectively. The input and temporal attention layers are constructed from dense layers.

## Appendix: Hyper-parameters

The hyper-parameters used to train and validate each of the network architectures deployed in this work are specified in Tab. III. For a complete list of SGAN hyper-parameters and a better understanding of their roles, the reader should refer to the original open-source repository at [https://github.com/agrimgupta92/sgan](https://github.com/agrimgupta92/sgan).
2308.00598
On the properties of the linear conjugate gradient method
The linear conjugate gradient method is an efficient iterative method for the convex quadratic minimization problems $ \mathop {\min }\limits_{x \in { \mathbb R^n}} f(x) =\dfrac{1}{2}x^TAx+b^Tx $, where $ A \in R^{n \times n} $ is symmetric and positive definite and $ b \in R^n $. It is generally agreed that the gradients $ g_k $ are not conjugate with respect to $ A $ in the linear conjugate gradient method (see page 111 in Numerical optimization (2nd, Springer, 2006) by Nocedal and Wright). In the paper we prove the conjugacy of the gradients $ g_k $ generated by the linear conjugate gradient method, namely, $$g_k^TAg_i=0, \; i=0,1,\cdots, k-2.$$ In addition, a new way is exploited to derive the linear conjugate gradient method based on the conjugacy of the search directions and the orthogonality of the gradients, rather than the conjugacy of the search directions and the exact stepsize.
Zexian Liu, Qiao Li
2023-08-01T15:20:27Z
http://arxiv.org/abs/2308.00598v1
# On the properties of the linear conjugate gradient method

###### Abstract

The linear conjugate gradient method is an efficient iterative method for the convex quadratic minimization problems \(\min\limits_{x\in\mathbb{R}^{n}}f(x)=\dfrac{1}{2}x^{T}Ax+b^{T}x,\) where \(A\in R^{n\times n}\) is symmetric and positive definite and \(b\in R^{n}.\) It is generally agreed that the gradients \(g_{k}\) are not conjugate with respect to \(A\) in the linear conjugate gradient method (see page 111 in Numerical optimization (2nd, Springer, 2006) by Nocedal and Wright). In the paper we prove the conjugacy of the gradients \(g_{k}\) generated by the linear conjugate gradient method, namely, \[g_{k}^{T}Ag_{i}=0,\ i=0,1,\cdots,k-2.\] In addition, a new way is exploited to derive the linear conjugate gradient method based on the conjugacy of the search directions and the orthogonality of the gradients, rather than the conjugacy of the search directions and the exact stepsize.

Keywords: Conjugacy, Orthogonality, Conjugate gradient method, Convex quadratic optimization. MSC: 90C06, 65K

## 1 Introduction

We consider the following convex quadratic optimization problem: \[\min\limits_{x\in R^{n}}f(x)=\dfrac{1}{2}x^{T}Ax+b^{T}x, \tag{1.1}\] where \(A\in R^{n\times n}\) is symmetric and positive definite and \(b\in R^{n}.\) The linear conjugate gradient method for solving (1.1) has the following form \[x_{k+1}=x_{k}+\alpha_{k}d_{k}, \tag{1.2}\] where \(d_{k}\) is the search direction given by \[d_{k}=\left\{\begin{aligned} &-g_{0},&\text{if }k=0,\\ &-g_{k}+\beta_{k}d_{k-1},&\text{if }k>0,\end{aligned}\right. \tag{1.3}\] and \(\alpha_{k}\) is the stepsize determined by the exact line search, namely, \[\alpha_{k}=-\frac{g_{k}^{T}d_{k}}{d_{k}^{T}Ad_{k}}=\arg\min_{\alpha>0}f\left(x_{k}+\alpha d_{k}\right). \tag{1.4}\] It follows from (1.4) that \[g_{k+1}^{T}d_{k}=0. \tag{1.5}\] Here the parameter \(\beta_{k}\) is often derived by making it satisfy the following condition: \[d_{k+1}^{T}Ad_{k}=0, \tag{1.6}\] which together with (1.5) yields the conjugacy of all search directions, as well as the global convergence. Some well-known formulae for \(\beta_{k}\) are called the Fletcher-Reeves (FR) [5], Hestenes-Stiefel (HS) [6], Polak-Ribiere-Polyak (PRP) [8; 7] and Dai-Yuan (DY) [4] formulae, and are given by \[\beta_{k}^{FR}=\frac{\left\|g_{k}\right\|^{2}}{\left\|g_{k-1}\right\|^{2}}, \quad\beta_{k}^{HS}=\frac{g_{k}^{T}y_{k-1}}{d_{k-1}^{T}y_{k-1}},\quad\beta_{k}^{PRP}=\frac{g_{k}^{T}y_{k-1}}{\left\|g_{k-1}\right\|^{2}},\quad\beta_{k}^{DY}=\frac{\left\|g_{k}\right\|^{2}}{d_{k-1}^{T}y_{k-1}},\] where \(y_{k-1}=g_{k}-g_{k-1}\), and \(\left\|\cdot\right\|\) denotes the Euclidean norm. In the case that \(f\) is given by (1.1) and the exact line search (1.4) is performed, all these formulae for \(\beta_{k}\) give the same value. It is well known that the linear conjugate gradient method enjoys the following nice properties.

Theorem 1.1: [1; 2] _Suppose that the iterates \(\{x_{k}\}\) are generated by the linear conjugate gradient method for solving (1.1), and \(x_{k}\) is not the solution point \(x^{*}\). Then,_ \[g_{i}^{T}d_{i}=-\|g_{i}\|^{2},\,\,\,i=0,1,2,\cdots,\] \[d_{i}^{T}Ad_{j}=0,\,\,\,j=0,1,2,\cdots,i-1,\] \[g_{i}^{T}d_{j}=0,\,\,\,j=0,1,2,\cdots,i-1,\] \[g_{i}^{T}g_{j}=0,\,\,\,j=0,1,2,\cdots,i-1.\] _Further, the sequence \(\{x_{k}\}\) converges to \(x^{*}\) in at most \(n\) steps._

It follows from Theorem 1.1 that the gradients \(g_{k}\) are mutually orthogonal in the linear conjugate gradient method.
However, it is generally agreed that the gradients \(g_{k}\) generated by the linear conjugate gradient method are not conjugate with respect to \(A\). For example, one can find the well-known description "_Since the gradients \(r_{k}\) are mutually orthogonal, the term "conjugate gradient method" is actually a **misnomer**. It is the search directions, not the gradients, that are conjugate with respect to \(A\)._" on page 111 of Nocedal and Wright's monograph [3]. Must the gradients generated by the linear conjugate gradient method necessarily fail to be conjugate with respect to the matrix \(A\)? In the paper we will give a negative answer to this question, as well as exploit a new way of deriving the linear conjugate gradient method based on the conjugacy of the search directions and the orthogonality of the gradients, rather than the conjugacy of the search directions and the exact stepsize.

## 2 The conjugacy of the gradients

In this section we will establish the conjugacy of the gradients generated by the linear conjugate gradient method.

**Theorem 2.1**: _Suppose that \(\{x_{k}\}\) and \(\{g_{k}\}\) are generated by the linear conjugate gradient method for solving (1.1). Then,_ \[g_{k+1}^{T}Ag_{k} =-\frac{\left\|g_{k+1}\right\|_{2}^{2}}{\alpha_{k}}\, \tag{2.1}\] \[g_{k+1}^{T}Ag_{i} =0,\ i=0,1,\cdots,k-1.\]

Proof: We first prove the first equality in (2.1). If \(k=0\), then it follows from (1.3) and Theorem 1.1 that \[g_{1}^{T}Ag_{0}=-g_{1}^{T}Ad_{0}=-\frac{1}{\alpha_{0}}g_{1}^{T}\left(g_{1}-g_{0}\right)=-\frac{\left\|g_{1}\right\|^{2}}{\alpha_{0}}.\] When \(k>0\), we also obtain from the orthogonality of the gradients and (1.3) that \[g_{k+1}^{T}Ag_{k} =g_{k+1}^{T}A(-d_{k}+\beta_{k}d_{k-1})\] \[=-\frac{g_{k+1}^{T}(g_{k+1}-g_{k})}{\alpha_{k}}+\frac{\beta_{k}g_{k+1}^{T}(g_{k}-g_{k-1})}{\alpha_{k-1}}\] \[=-\frac{\left\|g_{k+1}\right\|_{2}^{2}}{\alpha_{k}}.\] We then prove the second equality in (2.1). If \(k=1\), by (1.3) and Theorem 1.1, we obtain \[g_{2}^{T}Ag_{0}=-g_{2}^{T}Ad_{0}=-\frac{1}{\alpha_{0}}g_{2}^{T}\left(g_{1}-g_{0}\right)=0. \tag{2.2}\] When \(k>1\), it follows from (1.3) and Theorem 1.1 that \[g_{k+1}^{T}Ag_{i} =g_{k+1}^{T}A(-d_{i}+\beta_{i}d_{i-1})\] \[=-\frac{g_{k+1}^{T}(g_{i+1}-g_{i})}{\alpha_{i}}+\frac{\beta_{i}g_{k+1}^{T}(g_{i}-g_{i-1})}{\alpha_{i-1}}\] \[=0,\] where \(\ i=0,1,\cdots,k-1\). It completes the proof.

**Remark 1.** It follows from Theorem 2.1 that \(g_{k+1}\) is conjugate to \(g_{i}\) (\(i=0,1,\cdots,k-1\)) with respect to \(A\). Due to the conjugacy of the gradients, it is therefore also reasonable to name the iterative method (1.2) and (1.3) with (1.4) the linear conjugate gradient method.

## 3 New way for deriving the linear conjugate gradient method

In this section, we will exploit a new way of deriving the linear conjugate gradient method. The orthogonality of the gradients is crucial to the linear conjugate gradient method. Different from the stepsize (1.4), which decreases \(f\) the most along the search direction \(d_{k}\), the new stepsize \(\alpha_{k}\) in (1.2) is chosen such that the resulting new gradient \(g_{k+1}\) is orthogonal to the latest gradient \(g_{k}\), namely, \[g_{k+1}^{T}g_{k}=0, \tag{3.1}\] which implies that \[\alpha_{k}=-\frac{g_{k}^{T}g_{k}}{g_{k}^{T}Ad_{k}}. \tag{3.2}\] Therefore, the new iterative method has the form (1.2) and (1.3) with (3.2), which is derived based on the conditions: \[g_{k+1}^{T}g_{k}=0\ \ \text{and}\ \ d_{k+1}^{T}Ad_{k}=0. \tag{3.3}\]
Note that these are different from the conditions (1.5) and (1.6) used to derive the linear conjugate gradient method. In the following theorem, we will prove the equivalence of the iterative method (1.2) and (1.3) with (3.2) and the linear conjugate gradient method.

Theorem 3.1: _The new stepsize (3.2) is also the exact stepsize, namely, \(\alpha_{k}=\arg\min\limits_{\alpha>0}f\left(x_{k}+\alpha d_{k}\right).\)_

Proof: It follows from \(d_{k}^{T}Ad_{k-1}=0\) and (1.3) that \(g_{k}^{T}Ad_{k}=(-d_{k}+\beta_{k}d_{k-1})^{T}Ad_{k}=-d_{k}^{T}Ad_{k}\), and hence \(\alpha_{k}=\dfrac{g_{k}^{T}g_{k}}{d_{k}^{T}Ad_{k}}\). Thus, we have \[g_{k+1}^{T}d_{k}=\left(g_{k}+\alpha_{k}Ad_{k}\right)^{T}d_{k}=g_{k}^{T}d_{k}+g_{k}^{T}g_{k}=\beta_{k}g_{k}^{T}d_{k-1}=0,\] where the second equality uses \(\alpha_{k}d_{k}^{T}Ad_{k}=g_{k}^{T}g_{k}\), the third uses \(d_{k}=-g_{k}+\beta_{k}d_{k-1}\), and the last follows by induction from \(g_{k}^{T}d_{k-1}=0\) (the base case being \(g_{1}^{T}d_{0}=-g_{1}^{T}g_{0}=0\) by (3.1)). Since \(g_{k+1}^{T}d_{k}=0\) is exactly the first-order optimality condition of \(\min\limits_{\alpha>0}f\left(x_{k}+\alpha d_{k}\right)\), the stepsize \(\alpha_{k}\) coincides with the exact stepsize (1.4). It completes the proof.
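As a numerical illustration (our own sketch, not part of the paper), the following Python/NumPy snippet runs the linear conjugate gradient method (1.2)-(1.4) with the FR formula on a random positive definite problem and checks the gradient conjugacy \(g_{k}^{T}Ag_{i}=0\), \(i\le k-2\), of Theorem 2.1 up to rounding error:

```python
import numpy as np

def linear_cg(A, b, x0, tol=1e-10, max_iter=None):
    """Linear conjugate gradient for min 0.5 x^T A x + b^T x, gradient g = A x + b."""
    x, g = x0.copy(), A @ x0 + b
    d = -g
    grads = [g.copy()]
    for _ in range(max_iter or len(b)):
        if np.linalg.norm(g) < tol:
            break
        alpha = -(g @ d) / (d @ (A @ d))      # exact stepsize (1.4)
        x = x + alpha * d
        g_new = A @ x + b
        beta = (g_new @ g_new) / (g @ g)      # FR formula; all formulae coincide here
        d = -g_new + beta * d
        g = g_new
        grads.append(g.copy())
    return x, grads

rng = np.random.default_rng(0)
n = 10
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)                   # symmetric positive definite
b = rng.standard_normal(n)
x, grads = linear_cg(A, b, np.zeros(n))

# Theorem 2.1: g_k^T A g_i = 0 for i <= k-2 (zero up to rounding)
worst = max(abs(grads[k] @ (A @ grads[i]))
            for k in range(2, len(grads)) for i in range(k - 1))
print(f"max |g_k^T A g_i|, i <= k-2: {worst:.2e}")
```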
2303.05792
Development of novel low-mass module concepts based on MALTA monolithic pixel sensors
The MALTA CMOS monolithic silicon pixel sensor has been developed in the Tower 180 nm CMOS imaging process. It includes an asynchronous readout scheme and complies with the ATLAS inner tracker requirements for the HL-LHC. Several 4-chip MALTA modules have been built using Al wedge wire bonding to demonstrate the direct transfer of data from chip-to-chip and to read out the data of the entire module via one chip only. Novel technologies such as Anisotropic Conductive Films (ACF) and nanowires have been investigated to build a compact module. A lightweight flex with 17 {\mu}m trace spacing has been designed, allowing compact packaging with a direct attachment of the chip connection pads to the flex using these interconnection technologies. This contribution shows the current state of our work towards a flexible, low material, dense and reliable packaging and modularization of pixel detectors.
J Weick, F Dachs, P Riedler, M Vicente Barreto Pinto, A M. Zoubir, L Flores Sanz de Acedo, I Asensi Tortajada, V Dao, D Dobrijevic, H Pernegger, M Van Rijnbach, A Sharma, C Solans Sanchez, R de Oliveira, D Dannheim, J V Schmidt
2023-03-10T08:53:26Z
http://arxiv.org/abs/2303.05792v1
# Development of novel low-mass module concepts based on MALTA monolithic pixel sensors

###### Abstract

The MALTA CMOS monolithic silicon pixel sensor has been developed in the Tower 180 nm CMOS imaging process. It includes an asynchronous readout scheme and complies with the ATLAS inner tracker requirements for the HL-LHC. Several 4-chip MALTA modules have been built using Al wedge wire bonding to demonstrate the direct transfer of data from chip-to-chip and to read out the data of the entire module via one chip only. Novel technologies such as Anisotropic Conductive Films (ACF) and nanowires have been investigated to build a compact module. A lightweight flex with 17 \(\upmu\)m trace spacing has been designed, allowing compact packaging with a direct attachment of the chip connection pads to the flex using these interconnection technologies. This contribution shows the current state of our work towards a flexible, low material, dense and reliable packaging and modularization of pixel detectors.

\({}^{1}\)CERN, Switzerland, \({}^{2}\)Technical University of Darmstadt, Germany, \({}^{3}\)University of Oslo, Norway, \({}^{4}\)Universite de Geneve, Switzerland, \({}^{5}\)University of Zagreb, Croatia, \({}^{6}\)Karlsruhe Institute of Technology, Germany. Contact: [email protected]

## 1 Introduction

The upcoming upgrades of the LHC experiments and future detectors are setting new requirements for particle trackers, including increased radiation hardness, timing and spatial resolution and rate capability, as well as dense integration with a minimal material budget. To realize very large detector surfaces, a scalable production process to manufacture chip modules is also mandatory. This work will show a part of the current efforts of CERN's experimental physics R&D (EP-R&D) [1] project to develop flexible, low material, dense, reliable and scalable modules using radiation tolerant pixel chips.

## 2 Modularization of MALTA

The MALTA chip is a monolithic pixel sensor produced in the 180 nm Tower process featuring an asynchronous front-end and read-out. It has proven NIEL radiation hardness up to \(3\cdot 10^{15}\) n\({}_{eq}\)/cm\({}^{2}\), and TID hardness up to 100 Mrad [2]. The data transfer from the secondary (S1-S3) to the primary chip (Prim) has been demonstrated in a 4-chip MALTA module, shown in figure 1, placed on a rigid PCB carrier housing over one million pixels. The interconnection from chip-to-chip and from chip to PCB is realized using wire-bonds. Current studies focus on the replacement of the chip-to-chip data and power transmission with a silicon interposer (silicon bridge). This allows for a denser integration of the chips, compared to wire bonding, as no minimum spacing between the chips is needed. Furthermore, a flexible carrier (flex) has been designed to interconnect four MALTA2 chips using a flip-chip [3] process, connecting the chip pads to the respective pads on the flex directly.

Figure 1: Four-chip MALTA module demonstrating the data transfer from the secondary (S1-S3) to the primary chip.

## 3 Interconnection technologies

The small pad size of MALTA and its dense pad layout require the study of new interconnection technologies such as anisotropic conductive films (ACF) or a surface nano structuration process (nanowires). These technologies have the potential to provide a scalable chip-to-chip or chip-to-flex interconnection, realizable in a fast interconnection process, and are suitable for a large number of pads.
The interconnection is demonstrated using the aluminum pads of size 88\(\times\)88 \(\upmu\)m\({}^{2}\) of the MALTA chip.

### Anisotropic conductive film (ACF)

ACF is an industry-standard interconnection technology used in LCD screen production [4]. The interconnection is established with metal-coated polymer particles, embedded in glue. Figure 2 shows the ACF balls (black dots) on the MALTA pads. In a preparatory step, the aluminium pads are plated using Electroless Nickel Immersion Gold (ENIG) [5], which serves as an elevation layer for the ACF particles and prevents the formation of an oxide layer. Afterwards, the device to be bonded is laminated with the ACF. Finally, the assembly is done in a flip-chip process, using pressure to compress the ACF conductive particles between pads and heat to cure the glue. As such, ACF provides a cost-effective, maskless, and in-house capable assembly technology. ACF is currently developed as a bonding process for the silicon interposer for MALTA modules, and will be used in the flip-chip assembly of the MALTA2 chips on the flex carrier introduced in section 5.

Figure 2: ACF (black dots) on an ENIG-plated MALTA pad.

### Nanowires

Nanowires are evaluated as a second interconnection technology. For this bonding process, a titanium layer is deposited on the aluminum pads of MALTA together with a gold finish to prevent oxidation. This serves as a base for a copper seed layer on which nanowires with a diameter in the hundred-nanometer range, and a length of several micrometers, are grown. Figure 3 shows the copper nanowires on a MALTA pad. The bonding process can be realized using three different procedures, which differ in their applied pressure and heat, and in the necessity to apply the nanowires on only one side (chip pads or targeted carrier pads), or on both:

* **Sinter process:** The wires are applied to one side only. The connection is formed via a sintering process that requires high pressure and heat, compared to the other procedures.
* **Cold welding process:** Wires are applied on both sides and allow for a large contact surface. The connection is established with a cold-welding process that requires minimal pressure and heat.
* **Gluing process:** In this bonding process, nanowires are only applied on one side. A non-conductive glue is used as underfill, which allows for reduced pressure and heat and enhanced mechanical stability.

Nanowires promise to offer low contact resistance and parasitic inductance/capacitance, and can be applied either on the targeted carrier or directly on the chip (chip or wafer level).

## 4 Interconnection test structure

A test structure has been developed on a 300 \(\upmu\)m aluminum nitride (AlN) ceramic to validate the introduced interconnection technologies electrically and mechanically, as well as to pre-optimize the process parameters for interconnecting the MALTA2 chips with the flex presented in section 5. AlN has been chosen due to its similar thermal conductivity to silicon and its compatibility with the manufacturing process of the flex. The various pad layouts of this structure mirror the pads of the MALTA chip and the silicon interposer. They are designed to study, among other parameters, the bond success rate per pad, the probability of electrical shorts between neighboring pads and the resistivity of the interconnection.
First electrical tests indicate a DC resistance between 1.7 \(\Omega\) and 3 \(\Omega\) for the ACF interconnection on the 88\(\times\)88 \(\upmu\)m\({}^{2}\) MALTA pad using an ACF with 3 \(\upmu\)m Ni-Au/polymer conductive particles, while the current AlSi wire bonds with a diameter of 25 \(\upmu\)m feature a resistance of <1 \(\Omega\) on our current carriers [6]. The setup to test the insertion loss of a differential signal over an ACF connection compared to a directly wired reference is shown in figure 4. The silicon bridge is bonded onto the test structure using ACF, and establishes an electrical connection between the probing pads. Figure 5 shows the bonded bridge on the substrate, with the silicon bridge on top and the reference structure on the bottom. Since the structure is optimized for mechanical tests rather than for high frequencies (HF), it only allows for a qualitative statement of the signal decay compared to the reference. The signal decay has been evaluated using a vector network analyzer with Picotest probes, which allow the large probing pads to be probed directly. First results are shown in figure 6. The signal magnitude over the reference as well as over the ACF interconnection decays at higher frequencies. The measured offset of the ACF connection in relation to the reference (<2 dB) indicates an acceptable signal loss for our purposes. The difference between ACF channel 1 and ACF channel 2 is in line with the spread in DC resistance between different pads and a result of the non-optimized bonding process. To validate the signal transfer over ACF, further investigation in combination with a dedicated HF test structure is needed.

Figure 3: Copper nanowires grown on an 88\(\times\)88 \(\upmu\)m\({}^{2}\) MALTA pad.

Figure 4: Concept of the silicon bridge test setup, to test the electrical characteristics of two ACF interconnections.

Figure 5: ACF bonded silicon MALTA interposer and reference on test structure for electrical characterization.

## 5 Low mass flexible MALTA2 carrier

A low-mass flexible MALTA2 carrier (flex) for a 4-chip module is currently being produced as a proof-of-concept for flip-chip mounting the MALTA2 chip on a flexible support structure, using minimal material while providing maximal testing capabilities. Figure 7 shows a model of the first flex prototype. We use the data readout scheme validated with the four-chip MALTA PCB in section 2, transferring the data from three secondary chips (S1-S3) to a primary chip (Prim) and from there over a dedicated connector flex circuit to the data acquisition FPGA. The layout of the flex allows the chips to be tuned and powered individually. This gives us the possibility to test the influence of constrained powering on the performance and the data transfer capabilities of the module. Due to the pad layout of MALTA, wire bonds only allow for arranging the chips in a single-row module. This constraint does not apply to modules assembled on a flex circuit using a flip-chip process. Furthermore, chips can be supplied with power through the flex support structure, allowing the module size to be increased compared to chip-to-chip powering. The same holds true if the tiling is replaced by a wafer-scale integrated sensor using stitched devices. The dense pad layout of the MALTA chip requires a flex circuit with fine structures in order to keep capacitive loads low and minimize the number of metal layers, and thus the material budget.
The chosen flex production technology allows for structure sizes down to 15 \(\upmu\)m track and clearance, a 10 \(\upmu\)m polyamide layer thickness, and a 6 \(\upmu\)m copper thickness. Together with a 20 \(\upmu\)m solder stop, the designed flex circuit has an overall thickness of roughly 50 \(\upmu\)m. The 20 \(\upmu\)m solder stop is only needed for assembly and population of the circuit as well as to enable soldering on debug pads. Figure 8 shows the manufactured flex on the right side and the respective layout on the left.

## 6 Summary and outlook

In this work, we present first results from studies on mechanically robust and scalable interconnection techniques, which could offer an alternative to wire-bonding. We show an approach to densely package multiple MALTA pixel chips onto a flexible module carrier while minimizing the material needed. In a next step, the flex will be assembled using the introduced interconnection technologies, followed by tests in the lab and at test beam to quantify the performance of the module.

Figure 7: Flex carrier for four MALTA2 chips with chip-to-chip data transfer.

Figure 8: Layout (left) and produced flex (right).
2307.03614
Cyclically operated Single Microwave Photon Counter with $10^\mathrm{-22}$ $\mathrm{W/\sqrt{Hz}}$ sensitivity
Single photon detection played an important role in the development of quantum optics. Its implementation in the microwave domain is challenging because the photon energy is 5 orders of magnitude smaller. In recent years, significant progress has been made in developing single microwave photon detectors (SMPDs) based on superconducting quantum bits or bolometers. In this paper we present a practical SMPD based on the irreversible transfer of an incoming photon to the excited state of a transmon qubit by a four-wave mixing process. This device achieves a detection efficiency $\eta = 0.43$ and an operational dark count rate $\alpha = 85$ $\mathrm{s^{-1}}$, mainly due to the out-of-equilibrium microwave photons in the input line. The corresponding power sensitivity is $\mathcal{S} = 10^{-22}$ $\mathrm{W/\sqrt{Hz}}$, one order of magnitude lower than the state of the art. The detector operates continuously over hour timescales with a duty cycle $\eta_\mathrm{D}=0.84$, and offers frequency tunability of at least 50 MHz around 7 GHz.
Léo Balembois, Jaime Travesedo, Louis Pallegoix, Alexandre May, Eric Billaud, Marius Villiers, Daniel Estève, Denis Vion, Patrice Bertet, Emmanuel Flurin
2023-07-07T14:11:14Z
http://arxiv.org/abs/2307.03614v3
# Practical Single Microwave Photon Counter with \(10^{-22}\) W/\(\sqrt{\mathrm{Hz}}\) sensitivity

###### Abstract

Single photon detection played an important role in the development of quantum optics. Its implementation in the microwave domain is challenging because the photon energy is 5 orders of magnitude smaller. In recent years, significant progress has been made in developing single microwave photon detectors (SMPDs) based on superconducting quantum bits or bolometers. In this paper we present a new practical SMPD based on the irreversible transfer of an incoming photon to the excited state of a transmon qubit by a four-wave mixing process. This device achieves a detection efficiency \(\eta=0.43\) and an operational dark count rate \(\alpha=85\) s\({}^{-1}\), mainly due to the out-of-equilibrium microwave photons in the input line. The corresponding power sensitivity is \(\mathcal{S}=10^{-22}\) W/\(\sqrt{\mathrm{Hz}}\), one order of magnitude lower than the state of the art. The detector operates continuously over hour timescales with a duty cycle \(\eta_{\mathrm{D}}=0.84\), and offers frequency tunability of \(\sim 400\) MHz around 7 GHz.

Single photon detection in the optical domain is a key enabling technology for many applications, ranging from fluorescence microscopy [1; 2; 3; 4] to measurement-based quantum computing [5]. In the microwave domain, single-photon detectors (SMPDs) operating at millikelvin temperatures have only recently started to be developed, due to the 5 orders of magnitude difference in photon energy. Designs based either on superconducting quantum bits [6; 7; 8] or bolometers [9] are investigated. The advent of these SMPDs has already enabled new classes of protocols for quantum sensing, such as the microwave fluorescence detection of small electronic spin ensembles [10; 11] and dark-matter searches based on haloscopes [12; 13], but also for quantum computing [14; 15; 16], with new superconducting qubit readout [17] and the robust generation of quantum states [18]. The sensitivity of the detection, and the fidelity of these quantum protocols, depend critically on the performance of the detectors. Two figures of merit especially matter: the dark count rate \(\alpha\), defined as the number of false positive detections per unit of time, and the operational efficiency \(\eta\), defined as the ratio of counts over incoming photons. Combining these two metrics, one can determine the power sensitivity \(\mathcal{S}\) of the detector as the noise equivalent power (NEP) for an integration time of 1 s: \[\mathcal{S}=\frac{\hbar\omega\sqrt{\alpha}}{\eta}. \tag{1}\] Currently, the detectors based on superconducting qubits [7; 8] show a dark count rate \(\alpha\sim 10^{4}-10^{5}\) s\({}^{-1}\), for an efficiency \(\eta\sim 0.5-0.7\) over a bandwidth of \(\sim 10-20\) MHz, resulting in a sensitivity \(\mathcal{S}\sim 2-9\cdot 10^{-21}\) W/\(\sqrt{\mathrm{Hz}}\) at 7 GHz. On the other hand, the most advanced bolometric detector based on graphene [9] reaches a sensitivity \(\mathcal{S}=7\cdot 10^{-19}\) W/\(\sqrt{\mathrm{Hz}}\) at 7.9 GHz, when operated at 190 mK. The bandwidth of this device varies between 599 MHz and 861 MHz depending on the operating parameters. Besides itinerant microwave photon detectors, other experiments have demonstrated high-sensitivity detection of individual microwave photons in a high-Q cavity [19; 20; 13]. This paper presents a SMPD based on a superconducting qubit and a four-wave mixing process [21].
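As a quick numerical check of Eq. (1) (our own sketch, not part of the paper), plugging in the dark count rate and efficiency reported below for this device reproduces the quoted sensitivity:

```python
import numpy as np

hbar = 1.054571817e-34        # J s
omega = 2 * np.pi * 7e9       # photon frequency ~7 GHz (rad/s)
alpha = 85.0                  # dark count rate (1/s), reported operating value
eta = 0.43                    # operational efficiency, reported operating value

S = hbar * omega * np.sqrt(alpha) / eta   # Eq. (1), NEP for 1 s integration
print(f"S = {S:.1e} W/sqrt(Hz)")          # ~1e-22, matching the quoted sensitivity
```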
The device detects itinerant photons, regardless of their waveform, in a \(\sim 1\) MHz bandwidth around a frequency tunable from 7.005 GHz to 6.824 GHz, and operates by cycles of \(\sim 12\)\(\mu\)s duration, which can be repeated continuously over several hours, days, or even months. Here we demonstrate a dark count rate \(\alpha=85\) s\({}^{-1}\) for an operational efficiency \(\eta=0.43\), leading to a power sensitivity \(\mathcal{S}=10^{-22}\) W/\(\sqrt{\mathrm{Hz}}\), more than an order of magnitude lower than the state of the art. This new sensitivity has opened up new detection possibilities, such as the single-spin detection experiment [22].

## I Working Principle

This device builds upon the superconducting circuit proposed and demonstrated in [21] and [10]. The working principle is based on the irreversible transfer of an incoming photon to an excitation of a transmon qubit. The detector "clicks" when the qubit is detected in its excited state using dispersive readout through a capacitively coupled resonator. This irreversible transfer is achieved by a four-wave mixing process, directly provided by the transmon qubit Hamiltonian. The incoming photon impinging on an input resonator with frequency \(\omega_{\rm b}\) (called "buffer" mode, orange in Fig. 1a) combines with a pump tone at frequency \(\omega_{\rm p}\) and is converted into an excitation in the transmon qubit mode at frequency \(\omega_{\rm q}\) and an additional photon in an output resonator mode at frequency \(\omega_{\rm w}\) (called "waste" mode, green in Fig. 1a). This four-wave mixing process is described by the Hamiltonian \[\hat{H}_{\rm 4WM}=\sqrt{\chi_{\rm b}\chi_{\rm w}}\left(\xi\hat{b}\hat{\sigma}^{\dagger}\hat{w}^{\dagger}+\xi^{*}\hat{b}^{\dagger}\hat{\sigma}\hat{w}\right), \tag{2}\] where \(\hat{b},\hat{w}\) are the annihilation operators corresponding to the buffer mode and waste mode, \(\hat{\sigma}\) is the lowering operator corresponding to the qubit, \(\xi\) is the pump amplitude in the qubit mode, and \(\chi_{\rm b}\) and \(\chi_{\rm w}\) are the dispersive shifts of the transmon qubit with respect to the buffer and waste modes [21]. For this process to be activated, the pump frequency is tuned such that \(\omega_{\rm p}+\omega_{\rm b}=\omega_{\rm q}+\omega_{\rm w}-\chi_{\rm w}\), to satisfy the four-wave mixing resonance condition. The irreversibility of the conversion is ensured by the coupling of the waste resonator to a dissipative environment. While the qubit remains excited, the photon in the waste resonator leaks out into the measurement line at the rate \(\kappa_{\rm w}\). The reciprocal four-wave mixing process (second term in the parenthesis of Eq. (2)) is therefore suppressed and the qubit is left in its excited state. The detector behaves as an energy integrator, which is independent of the incoming photon waveform provided that its spectral extent remains within the frequency linewidth of the buffer mode. The four-wave mixing being a resonant process, it is intrinsically narrowband. To make it a practical detector, our device is made frequency tunable to match the photon frequency of interest by inserting a SQUID in the buffer resonator (see Fig. 1b). The detector frequency can be tuned from \(\omega_{\rm b}/2\pi=7.005\) GHz at zero magnetic flux applied to the SQUID to \(\omega_{\rm b}/2\pi=6.824\) GHz at 0.25 flux quantum (see Fig. 1c). Two bandpass Purcell filters are associated with the resonators to prevent spurious decay of the qubit into the lines [23].
Therefore, the buffer resonator linewidth depends on its frequency detuning with respect to its Purcell filter. The bandwidth \(\kappa_{\rm b}/2\pi=3\) MHz is maximal for \(\omega_{\rm b}/2\pi=6.824\) GHz. In the following, the detector is characterized at \(\omega_{\rm b}/2\pi=6.979\) GHz and \(\kappa_{\rm b}/2\pi=0.2\) MHz. The fixed resonance frequencies of the device are those of the waste resonator, \(\omega_{\rm w}/2\pi=7.704\) GHz, and the transmon qubit, \(\omega_{\rm q}/2\pi=6.184\) GHz. The relaxation time \(T_{1}\) of the transmon qubit is measured to be \(T_{1}\sim 37\)\(\mu\)s (see Fig. 2d) and its equilibrium population fluctuates around \(p_{\rm eq}\sim 2-4\cdot 10^{-4}\) (see Fig. 2c,d), close to the lowest reported [24; 25; 26; 27]. The optimal pump characteristics (frequency \(\omega_{\rm p}/2\pi\) and amplitude \(\xi\)) are determined experimentally by monitoring the qubit population while illuminating the buffer mode with a weak coherent signal and by sweeping the pump tone frequency and amplitude. As shown in Fig. 2b, a large excited-state population is found in the qubit, conditioned on the presence of the illuminating tone, for a pump frequency of \(\omega_{\rm p}/2\pi=6.885\) GHz. This value is in good agreement with the mode frequencies, taking into account the qubit Stark shift induced by the pump and the dispersive shifts of the resonators.

Figure 1: a) Principle of the photon detector. Two cavities, the buffer (orange) and the waste (green), are coupled to a transmon qubit whose non-linearity allows the modes to be mixed. A pump tone (purple) triggers a four-wave mixing process, converting an incoming buffer photon into a long-lived qubit excitation and a waste photon quickly dissipated into the environment, making the reversal process impossible. b) Schematic of the SMPD chip. The transmon qubit (blue) at frequency \(\omega_{\rm q}/2\pi=6.184\) GHz is capacitively coupled to two CPW resonators: the buffer (characteristics: see c) and the waste (\(\omega_{\rm w}/2\pi=7.704\) GHz, \(\kappa_{\rm w}/2\pi=1.8\) MHz). Two Purcell filters are added to protect the qubit from radiative relaxation. The tunability of the detector is ensured by inserting a SQUID, driven by a flux line (red), in the buffer resonator. c) Evolution of the buffer frequency with respect to the magnetic flux through the SQUID. Orange points are data, the solid red line is a fit, and the dashed black line represents the buffer Purcell filter frequency. Due to the frequency detuning between the buffer and its filter, the buffer bandwidth \(\kappa_{\rm b}\) varies with its frequency. The red square represents the operating point. d) Cyclic operation of the SMPD, consisting of three steps repeated continuously. The detection window (D) consists in switching on the pump tone during \(T_{\rm d}=10\)\(\mu\)s to allow the conversion of the incoming photon. The measurement window (M) consists in applying a readout pulse on the waste resonator to measure the qubit state. The reset window (R) is a conditional loop to reinitialize the qubit in its ground state. The average detector blind time is \(T_{\rm m}+T_{\rm r}=1.9\)\(\mu\)s.

In a restricted subspace where the buffer and the waste are never simultaneously populated, our device can be described by two cavities coupled with the strength \(g_{3}=2|\xi|\sqrt{\chi_{\rm b}\chi_{\rm w}}\) [10].
In this framework, the maximum detection efficiency is expected when the coupling strength \(g_{3}\) matches the geometric mean of the decay rates of the buffer and waste resonators, such that \(2|\xi|\sqrt{\chi_{\rm b}\chi_{\rm w}}=\sqrt{\kappa_{\rm b}\kappa_{\rm w}}\) [21]. The model also provides an explicit formula for the transfer efficiency \(\eta_{\rm 4wm}\) between a buffer photon and a qubit excitation: \[\eta_{\rm 4wm}=\frac{4C}{(1+C)^{2}}, \tag{3}\] where \(C=4|\xi|^{2}\frac{\chi_{\rm b}\chi_{\rm w}}{\kappa_{\rm b}\kappa_{\rm w}}\) is the cooperativity associated with the four-wave mixing. Unit transfer efficiency is reached for \(C=1\). Taking into account resonator losses, we expect a maximum transfer efficiency of \(\eta_{\rm 4wm}=0.86\). To determine the pump amplitude corresponding to the optimal cooperativity, we operate the four-wave mixing for various pump amplitudes by sending photons on the buffer resonator. The resulting qubit excited population, plotted in Fig. 2a, is in good agreement with the theoretical two-coupled-cavities model. In order to avoid spurious qubit heating due to the pump tone, a low pump amplitude is desirable. This is conveniently achieved if the dispersive shifts \(\chi_{\rm b,w}\) are larger than \(\kappa_{\rm b,w}\). Here, the measured dispersive shifts are \(\chi_{\rm b}/2\pi=5.2\) MHz, \(\chi_{\rm w}/2\pi=18.8\) MHz and the resonator linewidths are \(\kappa_{\rm b}/2\pi=0.2\) MHz (at the point considered) and \(\kappa_{\rm w}/2\pi=1.8\) MHz. At unit cooperativity, the pump energy in units of qubit excitation is \(|\xi|^{2}=5\times 10^{-3}\). Note that the large dispersive shifts between the resonators and the qubit are not detrimental, for two reasons. First, the maximum number of excitations in the modes during the transfer process never exceeds one, so that higher-order non-linear terms do not contribute significantly to the dynamics, as shown in Fig. 2b. Second, Purcell filters at the output of each of the resonators inhibit the spurious decay of the qubit into the transmission lines.
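The dependence of the transfer efficiency of Eq. (3) on the cooperativity can be made concrete with a short sketch (our own, with illustrative values); it peaks at unity for \(C=1\):

```python
def eta_4wm(C):
    """Photon-to-qubit transfer efficiency of Eq. (3)."""
    return 4 * C / (1 + C) ** 2

def cooperativity(xi2, chi_b, chi_w, kappa_b, kappa_w):
    """C = 4 |xi|^2 chi_b chi_w / (kappa_b kappa_w); all rates in the same units."""
    return 4 * xi2 * chi_b * chi_w / (kappa_b * kappa_w)

for C in (0.25, 0.5, 1.0, 2.0, 4.0):
    print(f"C = {C:4.2f} -> eta_4wm = {eta_4wm(C):.3f}")
# C = 1.00 -> eta_4wm = 1.000 (unit transfer efficiency, before resonator losses)
```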
## II Cycle operation mode

The detector is operated cyclically, with the operation cycle consisting of three subsequent steps (see Fig. 1d). The first one, called "detection" (D), consists in applying a pump pulse at frequency \(\omega_{\rm p}\) to the qubit during a detection time \(T_{\rm d}=10\)\(\mu\)s. If a photon enters the buffer resonator, the four-wave mixing process triggers a qubit excitation and the dissipation of a photon into the waste resonator. In the second step of the cycle, called "measurement" (M), the qubit state is dispersively read out using the waste resonator during a measurement time \(T_{\rm m}=0.5\)\(\mu\)s. Note that the threshold used to discriminate the qubit ground and excited states is chosen to maximize the SMPD power sensitivity defined in Eq. (1). As shown in Figure 2c, this threshold favors the readout fidelity of the ground state at the expense of the readout fidelity of the excited state, \(p(1|e)=\eta_{\rm RO}=0.73\). The dark count rate is thus minimized at the expense of a moderate reduction of the efficiency. The third step of the cycle consists of a conditional reset (R). If the qubit is found in its ground state, we go directly to the next cycle. If the qubit was found in its excited state, a \(\pi\)-pulse is applied through the pump line and the qubit state is measured again, the procedure being repeated until the ground-state preparation succeeds. Owing to the high fidelity of the qubit ground-state readout, we reset the qubit well below its equilibrium population \(p_{\rm eq}\): as shown in Figure 2c,d, the reset infidelity is as low as \(p_{\rm reset}=10^{-5}\). The reset step is non-deterministic, with an average reset time \(T_{\rm r}\approx 0.5\)\(\mu\)s. For the reset to work optimally, the quantum non-demolition (QND) character of the measurement must be ensured. To meet this condition, we use a Traveling Wave Parametric Amplifier (TWPA) and we carefully tune the readout pulse length and amplitude. Moreover, the readout pulse frequency is detuned with respect to the waste resonator frequency by the dispersive shift of the qubit, such that the readout pulse enters the resonator if and only if the qubit is in its excited state. This enhances the QND character of the measurement when the qubit is in its ground state. A waiting time of 1 \(\mu\)s is added at the end of the reset step to let the waste resonator return to its ground state. The average cycle time is \(T_{\rm cycle}=11.9\)\(\mu\)s, which sets the duty cycle of the detector, \(\eta_{\rm D}=T_{\rm d}/T_{\rm cycle}=0.84\). This quantity could be made arbitrarily close to one by increasing the duration of the detection window. However, the qubit relaxation, with a characteristic time \(T_{1}=37\)\(\mu\)s (see Fig. 2d), sets an upper bound by introducing a contribution \(\eta_{\rm qubit}=(T_{1}/T_{\rm d})(1-e^{-T_{\rm d}/T_{1}})\) to the overall efficiency, which in practice limits the detection step duration. The detector is operated by continuously repeating the cycle, \(\sim 80000\) times per second. Its resolution time corresponds to the detection time \(T_{\rm d}=10\)\(\mu\)s, while its dead time is \(T_{\rm m}+T_{\rm r}=1.9\)\(\mu\)s.

Figure 2: a) Qubit excited population as a function of the pump amplitude \(|\xi|\) applied on the qubit when a coherent tone of 360 zW (77850 photon/s) is applied on the buffer resonator. Orange points are data, the dark red solid line represents a fit using (3), and the black solid line represents the chosen amplitude. b) Qubit excited population as a function of the pump frequency \(\omega_{\rm p}/2\pi\) when no signal is sent on the buffer (dark blue) and when a coherent tone (360 zW) is applied (orange). Solid red lines represent Lorentzian fits. c) Qubit readout when no pulses are applied to the qubit (red), giving a qubit equilibrium population \(p_{\rm eq}=2\cdot 10^{-4}\), and qubit readout just after a reset sequence (green), giving a reset population \(p_{\rm reset}=10^{-5}\). Solid lines represent Gaussian fits. The dashed black line represents the qubit readout after a \(\pi\)-pulse; the vertical solid line corresponds to the chosen readout threshold. d) Qubit relaxation curve from the excited state (blue) and from the ground state (green). Solid lines represent exponential fits. The yellow window corresponds to the detection time \(T_{\rm d}=10\)\(\mu\)s.

## III Detection Efficiency

The operational efficiency is measured by sending a calibrated tone at the center of the SMPD line (see Fig. 3d) while the cycle is repeated. The power of the microwave tone is calibrated using the dephasing of the qubit induced by the presence of photons in the buffer cavity [28]. Typical measurement records of the detector for various illumination powers are shown in Fig. 3a.
The operational efficiency of the SMPD, \(\eta=0.43\), is obtained by measuring the ratio of the click event rate to the incoming photon rate, as shown in Fig. 3b. The efficiency is in good agreement with the expected one, which includes four different contributions: the transfer efficiency \(\eta_{\rm 4wm}\), the qubit relaxation \(\eta_{\rm qubit}\), the duty cycle \(\eta_{\rm D}\) and the readout fidelity \(\eta_{\rm RO}\), resulting in a theoretical efficiency \(\eta_{\rm theory}=\eta_{\rm 4wm}\cdot\eta_{\rm RO}\cdot\eta_{\rm D}\cdot\eta_{\rm qubit}=0.46\).

## IV Detection Bandwidth

The detector bandwidth is measured by varying the frequency of a 10 \(\mu\)s photon pulse sent to the buffer resonator during the detection window. The qubit excitation probability is measured and multiplied by a constant factor so that the maximum value corresponds to the overall efficiency \(\eta=0.43\). This inferred efficiency is then plotted with respect to the frequency of the input photons, as shown in Fig. 3d. The detector bandwidth, defined as the full width at half maximum, is \(\kappa_{\rm d}/2\pi=0.57\) MHz. From a model of two coupled cavities explicitly derived in [10], we can obtain an analytical expression of \(\kappa_{\rm d}\) with respect to \(\kappa_{\rm b}\) and \(\kappa_{\rm w}\): \[\kappa_{\rm d}=\sqrt{2}\sqrt{\sqrt{\kappa_{\rm b}^{2}\kappa_{\rm w}^{2}+\left(\frac{\kappa_{\rm b}-\kappa_{\rm w}}{2}\right)^{4}}-\left(\frac{\kappa_{\rm b}-\kappa_{\rm w}}{2}\right)^{2}}, \tag{4}\] yielding the theoretical value \(\kappa_{\rm d,th}/2\pi=0.43\) MHz. Here \(\kappa_{\rm d,th}\approx 2\kappa_{\rm b}\), which corresponds to the limit \(\kappa_{\rm w}\gg\kappa_{\rm b}\). We attribute the discrepancy between the theoretical and the measured bandwidth to the 100 kHz spectral broadening caused by the finite length of the excitation pulses.

Figure 3: a) Time traces of the SMPD operated in cyclic mode; each vertical line represents one photon detection. The power of the coherent tone sent on the buffer resonator is progressively increased from 0 W (dark blue) to 54 zW (12000 photon/s) (orange). b) Detected count rate as a function of the incoming photon rate. The efficiency \(\eta\) is extracted with a linear fit (solid line). The deviation from the linear behaviour is due to the detector saturation. c) Dark count rate as a function of time. Each point is the average rate over \(\approx 12\) s, which corresponds to \(10^{6}\) cycles. The dashed line represents the dark count saturation \(\alpha=85\) s\({}^{-1}\). d) Inferred efficiency of the SMPD as a function of the frequency of the coherent tone applied on the buffer resonator; the power of the tone is still 360 zW. Solid lines represent a Lorentzian fit. The detector bandwidth, defined as the FWHM, is \(\kappa_{\rm d}/2\pi=0.57\) MHz.

## V Dark counts

The dark count rate is estimated by measuring the count rate of the detector in the absence of input photons, as illustrated in the top panel of Fig. 3a. The dark count rate is found to be 60 s\({}^{-1}\) for the first few minutes of operation. As shown in Fig. 3c, when operated on hour timescales, we observe a slight rise of the dark count rate to 85 s\({}^{-1}\) during the first hour, after which it remains stable within \(\pm 4\) s\({}^{-1}\) over 10 hours. The initial dark count rise is attributed to the heating of the cold stage of the refrigerator due to the continuous power delivered by the qubit pump. The sensitivity of the detector in the steady-state regime is then simply given by Eq. (1) and yields a value \(\mathcal{S}=10^{-22}\) W/\(\sqrt{\text{Hz}}\).
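The efficiency budget of Sec. III and the bandwidth formula of Eq. (4) can be checked numerically with the values quoted in the text (our own verification sketch):

```python
import numpy as np

# Efficiency budget: eta_theory = eta_4wm * eta_RO * eta_D * eta_qubit
T1, T_d, T_cycle = 37.0, 10.0, 11.9               # microseconds
eta_4wm, eta_RO = 0.86, 0.73                       # transfer and readout efficiencies
eta_D = T_d / T_cycle                              # duty cycle, ~0.84
eta_qubit = (T1 / T_d) * (1 - np.exp(-T_d / T1))   # relaxation during detection
print(f"eta_theory = {eta_4wm * eta_RO * eta_D * eta_qubit:.2f}")  # ~0.46

# Detection bandwidth, Eq. (4), with kappa_b/2pi = 0.2 MHz, kappa_w/2pi = 1.8 MHz
kb, kw = 0.2, 1.8                                  # MHz (the 2*pi factors cancel)
delta2 = ((kb - kw) / 2) ** 2
kd = np.sqrt(2) * np.sqrt(np.sqrt(kb**2 * kw**2 + delta2**2) - delta2)
print(f"kappa_d,th / 2pi = {kd:.2f} MHz")          # ~0.43 MHz, as quoted
```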
## VI Dark count budget

The dark counts \(\alpha\) can be decomposed into three main contributions: the thermal population of the qubit, \(\alpha_{\text{qubit}}\), the heating of the qubit by the pump, \(\alpha_{\text{4wm}}\), and the presence of thermal photons in the input lines, \(\alpha_{\text{th}}\). The resulting dark count rate is the sum of the three contributions, \(\alpha=\alpha_{\text{qubit}}+\alpha_{\text{4wm}}+\alpha_{\text{th}}\), and each of them can be addressed individually. The first contribution is the probability of finding the qubit in its excited state in the absence of the four-wave mixing process. It depends on the qubit excitation probability after the reset, \(p_{\text{reset}}\), and on the relaxation rate \(T_{1}^{-1}\) of the qubit toward its equilibrium population \(p_{\text{eq}}\) (see Fig. 2c,d), such that \(\alpha_{\text{qubit}}=\frac{p_{\text{eq}}}{T_{1}}\eta_{\text{D}}+\frac{p_{\text{reset}}}{T_{\text{cycle}}}\). We evaluate this contribution to \(\alpha_{\text{qubit}}=5\) s\({}^{-1}\) using the parameters \(p_{\text{eq}},T_{1},p_{\text{reset}},\eta_{\text{D}}\) and \(T_{\text{cycle}}\) defined in the previous sections. To mitigate this source of noise, the qubit is thermalized by filtering the line over a broad frequency range up to the IR domain (Eccosorb filter) and by a properly designed electromagnetic shield composed of three interleaved screens in \(\mu\)-metal, copper and aluminium (see appendix). The second contribution is the spurious heating of the qubit by the pump tone, \(\alpha_{\text{4wm}}\). This contribution is measured by applying a pump tone detuned from the four-wave mixing condition while measuring the equilibrium population of the qubit. As shown in Fig. 2b, in these conditions \(p_{\text{eq}}=3\cdot 10^{-4}\), a value within the fluctuation interval of the equilibrium population. This contribution to the overall dark count rate is therefore considered negligible. The third contribution, \(\alpha_{\text{th}}\), is due to the presence of spurious photons in the input transmission lines. The integration of the mean number of photons per mode in the buffer input line, \(\bar{n}_{\text{b}}\), over the linewidth of the detector \(\kappa_{\text{d}}\) gives the corresponding dark count rate \(\alpha_{\text{th}}=80\) s\({}^{-1}\). At the cryostat base temperature (10 mK), this contribution should be negligible, as the Planck law would predict an average number of photons per mode \(\bar{n}_{\text{b}}=3\cdot 10^{-15}\); however, it is notoriously difficult to thermalize the microwave field at such low temperatures. Based on a Johnson-Nyquist description of thermal noise, we can derive the explicit relation \[\alpha_{\text{th}}=\frac{\kappa_{\text{d}}}{4}\eta\bar{n}_{\text{b}}. \tag{5}\] To verify the validity of this relation, we measure the thermal dark count rate as a function of a well-defined mode population, i.e. when the refrigerator temperature is above 40 mK. This temperature is measured by a thermometer anchored to the mixing chamber plate.

Figure 4: Johnson-Nyquist law demonstrated with the SMPD. a) Qubit relaxation time \(T_{1}\) and b) qubit equilibrium population \(p_{\text{eq}}\), taken every minute. These values are used to determine the value of \(\alpha_{\text{qubit}}\). The fridge temperature is shown in dashed grey lines. c) Thermal dark count rate \(\alpha_{\text{th}}=\alpha-\alpha_{\text{qubit}}\) for different refrigerator temperatures, shown in dashed grey lines. Each point corresponds to the average dark count rate over \(10^{5}\) cycles; the light blue areas correspond to the data selected to extract the averages while avoiding the transient regime. d) Relation between the mode population \(\bar{n}_{\text{b}}\), calculated from the refrigerator temperature, and the thermal dark count rate \(\alpha_{\text{th}}\). Each point (purple) corresponds to the average of the colored areas of (c); an example of the distribution is given in the inset for the green zone. The solid line corresponds to the Johnson-Nyquist relation, where \(\eta\) is the SMPD efficiency and \(\kappa_{\text{d}}\) the SMPD bandwidth.
We acquired the dark count rate \(\alpha\) in a range from 10 mK to 100 mK, waiting each time for the end of the transient regime. Since the heating is not selective, all parts of the chip are affected, including the transmon qubit, which causes an increase in its equilibrium population \(p_{\mathrm{eq}}\) and a decrease in its relaxation time \(T_{1}\). To take these effects into account, these two quantities are measured every minute during the experiment (Fig. 4a,b), giving minute-by-minute monitoring of \(\alpha_{\mathrm{qubit}}\). The thermal dark count rate \(\alpha_{\mathrm{th}}=\alpha-\alpha_{\mathrm{qubit}}\) is plotted as a function of time (see Fig. 4c) and with respect to the thermal photon population \(\bar{n}_{\mathrm{b}}\) calculated from the refrigerator temperature (see Fig. 4d). The relationship between \(\alpha_{\mathrm{th}}\) and \(\bar{n}_{\mathrm{b}}\) is linear with a slope of \(\eta\kappa_{\mathrm{d}}/4\), therefore validating Eq. (5).

As explained earlier, at 10 mK, \(\bar{n}_{\mathrm{b}}\) is decoupled from the refrigerator temperature, but we can estimate an equivalent electromagnetic temperature from the dark count measurement. At 10 mK, the measured dark count rate is \(\alpha=85\) s\({}^{-1}\) (see Fig. 3c); as we evaluate \(\alpha_{\mathrm{qubit}}\) to 5 s\({}^{-1}\), the equivalent \(\alpha_{\mathrm{th}}\) is 80 s\({}^{-1}\), corresponding to \(\bar{n}_{\mathrm{b}}=6.5\cdot 10^{-5}\). By using Eq. (5), the equivalent electromagnetic temperature of the input line is 35 mK. In principle, one could further improve the SMPD performance by increasing the attenuation of the lines. However, the temperature of the microwave radiation is challenging to reduce arbitrarily close to the cryostat base temperature, as this requires a large number of attenuators that are well thermally anchored.

## VII Conclusion

In conclusion, we have demonstrated the operation of a single microwave photon detector with a sensitivity \(\mathcal{S}=10^{-22}\) W/\(\sqrt{\mathrm{Hz}}\). The efficiency of the device reaches 0.43 and is quantitatively understood from the contributions of the detector duty cycle, the qubit's ability to store an excitation, the qubit readout, and the four-wave mixing efficiency. It can be improved in future devices with longer transmon relaxation times, noting that qubit \(T_{1}\) values up to several hundred microseconds have been demonstrated [22; 29]. The second key quantity studied in this article is the dark count rate. We have demonstrated that most of these false-positive events are caused by spurious photons due to the electromagnetic temperature of the line. We have also verified the direct relation between the count rate and the thermal occupation of the lines, opening the way to using the SMPD as an absolute thermometer in the 10-150 mK range.
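As a concrete illustration of the temperature estimate and the absolute-thermometer use discussed above, the sketch below inverts the Planck (Bose-Einstein) occupation to convert a measured mode population into an equivalent electromagnetic temperature. The buffer-mode frequency of 7 GHz is an assumed value, as it is not restated in this section; with it, the quoted numbers (\(\bar{n}_{\rm b}\approx 3\cdot 10^{-15}\) at 10 mK, and 35 mK for \(\bar{n}_{\rm b}=6.5\cdot 10^{-5}\)) are approximately recovered.

```python
import numpy as np

h = 6.62607015e-34    # Planck constant (J s)
kB = 1.380649e-23     # Boltzmann constant (J/K)
f_buffer = 7e9        # assumed buffer-mode frequency (Hz)

def n_bar(T):
    """Mean thermal photon number per mode at temperature T (K)."""
    return 1.0 / np.expm1(h * f_buffer / (kB * T))

def T_equiv(n):
    """Equivalent temperature (K) for a measured mode population n."""
    return h * f_buffer / (kB * np.log(1.0 / n + 1.0))

print(f"n_bar at 10 mK: {n_bar(0.010):.1e}")                  # ~3e-15, negligible
print(f"T for n_bar = 6.5e-5: {T_equiv(6.5e-5)*1e3:.0f} mK")  # ~35 mK
```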
Even though further improvements of the device performance are desirable in the future, its high sensitivity has already enabled new experiments, such as single-spin ESR spectroscopy [22], as well as a proof-of-principle axion search.

## Acknowledgements

We acknowledge technical support from P. Senat, D. Duet, P.-F. Orfila and S. Delprat, and are grateful for fruitful discussions within the Quantronics group. We acknowledge support from the Agence Nationale de la Recherche (ANR) through the DARKWADOR (ANR-19-CE47-0004) project. We acknowledge support from the Region Ile-de-France through the DIM SIRTEQ (REIMIC project), from the AIDAS virtual joint laboratory, and from the France 2030 plan under the ANR-22-PETQ-0003 grant. This project has received funding from the European Research Council under grant no. 101042315 (INGENIOUS). We acknowledge IARPA and Lincoln Labs for providing the Josephson Traveling-Wave Parametric Amplifier.
2302.09281
A New Scientific Indexing Model: U-Index
The h-index has become very popular and is now widely used as a scientific performance criterion worldwide. However, this indexing method does not correctly measure performance or career characteristics because of the parameters used to form its measurement basis. The h-index is built on the citation (C) and paper (N) parameters, which involve no logical criterion in the counting process, so measurement on this basis can only give quantity results, not quality information. Therefore, we need a new indexing instrument that also reveals the scientific quality unique to an individual author, one that even accounts for the effect of multiple co-authorships. Ipso facto, we create a new bibliometric indicator, or academic performance indicator, called the u-index.
Ugur Saglam, Fatih Canata
2023-02-18T10:36:54Z
http://arxiv.org/abs/2302.09281v1
# A New Scientific Indexing Model: U-Index

###### Abstract

The h-index has become very popular and is now widely used as a scientific performance criterion worldwide. However, this indexing method does not correctly measure performance or career characteristics because of the parameters used to form its measurement basis. The h-index is built on the citation (C) and paper (N) parameters, which involve no logical criterion in the counting process, so measurement on this basis can only give quantity results, not quality information. Therefore, we need a new indexing instrument that also reveals the scientific quality unique to an individual author, one that even accounts for the effect of multiple co-authorships. Ipso facto, we create a new bibliometric indicator, or academic performance indicator, called the u-index.

Alternative indexing methods; Academic performance indicator; Impact factor; H-index; U-index.

## 1 Introduction

Nowadays, it is becoming important to quantify the scientific performance of authors. Research performance indicators mostly use bibliometric instruments based on parameters such as the number of papers, the number of citations, or the impact factor of journals [1]. Papers are shared with researchers and the public through scientific venues such as journals, conferences, and congresses. Every scientific paper takes its place in the scientific community by citing earlier work, and the originality of a paper is judged by its number of citations and its scientific contribution. Lotka [2] first suggested measuring scientific performance by the number of papers, while Gross and Gross [3] suggested the number of citations. Garfield [4] founded the Institute for Scientific Information, first introduced the journal impact factor (JIF), classified citations into categories, and created the Science Citation Index, which listed journals according to this metric. The JIF was designed to help librarians assess the impact of journals according to citations, and in the following years journals began to be ranked by their impact factors [5]. Although the JIF started as an auxiliary instrument for improving librarians' core collections, it later evolved into a dominant criterion in appointment and promotion, in finding a job or project funding, and in measuring the performance of researchers [6]. Especially in recent years, the use of bibliometric quantities as performance criteria for researchers has drawn attention to bibliometric instruments. The most popular metric used to measure the individual performance of researchers, and the source of many discussions, is the h-index [7]. For instance, citation databases such as Web of Science, Scopus, and Google Scholar feature the h-index instead of other metrics [8]. Hirsch [9] published the essay on the h-index, which is assumed to capture both the productivity and the impact of a researcher's papers. The h-index can be defined as the largest number h such that an author has published h papers that have each been cited at least h times. Hirsch [9] states that the index is an important instrument for evaluating researchers competing for the same funds and resources. Since its publication, the advantages of the h-index, assumed to be a dominant and common metric across citation databases, have been stated as follows: the h-index is the primary and intelligible combined metric for measuring the individual performance of researchers; the metric combines the number of papers with their citation impact.
The h-index is a cumulative metric, and merely increasing the number of papers has no effect on it; the h-index rewards steady performance rather than fast growth; and a metric that measures the scientific performance of a researcher may play an important role in academic promotions, research fund assignments, and scientific awards. After its publication, the h-index was criticized, especially by scientists who study bibliometrics. Because it ignores the publication year and the research area, the h-index is considered problematic for comparing people who work in different research areas and in different periods [10]. Given the dependence of publications and citations on research lifetime, comparing researchers at different stages of their academic lives is considered a serious mistake [11, 12, 13]. Another problem of the h-index, which is intended to estimate the performance of an individual researcher, is that the metric does not account for the number of authors of a publication [9]. The lack of a reasoned basis for ranking papers by their citation impact indicates that the h-index does not comply with bibliometric standards [14]. Basing the h-index on two parameters with no logical relation between them, the number of papers and the number of citations, is regarded as an exceptional approximation by bibliometric standards [11]. Nearly 37 different metric variants have been suggested to overcome the problems and deficiencies of the h-index [15]. These variants additionally account for discipline differences, self-citations, co-authorship, career and publication lifetimes, and publications cited outside the h-index core list [15]. The fundamental aim of these models is to correct the deficiencies of the h-index and to develop a comprehensive metric by adjusting parameters such as research lifetime and co-authorship [16].

## 2 Method

We have been searching for a new bibliometric indicator, or scientific performance index, for the reasons mentioned in the previous section. We have examined numerous bibliometric indexes, especially the h-index and its variants, which share the same methodological defects, and arrived at some serious criticisms of the current performance indicators. The parameters of the measurement should be chosen properly, the number of parameters may even need to be increased, and the measuring method itself should not bias the value of the performance indicator. Thus, an alternative scientific index has been developed from basic parameters that can easily be obtained from databases for practical purposes, namely the citation count (C), the impact factor (IF), and the paper count (N). Moreover, a logical, or so to say semantic, condition has been imposed on the counting process of the measurement method. The currently used method, the h-index, uses only the citation parameter to order the papers. However, we think that the papers have to be assessed according to three quantities: the citation count (C), the impact factor (IF) of the journal, and the average citation/average impact factor ratio (CIF). In this way, we expect the indexing method to be a more reliable and effective performance indicator. The alternative indexing method behaves critically by considering the relation between citations and the impact factor.
In this way, many of the tendencies of an author who wants to increase their citation count become explainable: using the popularity of the journal, the author, or the field; preferring the most influential journals or collaborations; publishing a series of biased journal publications; joining large scientific communities; or taking part in experimental science organizations. Such cases can cause an artificial positive bias in the h-index or in the apparent performance of an author. The academic performance of an individual author has to be measured by some indexing instrument. The new model, called the u-index, provides an academic performance indicator by using a measurement method different from that of the h-index. We first define the h-index, and then the differences between the two models. The h-index is the maximum value of h such that the given author has published h papers and each paper has been cited at least h times. The index is designed to improve on simpler measures such as the total number of citations or publications. Since citation conventions differ widely among fields, the index is supposed to work properly only for comparing scientists working in the same field. The u-index is the maximum value of u such that, after ordering the given author's papers by citation count from highest to lowest, each of the first u papers has a citation/impact factor ratio (C/IF) at least as large as the average citation/average impact factor ratio (CIF).

\begin{table} \begin{tabular}{c|c|c|c|c|c} \hline Author & C & IF & C/IF & CIF & N(C/IF \(\geq\) CIF) \\ \hline pub1 & highest & \ldots & \ldots & const. & 1 \\ \hline pub2 & \ldots & \ldots & \ldots & const. & 2 \\ \hline \ldots & & & & & \ldots \\ \hline pubu & \ldots & \ldots & \ldots & const. & u \\ \hline \ldots & & & & & \(-\) \\ \hline pubn & lowest & \ldots & \ldots & const. & \(-\) \\ \hline \end{tabular} \end{table}
Table 1: The u-index is designed to improve on simpler measures such as the total number of citations or publications, additionally using the impact factors of the journals. Although citation conventions differ widely among fields, the index is supposed to work properly for comparing scientists working in all fields.

To find a realistic personal performance indicator for scientists working in different fields, the natural publication behaviour has to be characterized theoretically: papers in higher impact factor journals, which exhibit higher average citation counts, must receive more citations, and vice versa; and the index must serve as a performance indicator for scientists working in different areas. The papers of an author should be ordered by citation count, as in the h-index. Then the last position at which the citation/impact factor ratio (C/IF) of each publication is greater than or equal to the average citation/average impact factor ratio (CIF) gives the index, as shown in Fig. 1 (a code sketch of this counting rule is given below). In this way, index values become relatively comparable and the index can be used as a performance indicator in all fields. The u-index can be considered a powerful dynamical instrument for assessing the individual performance of researchers, and it fluctuates cyclically with academic performance. The index tends to increase with new publications, decrease with citations to the current publications, and remain steady when the impact factor-citation ratio is balanced or when there are no new publications or citations. Thus this indicator exhibits a cyclical dynamical behavior that follows the performance of researchers.
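A minimal sketch of this counting rule is given below, assuming that CIF is computed as the author's average citation count divided by the average impact factor, as defined above. Note that the CIF constants in Tables 2-5 appear to be illustrative rather than computed from this formula, and the stop-at-first-failure counting is inferred from the N column of those tables. The example uses the publication list of author a from Table 2.

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

def u_index(pubs):
    """u-index for pubs given as (citations, impact_factor) pairs.

    Papers are ordered by citations (highest first) and counted while
    C/IF stays at or above CIF = mean(citations) / mean(impact factors);
    counting stops at the first paper below the threshold.
    """
    cif = (sum(c for c, _ in pubs) / len(pubs)) / (sum(f for _, f in pubs) / len(pubs))
    u = 0
    for c, f in sorted(pubs, key=lambda p: p[0], reverse=True):
        if c / f < cif:
            break
        u += 1
    return u

pubs = [(770, 4.15), (650, 3.84), (120, 6.15), (100, 1.86)]  # author a, Table 2
print(h_index([c for c, _ in pubs]))  # 4
print(u_index(pubs))                  # 2: publication 3 stops the count
```

With these numbers the h-index is 4 while the u-index is 2: publication 3, placed in a high impact factor journal but weakly cited, caps the count, which is precisely the senior author anomaly discussed in Section 3.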
Not tending to increase steadily is another important characteristic of the index, and so we can speak of a saturation behavior for every individual performance. The saturation of an index at the end of a researcher's career is an essential characteristic of a performance indicator; this saturation point can be greater or lower than the index value during the researcher's active years. We recover the h-index from the u-index when the citation count and the journal impact factor are well correlated for each publication, so the u-index can be considered a generalized form of the h-index. In fact, the h-index measures only sound career development, in agreement with the productivity and honesty of an author; moreover, the h-index is an excellent indicator of idealized career development for individual researchers from all research areas. Nowadays, however, researchers exhibit substantial behavioral anomalies throughout their careers while competing for academic promotions, research funds, and scientific awards. The u-index can detect such career anomalies and thereby offers a chance to mend the conditions of competition.

The impact factor of a journal is an effective parameter determining the average citation count of a paper published in that journal. Thus, an author's citation performance can be artificially affected by the impact factor or the average popularity of journals. When we want to assess the pure scientific performance of an author, we have to separate the average citation factor of journals from the total number of citations. In this way, the vicious cycle formed by the citation-impact factor correlation can be identified more clearly. This correlation can be described as the number of citations altering the journal impact factor, or the average number of citations of an author's paper, and vice versa. The main point is that a paper should be published in a journal that satisfies the appropriate scope and popularity conditions; in other words, the journal should not consider the popularity of authors when refereeing and accepting papers.

Figure 1: Citation/impact factor of the journal (C/IF) versus paper number (N) graph.

Authors and also journals exhibit behavioral anomalies to preserve and enhance their own index values and current impact factors, respectively. For authors: the desire to publish all papers in higher impact factor journals and, as a result, to join the groups of senior authors, popular scientific communities, or large research groups. For journals: accepting many papers from senior authors and from the collaborations or scholars of senior authors. The behavioral anomalies mentioned above arise as a consequence of accepting quantitative academic measurements as indicators of academic performance. The new model, the u-index, also foresees some longer-term outcomes. If every author minds the scope of journals and chooses the one that best matches it, papers are published in the most relevant journals, and the popularity and the impact factor of a journal become better correlated. In turn, the model says that if scope and impact factor are consistent, then the impact factor and the citation count of a paper, and hence the citation count of a paper and the performance index of an author, will be consistent. Thus, the impact factors of journals with the same scope will be balanced, and there will be no difference between journals in the same scope.
The u-index can be considered a new bibliometric instrument that offers an alternative point of view on the current problems of quantitative academic measurement. In some instances, a huge difference in index value may be observed between researchers working even in the same field. This is a problem related not only to the individual performance of researchers but also to their strategic decisions to enhance their quantitative performance indicators.

## 3 Approach

Metrics place a stress factor on authors that rewards only productivity, not creativity or originality, and this is mostly visible in scientific performance. Thus, in the publication process, journals and authors can adopt several strategies to preserve their popularity through the impact factor and h-index parameters. It can thereupon be seen that certain kinds of abnormalities related to authors and journals form the backbone of metric systems. We investigated the publication lists of numerous researchers and then selected some sample careers to identify and discuss abnormal behaviors. We group commonly observed author-level abnormal behaviors into categories: the senior author effect, the large research group collaboration effect, the senior co-author effect, and the influential journal effect.

In the case of the senior author effect, journals tend to accept papers from authors who are well known in their research areas. However, we notice a low C/high IF ratio when we check the publication lists of such authors. For the h-index there is no problem as a performance indicator, but the u-index flags this as an abnormality. Author a would reach a greater u-index value if publication 3 were not in the author's publication list in Table 2. In this situation, the author uses their popularity to publish the paper in a journal with a higher impact factor, but the paper is not cited as expected. In fact, the paper should have been published in a journal with an impact factor matching its popularity.

\begin{table} \begin{tabular}{|l|l|l|l|l|l|} \hline **Author a** & C & IF & C/IF & CIF & N(C/IF \(\geq\) CIF) \\ \hline pub1 & 770 & 4.15 & 185.54 & 50 & 1 \\ \hline pub2 & 650 & 3.84 & 169.27 & 50 & 2 \\ \hline **pub3** & **120** & **6.15** & **19.51** & **50** & \(-\) \\ \hline pub4 & 100 & 1.86 & 53.76 & 50 & \(-\) \\ \hline \end{tabular} \end{table}
Table 2: Senior author effect.

In the case of the large research group collaboration effect, junior authors tend to join well-known research teams to raise their numbers of papers and citations. But we see an abrupt drop in the citation counts when we check the publication lists. For the h-index, all cited works count toward the performance indicator, but the u-index detects publication-citation activities that are not coherent with the other papers in the publication list. Author b may reach a greater u-index value if the first three publications are not in the publication list in Table 3.
In this situation, author b uses the popularity of a scientific organization to obtain numerous citations. In fact, author b should derive an index value from the papers whose citation behaviour is coherent with the rest of the publication list, excluding the outlier papers published with the research community.

\begin{table} \begin{tabular}{|l|l|l|l|l|l|} \hline **Author b** & C & IF & C/IF & CIF & N(C/IF \(\geq\) CIF) \\ \hline pub1 & 5200 & 3.56 & 1460.67 & 420 & 1 \\ \hline pub2 & 4160 & 4.88 & 852.46 & 420 & 2 \\ \hline pub3 & 3500 & 3.82 & 916.23 & 420 & 3 \\ \hline **pub4** & **180** & **2.12** & **84.91** & **420** & \(-\) \\ \hline pub5 & 100 & 1.88 & 53.19 & 420 & \(-\) \\ \hline \end{tabular} \end{table}
Table 3: Large research group collaboration effect.

In the case of the senior co-author effect, relatively junior authors tend to publish some papers with senior scientists in the related area to enhance their academic performance. But we see a critically low C/IF ratio in some papers, even ones published in the same journal. For the h-index there is only one parameter, C, but the u-index detects the abnormal correlation between C and IF in the publication list. Author c attempts to reach a greater index value via the first two publications of the list in Table 4. In this situation, author c uses the popularity of a co-author to boost their career via a journal with a high impact factor. In fact, author c exploits this opportunity to develop good scientific relations and to accelerate career development in the later years of their research lifetime.

\begin{table} \begin{tabular}{|l|l|l|l|l|l|} \hline **Author c** & C & IF & C/IF & CIF & N(C/IF \(\geq\) CIF) \\ \hline pub1 & 120 & 4.58 & 26.20 & 12 & 1 \\ \hline pub2 & 100 & 4.16 & 24.04 & 12 & 2 \\ \hline **pub3** & **20** & **4.16** & **4.81** & **12** & \(-\) \\ \hline pub4 & 15 & 4.58 & 3.28 & 12 & \(-\) \\ \hline \end{tabular} \end{table}
Table 4: Senior co-author effect.

In the case of the influential journal effect, authors tend to publish their research in the journals with the highest impact factors to increase their index values. But we commonly see a low C/high IF ratio in this type of journal/paper pairing. For the h-index there is no problem as long as the number of citations is high, but the u-index takes note of the journals with higher impact factors. Author d publishes some papers in journals with a higher impact factor to reach a greater h-index value, as seen in the publication list in Table 5. In this situation, author d uses the momentum of their career to publish in the most influential journals; in fact, author d takes advantage of papers with numerous citations to win acceptance from prestigious journals.

\begin{table} \begin{tabular}{|l|l|l|l|l|l|} \hline **Author d** & C & IF & C/IF & CIF & N(C/IF \(\geq\) CIF) \\ \hline pub1 & 2400 & 3.56 & 674.16 & 152 & 1 \\ \hline pub2 & 2200 & 4.18 & 526.32 & 152 & 2 \\ \hline pub3 & 1400 & 2.88 & 486.11 & 152 & 3 \\ \hline pub4 & 1000 & 3.12 & 320.51 & 152 & 4 \\ \hline **pub5** & **850** & **39.22** & **21.67** & **152** & \(-\) \\ \hline pub6 & 780 & 3.48 & 224.14 & 152 & \(-\) \\ \hline \end{tabular} \end{table}
Table 5: Influential journal effect.

Here we have summarized the author-level abnormal behaviors and mentioned some potential journal-level abnormal behaviors, which are mostly an intricate correlation between authors and journals. The h-index may be used as a tool by many authors for the purposes of academic promotion, research fund assignment, and scientific awards. Therefore, a new metric that is not liable to abuse has to be developed as a more reliable academic performance indicator. The u-index exhibits a behavior-responsive algorithm that may be described, somewhat artfully, as having a form of semantic awareness.

## 4 Conclusion

The u-index can be considered a counting criterion on top of the h-index that restricts the counting according to the average value (CIF).
Measuring academic performance via a bibliometric indicator may be seen as impossible, or at least as not the purely quantitative problem we treat it as nowadays. However, researchers need to be assessed via constructive measurement systems, such as the bibliometric instruments used for academic promotions, research fund assignments, and scientific awards. Although the quantity-quality dilemma is seen as unsolvable by any bibliometric instrument, we want to determine an academic performance indicator from the basic parameters of the measurement, namely the citation count (C), the impact factor (IF), and the average citation/average impact factor ratio (CIF), which can be obtained from the platforms providing scientific and academic data, information, and analytics. The u-index is a semi-semantic algorithm that detects the abnormal behaviors of researchers and helps to identify them. Four common types of behavioral abnormality have been detected in the publication lists of numerous researchers, and each type has been exemplified separately to identify its motivation. The common anxiety, as is well known, is to secure a good academic career opportunity, a high index value, and thus some kind of promotion, funding, or award. Thus, we can surmise that nearly all authors from all research fields feel the same pressure to obtain high index values in the course of their career development. This is evidently becoming a more common type of anxiety among early-career researchers, and even among senior ones who do not have a relatively satisfying career among their contemporaries. Measuring academic performance should be a positive stimulant and an analysis that is not only quantitative but also qualitative. The h-index is cited here mostly as an example because of its popularity as a purely quantitative analyzer and also as the source of the aforementioned anxiety of researchers. The prior and present metric models, created from basic parameters and simple mathematical instruments, have not responded to the need for performance measurement standards. At this point, the need to develop a proper indicator of academic performance is the main motivation of this paper, and so we have created a model, called the u-index, that relies on parameters easily accessible from databases. We have analyzed the u-index by carefully studying a good number of researchers, obtained several categories of author-level abnormality, presented them in the tables above, and discussed them in detail therein.
2304.12249
Fuzzy clustering of ordinal time series based on two novel distances with economic applications
Time series clustering is a central machine learning task with applications in many fields. While the majority of the methods focus on real-valued time series, very few works consider series with a discrete response. In this paper, the problem of clustering ordinal time series is addressed. To this aim, two novel distances between ordinal time series are introduced and used to construct fuzzy clustering procedures. Both metrics are functions of the estimated cumulative probabilities, thus automatically taking advantage of the ordering inherent to the series' range. The resulting clustering algorithms are computationally efficient and able to group series generated from similar stochastic processes, reaching accurate results even though the series come from a wide variety of models. Since the dynamics of the series may vary over time, we adopt a fuzzy approach, thus enabling the procedures to locate each series in several clusters with different membership degrees. An extensive simulation study shows that the proposed methods outperform several alternative procedures. Weighted versions of the clustering algorithms are also presented and their advantages with respect to the original methods are discussed. Two specific applications involving economic time series illustrate the usefulness of the proposed approaches.
Ángel López Oriona, Christian Weiss, José Antonio Vilar
2023-04-24T16:39:22Z
http://arxiv.org/abs/2304.12249v1
# Fuzzy clustering of ordinal time series based on two novel distances with economic applications

###### Abstract

Time series clustering is a central machine learning task with applications in many fields. While the majority of the methods focus on real-valued time series, very few works consider series with a discrete response. In this paper, the problem of clustering ordinal time series is addressed. To this aim, two novel distances between ordinal time series are introduced and used to construct fuzzy clustering procedures. Both metrics are functions of the estimated cumulative probabilities, thus automatically taking advantage of the ordering inherent to the series' range. The resulting clustering algorithms are computationally efficient and able to group series generated from similar stochastic processes, reaching accurate results even though the series come from a wide variety of models. Since the dynamics of the series may vary over time, we adopt a fuzzy approach, thus enabling the procedures to locate each series in several clusters with different membership degrees. An extensive simulation study shows that the proposed methods outperform several alternative procedures. Weighted versions of the clustering algorithms are also presented and their advantages with respect to the original methods are discussed. Two specific applications involving economic time series illustrate the usefulness of the proposed approaches.

## 1 Introduction

Time series clustering concerns the problem of splitting a set of unlabelled time series into homogeneous groups in such a way that similar series are placed together in the same group and dissimilar series are located in different groups. Indeed, the clustering task is driven by the desired similarity notion, which can be established in different ways when dealing with time series. Frequently, the purpose is to identify groups with similar generating models, which allows one to characterize a few dynamic patterns without having to analyze and model each single time series. The latter, besides being computationally demanding, is rarely the objective when dealing with a huge number of series. The complexity inherent to clustering objects evolving over time (fixing a suitable dissimilarity principle, dealing with series of unequal length, high computational complexity,...), together with the vast range of applications where time series clustering plays a fundamental role, accounts for the growing interest in this challenging topic. Comprehensive overviews including current advances, future prospects, interesting references, and specific application areas are provided by [1, 2, 3]. The majority of clustering methods focus on real-valued time series. For instance, some techniques are based on discriminating between different geometric profiles in the time series data set by employing the dynamic time warping (DTW) distance or some related dissimilarities [4, 5, 6]. A different approach consists of assuming that each time series has been generated from a specific class of models and then executing a clustering algorithm based on the estimated models [7, 8, 9, 10]. Some other works propose to replace each series in the collection by a vector of model-free features describing its behaviour in a suitable way. Then, the computed vectors are used as input to a standard clustering method [11, 12, 13, 14, 15, 16, 17]. Alternative techniques are based on reducing the dimensionality of the original time series as a preliminary step [18, 19, 20].
Then, a specific clustering procedure is applied to the set of reduced objects. The suitability of each class of algorithms usually depends on the nature of the time series and the final goal of the user, with no approach dominating the remaining ones in every possible context. According to the cluster assignment criterion, two different paradigms are considered depending on whether a "hard" or "soft" partition is constructed. Traditional clustering leads to hard solutions, where each data object is located in exactly one cluster. Overlapping groups are not allowed, which can be too inflexible in many real-life applications where the cluster boundaries are not clearly determined or some objects are equidistant from various clusters. Soft clustering techniques provide a more versatile tool by allowing graded membership of data objects to clusters. In a soft partition, every object is associated with a vector of membership degrees indicating the amount of confidence in the assignment to each of the respective clusters. A well-known approach to performing soft clustering is via fuzzy clustering methods [21, 22], based on minimizing a cost function involving distances to centroids and the so-called _fuzzifier_ controlling the allowed level of overlap. Adoption of the fuzzy approach is usually advantageous when dealing with time series data sets because regime shifts are frequent in practice. A considerably smaller number of works have dealt with the clustering of time series having a range other than a real-valued one. For instance, [23] and [24] introduced clustering algorithms for count time series based on Poisson mixtures and integer-valued generalized autoregressive conditional heteroscedasticity (INGARCH) models, respectively. Several approaches to cluster categorical time series (CTS) have also been proposed. [25] considered two model-based procedures relying on time-homogeneous first-order Markov chains, which are applied to a panel of Austrian wage mobility data. A dissimilarity assessing both closeness of raw categorical values and proximity between dynamic behaviours was proposed by [26]. The metric is used to perform clustering aimed at identifying different web-user profiles according to their navigation behaviour. [27] constructed a robust tree-based sequence encoder for clustering CTS, which is applied to some real-world data sets containing biological sequences. Two novel distances between CTS were introduced by [28] and employed to perform hard and soft clustering. An overview of model-based clustering of categorical sequences is provided in [29], and several methods based on finite mixtures of Markov models are implemented in the R package **ClickClust** [30]. To the best of our knowledge, all the proposed clustering methods for CTS are designed for the general case of series taking _nominal_ values, i.e., where no underlying ordering exists in the categorical range. Clearly, these methods are still valid for clustering _ordinal_ time series (OTS). However, when applying these procedures to OTS data sets, one completely ignores the latent ordering, which could be of great help in identifying the underlying partition. For instance, consider a data set including three clusters, \(\mathcal{C}_{1},\mathcal{C}_{2}\), and \(\mathcal{C}_{3}\), characterised by time series taking "low", "moderate", and "large" values, respectively.
In such a case, it is reasonable to consider the groups \(\mathcal{C}_{1}\) and \(\mathcal{C}_{3}\) to be the furthest apart, and some degree of ordinal information should be provided to the clustering algorithm. Additionally, OTS data sets appear rather naturally in several application domains, including economics (e.g., wage mobility data of different individuals [25] or credit ratings of different countries [31]), environmental sciences (e.g., amount of cloud coverage in different regions [32]), and medicine (e.g., clinical scores of different subjects [33]), among others. The previous considerations clearly highlight the need for clustering algorithms specifically designed to deal with OTS. Moreover, given the complex nature of time series databases, the adoption of the fuzzy approach allows the resulting partition to gain the versatility needed to capture changes in the dynamic behaviours of the series over time. The main goal of this paper is to introduce fuzzy clustering algorithms for OTS capable of: (i) grouping together ordinal sequences generated from similar stochastic processes, (ii) achieving accurate results with series coming from a broad variety of ordinal models, and (iii) performing the clustering task in an efficient manner. To this aim, we first introduce two dissimilarity measures between OTS. Since our objective is to group series with similar underlying structures, both metrics are based on extracted features providing information about marginal properties and serial dependence patterns. Specifically, the first dissimilarity considers proper estimates of the cumulative probabilities, while the second one combines structure-based statistical features characterizing a given OTS (dispersion, skewness, serial dependence...), which, in turn, are defined in terms of the estimated cumulative probabilities. Thus, the distances take advantage of the latent ordering existing in the series' range. Both metrics are used as input to the standard fuzzy \(C\)-medoids algorithm, which allows for the assignment of gradual memberships of the OTS to clusters. Assessment of the clustering approaches is carried out by means of a comprehensive simulation study including different ordinal processes commonly used in the literature. The performance of some alternative dissimilarities designed to deal with real-valued or nominal time series is also examined for comparison. Two evaluation schemes are considered. The first one aims at analysing the ability of the procedures to assign high (low) membership values when a given series pertains (does not pertain) to a specific cluster defined in advance. The second scheme also assesses the ability of the approaches to handle OTS showing an ambiguous behaviour, i.e., series whose dynamic structure is not associated with a specific group. More sophisticated versions of both clustering procedures are also constructed by giving different weights to the marginal and serial components of the proposed metrics. In this way, the influence of each component in the computation of the clustering solution can be automatically determined during the optimisation process. Lastly, two specific applications involving economic time series are presented to show the usefulness of the proposed clustering techniques. The rest of the paper is organised as follows.
Several quantities describing an ordinal process, and two distances between OTS based on proper estimates of these features, are introduced in Section 2, where some simple examples are also shown to illustrate the suitability of the metrics. In Section 3, fuzzy clustering algorithms based on the proposed dissimilarities are constructed. The methods are evaluated in Section 4 by means of a broad simulation study where several alternative procedures are analysed as well. Section 5 presents two applications of the proposed techniques to data sets containing economic time series. Some concluding remarks are summarised in Section 6. The Appendix provides the proofs of two results presented in the manuscript.

## 2 Two distance measures between ordinal time series

In this section, after providing some background on ordinal stochastic processes, two novel distances between ordinal time series are introduced. The use of estimated cumulative probabilities to construct both metrics is also discussed and properly motivated.

### Some background on ordinal processes

Let \(\{X_{t}\}_{t\in\mathbb{Z}}\), \(\mathbb{Z}=\{\ldots,-1,0,1,\ldots\}\), be a strictly stationary stochastic process having the ordered categorical range \(\mathcal{S}=\{s_{0},\ldots,s_{n}\}\), with \(s_{0}<s_{1}<\ldots<s_{n}\). The process \(\{X_{t}\}_{t\in\mathbb{Z}}\) is often referred to as an _ordinal process_, while the categories in \(\mathcal{S}\) are frequently called the _states_. Let \(\{C_{t}\}_{t\in\mathbb{Z}}\) be the count process with range \(\{0,\ldots,n\}\) generating the ordinal process \(\{X_{t}\}_{t\in\mathbb{Z}}\), i.e., \(X_{t}=s_{C_{t}}\). It is well known that the distributional properties of \(\{C_{t}\}_{t\in\mathbb{Z}}\) (e.g., stationarity) are properly inherited by \(\{X_{t}\}_{t\in\mathbb{Z}}\) [34]. In particular, the marginal probabilities can be expressed as

\[p_{i}=P(X_{t}=s_{i})=P(C_{t}=i),\quad i=0,\ldots,n, \tag{1}\]

while the lagged joint probabilities (for a lag \(l\in\mathbb{Z}\)) are given by

\[p_{ij}(l)=P(X_{t}=s_{j},X_{t-l}=s_{i})=P(C_{t}=j,C_{t-l}=i),\quad i,j=0,\ldots,n. \tag{2}\]

Note that both the marginal and the joint probabilities are still well defined in the general case of a stationary stochastic process with nominal range. By contrast, in an ordinal process, one can also consider the corresponding cumulative probabilities defined, for \(i,j=0,\ldots,n-1\) and \(l\in\mathbb{Z}\), as

\[\begin{split} f_{i}=& P(X_{t}\leq s_{i})=P(C_{t}\leq i),\\ f_{ij}(l)=& P(X_{t}\leq s_{j},X_{t-l}\leq s_{i})=P(C_{t}\leq j,C_{t-l}\leq i),\end{split} \tag{3}\]

for the marginal and the joint case, respectively. In practice, the values of \(p_{i}\), \(p_{ij}(l)\), \(f_{i}\), and \(f_{ij}(l)\) must be estimated from a \(T\)-length realization of the ordinal process, \(X_{T}=\{x_{1},\ldots,x_{T}\}\), usually referred to as an _ordinal time series_ (OTS). Natural estimates of these probabilities are given by

\[\widehat{p}_{i}=\frac{1}{T}\sum_{k=1}^{T}I(x_{k}=s_{i}),\;\;\;\widehat{p}_{ij}(l)=\frac{1}{T-l}\sum_{k=1}^{T-l}I(x_{k}=s_{i})I(x_{k+l}=s_{j}), \tag{4}\]

\[\widehat{f}_{i}=\frac{1}{T}\sum_{k=1}^{T}I(x_{k}\leq s_{i}),\;\;\;\widehat{f}_{ij}(l)=\frac{1}{T-l}\sum_{k=1}^{T-l}I(x_{k}\leq s_{i})I(x_{k+l}\leq s_{j}), \tag{5}\]

where \(I(\cdot)\) denotes the indicator function. Probabilities \(p_{i}\), \(p_{ij}(l)\), \(f_{i}\) and \(f_{ij}(l)\) summarize the marginal and joint distributional properties of the process \(\{X_{t}\}_{t\in\mathbb{Z}}\).
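A minimal sketch of these estimators, assuming the states are integer-coded so that \(x_{k}=i\) represents \(s_{i}\) (the coding and the helper name are our assumptions, not the authors' implementation):

```python
import numpy as np

def ots_probabilities(x, n, lag=1):
    """Sample estimates of Eqs. (4)-(5) for an OTS coded as integers 0..n.

    Returns (p_hat, f_hat, f_lag): marginal probabilities p_hat[i] for
    i = 0..n, cumulative probabilities f_hat[i] for i = 0..n-1, and the
    matrix f_lag[i, j] estimating P(X_t <= s_j, X_{t-lag} <= s_i).
    """
    x = np.asarray(x)
    T = len(x)
    p_hat = np.array([np.mean(x == i) for i in range(n + 1)])
    f_hat = np.cumsum(p_hat)[:-1]
    past, future = x[:T - lag], x[lag:]
    f_lag = np.array([[np.mean((past <= i) & (future <= j))
                       for j in range(n)] for i in range(n)])
    return p_hat, f_hat, f_lag

# Example on a synthetic series with range {s_0, ..., s_3}.
x = np.random.default_rng(0).integers(0, 4, size=500)
p_hat, f_hat, _ = ots_probabilities(x, n=3)
print(p_hat.round(2), f_hat.round(2))
```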
An alternative way to characterize the process consists of constructing a fixed-length vector formed by statistical features measuring structural properties (centrality, dispersion, skewness, serial dependence...). Following [31], a range of this type of features can be quantified by using expected values of suitable distances between ordinal categories. Thus, specific expected values of the so-called _block_ distance, \(d_{\text{o},1}(s_{i},s_{j})=|i-j|\), lead to the set of structural features provided in Table 1. For example, \(\text{loc}_{d_{\text{o},1}}\) is the expected value of the block distance between the marginal variable \(X_{t}\) and the state \(s_{0}\), \(\text{disp}_{d_{\text{o},1}}\) is the expected value of the block distance between two copies of the marginal variable,...(see [31] for details). While the first four measures in Table 1 summarise the marginal behaviour of the process, the ordinal Cohen's \(\kappa\), \(\kappa_{d_{\text{o},1}}(l)\), evaluates the degree of serial dependence at a given lag \(l\in\mathbb{Z}\). Thus, we have available a unified distance-based approach to obtain relevant features providing a comprehensive picture of the process. Note that the block distance between two given categories simply counts the number of categories between them, but it makes use of the latent ordering thus providing a natural way of assessing dissimilarity between ordinal categories. Furthermore, \(d_{\text{o},1}\) does not depend on the labeling selected for the categories, which ensures that the features based on the expected values of this distance are invariant to scale transformations. These nice properties justify the use of this distance-based approach. When dealing with a realization \(X_{T}\), estimates \(\widehat{\text{loc}}_{d_{\text{o},1}}\), \(\widehat{\text{disp}}_{d_{\text{o},1}}\), \(\widehat{\text{asym}}_{d_{\text{o},1}}\), \(\widehat{\text{skew}}_{d_{\text{o},1}}\), and \(\widehat{\kappa}_{d_{\text{o},1}}(l)\) for the respective features in Table 1 can be obtained by considering their sample counterparts, i.e. using \(\widehat{f}_{i}\) and \(\widehat{f}_{ij}(l)\) in (5). A detailed analysis of the asymptotic properties of these estimators is provided in [31]. ### Two novel dissimilarities between ordinal time series Suppose we have two stationary ordinal processes \(\{X_{t}^{(1)}\}_{t\in\mathbb{Z}}\) and \(\{X_{t}^{(2)}\}_{t\in\mathbb{Z}}\) having the same range \(\mathcal{S}\). A simple dissimilarity criterion between both processes can be established by measuring the discrepancy between their corresponding representations in terms of cumulative probabilities. In this way, for a given collection of \(L\) lags, \(\mathcal{L}=\{l_{1},\ldots,l_{L}\}\), we define a distance \(d_{1}\) as \[d_{1}\big{(}X_{t}^{(1)},X_{t}^{(2)}\big{)}=d_{1,M}\big{(}X_{t}^{(1)},X_{t}^{(2 )}\big{)}+d_{1,B}\big{(}X_{t}^{(1)},X_{t}^{(2)}\big{)}, \tag{6}\] with \[\begin{split} d_{1,M}\big{(}X_{t}^{(1)},X_{t}^{(2)}\big{)}=& \sum_{i=0}^{n-1}\Big{(}f_{i}^{(1)}-f_{i}^{(2)}\Big{)}^{2},\\ d_{1,B}\big{(}X_{t}^{(1)},X_{t}^{(2)}\big{)}=&\sum _{k=1}^{L}\sum_{i=0}^{n-1}\sum_{j=0}^{n-1}\Big{(}f_{ij}^{(1)}(l_{k})-f_{ij}^{( 2)}(l_{k})\Big{)}^{2},\end{split} \tag{7}\] where the superscripts (1) and (2) indicate that the corresponding probabilities refer to the processes \(\{X_{t}^{(1)}\}_{t\in\mathbb{Z}}\) and \(\{X_{t}^{(2)}\}_{t\in\mathbb{Z}}\), respectively. The terms \(d_{1,M}\) and \(d_{1,B}\) assess dissimilarity between marginal and lagged bivariate probabilities, respectively. 
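Plugging the sample probabilities of (5) into the two terms above yields the estimate used later in (10). A sketch, reusing the hypothetical ots_probabilities helper from the previous snippet:

```python
import numpy as np  # assumes ots_probabilities() from the previous sketch

def d1_hat(x1, x2, n, lags=(1,)):
    """Sample analogue of d_1 in (6)-(7) for two integer-coded OTS."""
    _, f1, _ = ots_probabilities(x1, n)
    _, f2, _ = ots_probabilities(x2, n)
    d = np.sum((f1 - f2) ** 2)              # marginal term d_{1,M}
    for lag in lags:                        # bivariate term d_{1,B}
        _, _, g1 = ots_probabilities(x1, n, lag)
        _, _, g2 = ots_probabilities(x2, n, lag)
        d += np.sum((g1 - g2) ** 2)
    return d
```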
The latter term involves the set \(\mathcal{L}\), which must be fixed in advance according to the lags at which one wishes to evaluate the serial dependence. It is worth remarking that, by considering the cumulative probabilities in the definition of \(d_{1}\), we obtain an appropriate dissimilarity measure taking into account the ordering existing in both processes (see Section 2.3). An alternative dissimilarity measure, considering features based on the block distance \(d_{\text{o},1}\), is defined as

\[d_{2}\big{(}X_{t}^{(1)},X_{t}^{(2)}\big{)}=d_{2,M}\big{(}X_{t}^{(1)},X_{t}^{(2)}\big{)}+d_{2,B}\big{(}X_{t}^{(1)},X_{t}^{(2)}\big{)}, \tag{8}\]

with
In fact, some numerical experiments have revealed that, in most cases, a clustering algorithm based on one of the individual distances, \(\widehat{d}_{1}\) or \(\widehat{d}_{2}\), outperforms a method based on the combined distance \(\widehat{d}_{1}+\widehat{d}_{2}\) in terms of clustering accuracy. This is due to the fact that, by using all features to describe a given OTS, redundant information is being provided (note that the features in Table 1 are defined in terms of cumulative probabilities). It is worth noting that the use of redundant features is known to be counterproductive in clustering and classification contexts. _Remark 2_.: _Advantages of feature-based distances. Both \(\widehat{d}_{1}\) and \(\widehat{d}_{2}\) belong to the class of feature-based distances, since they are aimed at comparing extracted features. The discriminatory capability of this kind of distances depends on selecting the most suitable features for a given context. Whether a proper set of features is used, then this class of distances present very nice properties such as dimensionality reduction, low computational complexity, robustness to the generating model, and versatility to compare series with different lengths. It is worth remarking that these properties are not satisfied by other dissimilarities between time series. For instance, metrics based on raw data usually involve high computational cost and require series having the same length, while model-based metrics are expected to be strongly sensitive to model misspecification._ _Remark 3_.: _On the distance \(d_{2}\). Distance \(d_{2}\) and its estimate rely on features based on expectations of the block distance between ordinal categories, \(d_{o,1}\). Other feature-based distances can be introduced following an analogous approach, but starting from alternative distances defined on \(\mathcal{S}\times\mathcal{S}\) (see Section 2 in [31]). However, these alternative vias led to distances showing a worse performance than \(d_{2}\) in the numerical experiments carried out throughout this work for clustering purposes, i.e. \(d_{2}\) exhibited the highest capability to discriminate between different OTS (see Sections 3 and 4). For this reason, \(d_{2}\) was selected. ### Motivating the use of cumulative probabilities This section illustrates the advantages of using cumulative probabilities to differentiate between ordinal processes. For the sake of simplicity, we first consider a toy example involving synthetic data and put the focus on the marginal case. Let us consider three stationary processes with ordinal range \(\mathcal{S}=\{s_{0},s_{1},s_{2},s_{3}\}\), denoted by \(X_{t}^{(1)}\), \(X_{t}^{(2)}\), and \(X_{t}^{(3)}\), with marginal probabilities given by the vectors \(\mathbf{p}_{i}=\Big{(}P\big{(}X_{t}^{(i)}=s_{0}\big{)},\ldots,P\big{(}X_{t}^{(i)}= s_{3}\big{)}\Big{)}\), \(i=1,2,3\), respectively, such that \[\mathbf{p}_{1}=(0.4,0.1,0.1,0.4),\ \mathbf{p}_{2}=(0.1,0.4,0.1,0.4),\ \mathbf{p}_{3}=(0.1,0.1,0.4,0.4). \tag{11}\] The distance between two processes can be measured as the squared Euclidean distance between their corresponding marginal probability vectors, that is by defining \(d^{*}\big{(}X_{t}^{(i)},X_{t}^{(j)}\big{)}=\|\mathbf{p}_{i}-\mathbf{p}_{j}\|^{2}\). Based on this metric, we have \[d^{*}\big{(}X_{t}^{(1)},X_{t}^{(2)}\big{)}=d^{*}\big{(}X_{t}^{(1)},X_{t}^{(3)} \big{)}=d^{*}\big{(}X_{t}^{(2)},X_{t}^{(3)}\big{)}=0.18, \tag{12}\] thus concluding that the three processes are equidistant. 
However, the underlying ordering in the set \(\mathcal{S}\) suggests that process \(X_{t}^{(1)}\) should be closer to \(X_{t}^{(2)}\) than to \(X_{t}^{(3)}\), since category \(s_{1}\) is closer to \(s_{0}\) than category \(s_{2}\). Therefore, distance \(d^{*}\) ignores the latent ordering, and one could conclude that it is not appropriate for comparing two ordinal processes. Now, consider \(d_{1,M}\) in (7), defined as the squared Euclidean distance between the vectors of cumulative probabilities \(\mathbf{f}_{i}=\Big{(}P\big{(}X_{t}^{(i)}\leq s_{0}\big{)},\ldots,P\big{(}X_{t}^{(i)}\leq s_{2}\big{)}\Big{)}\), for \(i=1,2,3\). From (11) it follows that

\[\mathbf{f}_{1}=(0.4,0.5,0.6),\ \mathbf{f}_{2}=(0.1,0.5,0.6),\ \mathbf{f}_{3}=(0.1,0.2,0.6), \tag{13}\]

and the pairwise distances based on \(d_{1,M}\) take the values

\[d_{1,M}\big{(}X_{t}^{(1)},X_{t}^{(2)}\big{)}=d_{1,M}\big{(}X_{t}^{(2)},X_{t}^{(3)}\big{)}=0.09,\ \ d_{1,M}\big{(}X_{t}^{(1)},X_{t}^{(3)}\big{)}=0.18. \tag{14}\]

According to distance \(d_{1,M}\), the pair \((X_{t}^{(1)},X_{t}^{(2)})\) is closer than the pair \((X_{t}^{(1)},X_{t}^{(3)})\). Moreover, process \(X_{t}^{(2)}\) is located at the same distance from \(X_{t}^{(1)}\) and \(X_{t}^{(3)}\). This is reasonable since the marginal distributions of both \(X_{t}^{(1)}\) and \(X_{t}^{(3)}\) can be obtained from the distribution of \(X_{t}^{(2)}\) by transferring the same amount of probability from category \(s_{1}\) either one step downward (to \(s_{0}\)) or upward (to \(s_{2}\)), respectively. In essence, cumulative probabilities allow us to better differentiate between ordinal distributions because they implicitly take into account the underlying ordering of the states. Specifically, the amount of dissimilarity is lower when the differences between marginal distributions happen at closer categories. Therefore, metric \(d_{1,M}\) assigns distance values consistent with the inherent order of the range \(\mathcal{S}\). The above example highlights the importance of considering cumulative probabilities to properly measure dissimilarity between ordinal processes. In fact, the computations in (12) show that the use of probability mass functions can lead to misleading results when an ordinal range is considered. These arguments can be justified by means of the following proposition, which expresses the metric \(d_{1,M}\) in terms of discrepancies between the probability mass functions.

_Proposition 1_.: Let \(\{X_{t}\}_{t\in\mathbb{Z}}\) and \(\{Y_{t}\}_{t\in\mathbb{Z}}\) be two stationary ordinal processes with range \(\mathcal{S}=\{s_{0},s_{1},\ldots,s_{n}\}\) and vectors of marginal probabilities \((p_{0},p_{1},\ldots,p_{n})\) and \((q_{0},q_{1},\ldots,q_{n})\), respectively. Then, the distance \(d_{1,M}\) between them can be written as

\[d_{1,M}\big{(}X_{t},Y_{t}\big{)}=\sum_{i=0}^{n-1}(n-i)(p_{i}-q_{i})^{2}+2\sum_{j=0}^{n-2}\sum_{k=j+1}^{n-1}(n-k)(p_{j}-q_{j})(p_{k}-q_{k}). \tag{15}\]

The proof of Proposition 1 is shown in the Appendix. Proposition 1 expresses \(d_{1,M}\) as the sum of two terms. The first term involves the squared differences \((p_{i}-q_{i})^{2}\) appearing in the definition of the metric \(d^{*}\), while the second one includes the cross products \((p_{j}-q_{j})(p_{k}-q_{k})\). In both cases, specific weights are given to the corresponding differences. The weights are higher when marginal probabilities at lower categories are considered, i.e., discrepancies in earlier states have a larger influence on the computation of \(d_{1,M}\).
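A quick numerical check of the toy example, reproducing the values in (12) and (14):

```python
import numpy as np

p1 = np.array([0.4, 0.1, 0.1, 0.4])  # marginal probabilities of X^(1)
p2 = np.array([0.1, 0.4, 0.1, 0.4])  # X^(2)
p3 = np.array([0.1, 0.1, 0.4, 0.4])  # X^(3)

def d_star(p, q):
    """Squared Euclidean distance between probability mass functions."""
    return np.sum((p - q) ** 2)

def d1M(p, q):
    """Squared Euclidean distance between cumulative probabilities."""
    return np.sum((np.cumsum(p)[:-1] - np.cumsum(q)[:-1]) ** 2)

for a, b, pa, pb in [(1, 2, p1, p2), (1, 3, p1, p3), (2, 3, p2, p3)]:
    print(f"d*(X{a},X{b}) = {d_star(pa, pb):.2f}, d1M(X{a},X{b}) = {d1M(pa, pb):.2f}")
# d* equals 0.18 for every pair, while d1M gives 0.09, 0.18 and 0.09,
# matching (12) and (14).
```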
Note that this property sheds light on the differences encountered between the distance computations obtained in (12) and (14). The previous considerations illustrate the advantage of using cumulative probabilities when measuring dissimilarity between the marginal distributions of two ordinal processes. An analogous argument could be provided when assessing dissimilarity between lagged joint distributions. In other words, the metric \(d_{1,B}\) is more appropriate to evaluate dissimilarity in the ordinal setting than an analogous distance based on the probabilities \(p_{ij}(l)\) in (2). We omit the theoretical considerations for the bivariate case for the sake of simplicity.

Next, we show an interesting example involving real-world data. Let us consider the data set described in Section 8 of [31], which contains credit ratings according to Standard & Poor's (S&P) for the 27 countries of the European Union (EU) plus the United Kingdom (UK). Each country is described by means of a monthly time series with values ranging from "D" (worst rating) to "AAA" (best rating). Specifically, the whole range consists of the \(n+1=23\) states \(s_{0},\ldots,s_{22}\), given by "D", "SD", "R", "CC", "CCC-", "CCC", "CCC+", "B-", "B", "B+", "BB-", "BB", "BB+", "BBB-", "BBB", "BBB+", "A-", "A", "A+", "AA-", "AA", "AA+" and "AAA", respectively. The sample period spans from January 2000 to December 2017, thus resulting in serial realizations of length \(T=216\). Figure 1 shows the time series associated with Estonia (top panel) and Slovakia (bottom panel). For a clear visualization, the \(y\)-axis was limited to ratings above "B+".

It is clear from Figure 1 that both countries exhibit a stepwise upward pattern during the whole period, which indicates that their creditworthiness has shown a gradual improvement since the year 2000. However, the ascending trend involves different states for each one of the countries. For instance, Slovakia shows a broader range than Estonia in terms of monthly credit ratings, which leads to a higher number of different (1-step) transitions between the states. Therefore, to properly measure the distance between the serial dependence structures of both series, one should take into consideration how far a given category is from the rest. Note that, in this example, the distance based on the joint probabilities in (2) could lead to meaningless conclusions, since the direction in which a particular transition occurs is ignored and therefore each pair of states would be treated as equidistant. This problem can be circumvented by employing the cumulative bivariate probabilities in (3), which allow one to detect the corresponding upward movements and identify a similar underlying pattern in both time series. It is worth highlighting that some of the series of the remaining countries show behaviours similar to the ones displayed in Figure 1.

Figure 1: Monthly series of S&P credit ratings for Estonia (top panel) and Slovakia (bottom panel).

## 3 Fuzzy clustering algorithms for ordinal time series

This section is devoted to introducing fuzzy clustering algorithms for ordinal series based on the proposed distances \(\widehat{d}_{1}\) and \(\widehat{d}_{2}\). First, a standard fuzzy \(C\)-medoids method relying on both metrics is presented. Next, an extension of this model is constructed by giving weights to the marginal and serial components of \(\widehat{d}_{1}\) and \(\widehat{d}_{2}\). The iterative solutions of the weighted models are derived.
### A fuzzy \(C\)-medoids model based on the proposed dissimilarities

Consider a set of \(s\) ordinal time series, \(\mathbb{S}=\{X_{T_{1}}^{(1)},\ldots,X_{T_{s}}^{(s)}\}\), where the \(i\)th series has length \(T_{i}\). We wish to perform fuzzy clustering on the elements of \(\mathbb{S}\) in such a way that the series generated from similar stochastic processes are grouped together. To this aim, we propose to use fuzzy \(C\)-medoids clustering models based on the distances \(\widehat{d}_{1}\) and \(\widehat{d}_{2}\) introduced in Section 2.2. Thus, the objective is to find the subset of \(\mathbb{S}\) of size \(C\), \(\widetilde{\mathbb{S}}=\{\widetilde{X}_{t}^{(1)},\ldots,\widetilde{X}_{t}^{(C)}\}\), whose elements are usually referred to as medoids, and the \(s\times C\) matrix of fuzzy coefficients, \(\mathbf{U}=(u_{ic})\), with \(i=1,\ldots,s\) and \(c=1,\ldots,C\), solving the minimization problem

\[\min_{\widetilde{\mathbb{S}},\mathbf{U}}\sum_{i=1}^{s}\sum_{c=1}^{C}u_{ic}^{m}\widehat{d}_{p}(i,c),\ \ \ \text{subject to}\ \sum_{c=1}^{C}u_{ic}=1,\,u_{ic}\geq 0, \tag{16}\]

where \(\widehat{d}_{p}(i,c)=\widehat{d}_{p}\big{(}X_{T_{i}}^{(i)},\widetilde{X}_{t}^{(c)}\big{)}\), \(p=1,2\), \(u_{ic}\in[0,1]\) represents the membership degree of the \(i\)th OTS in the \(c\)th cluster, and \(m>1\) is a real number, usually referred to as the fuzziness parameter, regulating the fuzziness of the partition. For \(m=1\), the crisp version of the algorithm is obtained, so the solution takes the form \(u_{ic}=1\) if the \(i\)th series pertains to cluster \(c\) and \(u_{ic}=0\) otherwise. As the value of \(m\) increases, the boundaries between clusters get softer and the resulting partition is fuzzier.

The constrained optimisation problem in (16) can be solved by means of the Lagrangian multipliers method, which leads to an iterative algorithm that alternately optimizes the membership degrees and the medoids. Specifically (see [35]), the iterative solutions for the membership degrees are given by

\[u_{ic}=\Bigg{[}\sum_{c^{\prime}=1}^{C}\Bigg{(}\frac{\widehat{d}_{p}(i,c)}{\widehat{d}_{p}(i,c^{\prime})}\Bigg{)}^{\frac{1}{m-1}}\Bigg{]}^{-1}, \tag{17}\]

for \(p=1,2\), \(i=1,\ldots,s\), and \(c=1,\ldots,C\). Once the membership degrees are obtained through (17), the \(C\) series minimising the objective function in (16) are selected as the new medoids. Specifically, for each \(c\in\{1,\ldots,C\}\), the index \(j_{c}\) is obtained satisfying

\[j_{c}=\operatorname*{arg\,min}_{1\leq j\leq s}\sum_{i=1}^{s}u_{ic}^{m}\widehat{d}_{p}\big{(}X_{T_{i}}^{(i)},X_{T_{j}}^{(j)}\big{)},\ \ p=1,2. \tag{18}\]

This two-step procedure is repeated until there is no change in the medoids or a maximum number of iterations is reached. An outline of the corresponding clustering algorithm is given in Algorithm 1.
```
1: Fix \(C\), \(m\), \(max.iter\) and \(p\in\{1,2\}\)
2: Set \(iter=0\)
3: Pick the initial medoids \(\widetilde{\mathbb{S}}=\{\widetilde{X}_{t}^{(1)},\ldots,\widetilde{X}_{t}^{(C)}\}\)
4: repeat
5:    Set \(\widetilde{\mathbb{S}}_{\mathrm{OLD}}=\widetilde{\mathbb{S}}\) {Store the current medoids}
6:    Compute \(u_{ic}\), \(i=1,\ldots,s\), \(c=1,\ldots,C\), using (17)
7:    For each \(c\in\{1,\ldots,C\}\), determine the index \(j_{c}\in\{1,\ldots,s\}\) using (18)
8:    Set \(\widetilde{X}_{t}^{(c)}=X_{t}^{(j_{c})}\), for \(c=1,\ldots,C\) {Update the medoids}
9:    \(iter\leftarrow iter+1\)
10: until \(\widetilde{\mathbb{S}}_{\mathrm{OLD}}=\widetilde{\mathbb{S}}\) or \(iter=max.iter\)
11: return the final fuzzy partition and the corresponding set of medoids
```
**Algorithm 1** Fuzzy \(C\)-medoids algorithm based on the proposed distances.

_Remark 4_.: _Advantages of the fuzzy \(C\)-medoids model_. The fuzzy \(C\)-medoids procedure outlined in Algorithm 1 allows us to identify a set of representative OTS belonging to the original collection, the medoids, whose overall distance to all other series in the set is minimal when the membership degrees with respect to a specific cluster are considered as weights (see the computation of \(j_{c}\) in Algorithm 1). As observed by [36], it is often desirable that the prototypes synthesising the structural information of each cluster belong to the original data set, instead of obtaining "virtual" prototypes, as in the case of fuzzy \(C\)-means-based approaches [37; 21]. For instance, the original set of series could be replaced by the set of medoids for exploratory purposes, thus substantially reducing the computational complexity of subsequent data mining tasks. The fuzzy \(C\)-medoids algorithm also exhibits the classical advantages of the fuzzy paradigm, including the ability to produce richer clustering solutions than hard methods, the identification of the vague nature of the prototypes, and the possibility of dealing with time series sharing different dynamic patterns, among others. The behaviour of the fuzzy \(C\)-medoids algorithm based on the metrics \(\widehat{d}_{1}\) and \(\widehat{d}_{2}\) is analysed in Section 4.2 through an extensive simulation study.

### A weighted fuzzy \(C\)-medoids model based on the proposed dissimilarities

Both \(\widehat{d}_{1}\) and \(\widehat{d}_{2}\) are formed by two terms measuring, respectively, the amount of discrepancy between the marginal and bivariate features of the corresponding OTS. By construction, each term receives the same weight (one) in the objective function (16). However, it is reasonable to think that one of these components may have a higher influence than the other one to identify the true clustering structure. This would be the case if, for example, the prototypes present different marginal distributions but all the series exhibit a similar serial dependence structure. By contrast, the lagged joint distributions might play a more important role when time series display significant serial dependence at different lags. According to these considerations, an extension of the fuzzy \(C\)-medoids model outlined in Algorithm 1 is proposed by modifying the objective function in (16) in order to permit different weights for each one of the components of \(\widehat{d}_{1}\) and \(\widehat{d}_{2}\).
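Before formalizing the weighted variant, the following minimal NumPy sketch gives a code-level reading of Algorithm 1 based on the updates (17) and (18); it assumes a precomputed matrix `D` of pairwise distances \(\widehat{d}_{p}(i,j)\) and is an illustration of ours, not the authors' implementation:

```python
import numpy as np

def fuzzy_c_medoids(D, C, m=1.8, max_iter=100, seed=0):
    """Fuzzy C-medoids over a precomputed s x s distance matrix D."""
    rng = np.random.default_rng(seed)
    s = D.shape[0]
    medoids = rng.choice(s, size=C, replace=False)
    for _ in range(max_iter):
        d = D[:, medoids] + 1e-12  # s x C distances to medoids; epsilon avoids 0/0
        # Membership update (17)
        ratio = (d[:, :, None] / d[:, None, :]) ** (1.0 / (m - 1.0))
        U = 1.0 / ratio.sum(axis=2)
        # Medoid update (18): series minimising the membership-weighted total distance
        new_medoids = np.array([np.argmin((U[:, c] ** m) @ D) for c in range(C)])
        if set(new_medoids) == set(medoids):
            break  # no change in the medoids
        medoids = new_medoids
    return U, medoids
```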
For \(p=1,2\), the weighted model is formalized by means of the minimization problem

\[\begin{cases}\min_{\widetilde{\mathbb{S}},\mathbf{U},\beta}\sum_{i=1}^{s}\sum_{c=1}^{C}u_{ic}^{m}\Big{[}\beta^{2}\widehat{d}_{p,M}(i,c)+(1-\beta)^{2}\widehat{d}_{p,B}\big{(}i,c\big{)}\Big{]}\\ \text{subject to}\\ \sum_{c=1}^{C}u_{ic}=1,\ u_{ic}\geq 0,\ \text{ for }i=1,\ldots,s,\,c=1,\ldots,C,\ \text{and }\beta\in[0,1],\end{cases} \tag{19}\]

where \(\widehat{d}_{p,M}(i,c)=\widehat{d}_{p,M}\big{(}X_{T_{i}}^{(i)},\widetilde{X}_{t}^{(c)}\big{)}\) and \(\widehat{d}_{p,B}(i,c)=\widehat{d}_{p,B}\big{(}X_{T_{i}}^{(i)},\widetilde{X}_{t}^{(c)}\big{)}\). The minimization problem (19) involves the additional parameter \(\beta\), referred to as the weight, regulating the influence of each distance component in the computation of the clustering solution. Note that this approach implies that \(\beta\) has to be objectively estimated via the optimization algorithm, instead of being fixed a priori by the user. It is worth highlighting that the weighted approach for fuzzy clustering of time series has been considered in several works (see e.g. [38, 10, 39]). The following proposition provides the iterative solutions of problem (19) regarding the membership degrees and the weight \(\beta\).

_Proposition 2_.: For \(p=1,2\), \(i=1,\ldots,s\) and \(c=1,\ldots,C\), the optimal iterative solutions of the minimization problem (19) are given by

\[u_{ic}=\Bigg{[}\sum_{c^{\prime}=1}^{C}\left(\frac{\beta^{2}\widehat{d}_{p,M}(i,c)+(1-\beta)^{2}\widehat{d}_{p,B}(i,c)}{\beta^{2}\widehat{d}_{p,M}(i,c^{\prime})+(1-\beta)^{2}\widehat{d}_{p,B}(i,c^{\prime})}\right)^{\frac{1}{m-1}}\Bigg{]}^{-1} \tag{20}\]

and

\[\beta=\frac{\sum_{i=1}^{s}\sum_{c=1}^{C}u_{ic}^{m}\widehat{d}_{p,B}(i,c)}{\sum_{i=1}^{s}\sum_{c=1}^{C}u_{ic}^{m}\Big{[}\widehat{d}_{p,M}(i,c)+\widehat{d}_{p,B}(i,c)\Big{]}}. \tag{21}\]

The proof of Proposition 2 is presented in the Appendix. Proposition 2 provides a way of updating the membership matrix and the weight \(\beta\). For fixed \(\mathbf{U}\) and \(\beta\), the medoid for cluster \(c\), denoted by \(j_{c}\), \(c=1,\ldots,C\), is obtained as the solution of the minimization problem

\[j_{c}=\operatorname*{arg\,min}_{1\leq j\leq s}\sum_{i=1}^{s}u_{ic}^{m}\Big{[}\beta^{2}\widehat{d}_{p,M}(X_{T_{i}}^{(i)},X_{T_{j}}^{(j)})+(1-\beta)^{2}\widehat{d}_{p,B}\big{(}X_{T_{i}}^{(i)},X_{T_{j}}^{(j)}\big{)}\Big{]}. \tag{22}\]

This three-step procedure given by (20), (21), and (22) is repeated until there is no change in the medoids, or a maximum number of iterations is reached. An outline of the corresponding clustering algorithm is given in Algorithm 2.
```
1: Fix \(C\), \(m\), \(max.iter\) and \(p\in\{1,2\}\)
2: Set \(iter=0\)
3: Pick the initial medoids \(\widetilde{\mathbb{S}}=\{\widetilde{X}_{t}^{(1)},\ldots,\widetilde{X}_{t}^{(C)}\}\) and \(\beta\in[0,1]\)
4: repeat
5:    Set \(\widetilde{\mathbb{S}}_{\mathrm{OLD}}=\widetilde{\mathbb{S}}\) {Store the current medoids}
6:    Compute \(u_{ic}\), \(i=1,\ldots,s\), \(c=1,\ldots,C\), using (20)
7:    Compute \(\beta\) using (21)
8:    For each \(c\in\{1,\ldots,C\}\), determine the index \(j_{c}\in\{1,\ldots,s\}\) using (22)
9:    Set \(\widetilde{X}_{t}^{(c)}=X_{t}^{(j_{c})}\), for \(c=1,\ldots,C\) {Update the medoids}
10:   \(iter\leftarrow iter+1\)
11: until \(\widetilde{\mathbb{S}}_{\mathrm{OLD}}=\widetilde{\mathbb{S}}\) or \(iter=max.iter\)
12: return the final partition, the corresponding set of medoids, and the value for \(\beta\)
```
**Algorithm 2** The weighted fuzzy \(C\)-medoids algorithm based on the proposed distances.

_Remark 5_: _Meaning of the weight \(\beta\)._ The parameter \(\beta\) in Algorithm 2 has an interesting statistical meaning. In particular, it attempts to mirror the heterogeneity of the total intra-cluster deviation with respect to both component distances. Specifically, the value of \(\beta\) increases as the total intra-cluster deviation concerning the marginal component decreases (in comparison with the serial component). An analogous reasoning holds for the weight \(1-\beta\). Thus, the optimisation procedure tends to give more emphasis to the component distance capable of increasing the within-cluster similarity. The performance of the weighted fuzzy \(C\)-medoids algorithm based on the proposed dissimilarities is assessed in Section 4.3 by means of several numerical experiments.

## 4 Simulation study

In this section, we carry out a set of simulations with the aim of evaluating the behaviour of the proposed algorithms in different scenarios of OTS clustering. First, we describe some procedures based on alternative distances that we consider for comparison purposes. Next, we explain how the performance of the algorithms is measured, along with the corresponding simulation mechanism and results. Lastly, a sensitivity analysis is carried out to analyse how the clustering accuracy changes with respect to the set of lags (\(\mathcal{L}\)), and a reasonable method for selecting this set is provided.

### Alternative metrics

To shed light on the performance of the proposed fuzzy clustering algorithms, they were compared with some other models based on alternative dissimilarities. The considered approaches are described below.

* _A procedure based on the probability mass functions._ This method considers a distance defined in the same way as \(\widehat{d}_{1}\), but replacing the estimates \(\widehat{f}_{i}^{(k)}\) and \(\widehat{f}_{ij}^{(k)}(l)\) by the probabilities \(\widehat{p}_{i}^{(k)}\) and \(\widehat{p}_{ij}^{(k)}(l)\) in (4), respectively, \(k=1,2\). The corresponding metric is called \(\widehat{d}_{PMF}\). Note that \(\widehat{d}_{PMF}\) is still well defined when dealing with nominal time series, although it ignores the underlying ordering. Therefore, the performance of \(\widehat{d}_{PMF}\) is an essential benchmark for the proposed metric \(\widehat{d}_{1}\), which is specifically designed to deal with ordinal series.
* _Autocorrelation-based clustering_. [11] proposed a distance measure between real-valued time series based on the autocorrelation function.
Each time series is described by means of a vector \(\big{(}\widehat{\rho}(l_{1}),\ldots,\widehat{\rho}(l_{L})\big{)}\) whose components are the estimated autocorrelations for a given set of lags. Then, the metric is defined as the squared Euclidean distance between the vectors representing two time series. We denote this dissimilarity as \(\widehat{d}_{ACF}\). Note that, although \(\widehat{d}_{ACF}\) is well defined only for numerical time series, the distance can be easily computed in the ordinal case by considering the associated count time series (see Section 2.1).
* _Quantile-based clustering_. [14] introduced a clustering method using a dissimilarity based on quantile dependence. Here, each series is replaced by a feature vector containing estimates of the so-called quantile autocovariance function for several pairs of probability levels \((\tau,\tau^{\prime})\in[0,1]^{2}\) and a fixed set of lags. The proposed metric, denoted by \(\widehat{d}_{QAF}\), is defined as the squared Euclidean distance between two vector representations. As in the case of \(\widehat{d}_{ACF}\), in an ordinal context, the computation of \(\widehat{d}_{QAF}\) must be based on the corresponding count time series. Several time series clustering procedures using quantile-based features have been proposed in the literature [40; 15; 16; 17]. These methods usually show a great performance when the clusters are characterised by different nonlinear structures.
* _Model-based approaches relying on first-order Markov chains_. [25] proposed two methods for clustering nominal time series based on first-order Markov chains. The first one assumes the same transition matrix for all the series in a given cluster, while the second technique allows for some degree of intra-group heterogeneity by considering the Dirichlet distribution. Both methods fit finite mixtures of Markov chains by using a Bayesian approach. Although these procedures do not directly use a distance metric, for the sake of homogeneity, we are going to refer to them as \(\widehat{d}_{MC}\).

### Experimental design and results

A broad simulation study was carried out to evaluate the behaviour of the fuzzy \(C\)-medoids algorithm based on metrics \(\widehat{d}_{1}\) and \(\widehat{d}_{2}\). We intended to drive the evaluation process in a way that general conclusions on the performance of both distances can be reached. To this end, two different assessment schemes were designed. The first one includes scenarios with four different groups of OTS, and is aimed at evaluating the ability of the procedures to assign high (low) memberships when a given OTS belongs (does not belong) to a given cluster. The second one consists of scenarios formed by two different groups of OTS plus one additional OTS not belonging to any of the groups. We examine again the membership degrees of the series in the two groups, but also check that the isolated series is not placed in any of the clusters with a high membership. In this case, a cutoff value is used to determine whether or not a membership degree in a given group is enough to assign the OTS to that cluster.

#### 4.2.1 First assessment scheme

We considered three simple scenarios consisting of four clusters represented by the same type of generating processes, denoted by \(\mathcal{C}_{1}\), \(\mathcal{C}_{2}\), \(\mathcal{C}_{3}\), and \(\mathcal{C}_{4}\). Each one of the groups contains five 6-state OTS, which gives rise to a set of 20 OTS defining the true clustering partition.
We attempted to construct scenarios with a wide variety of ordinal models commonly used in practice to deal with OTS. The generating models concerning the count process \(\{C_{t}\}_{t\in\mathbb{Z}}\) in each group are given below for each one of the scenarios.

**Scenario 1**. Fuzzy clustering of OTS based on binomial AR(\(p\)) models [41]. Let \(\pi\in(0,1)\), \(\rho\in\Big{(}\max\big{\{}\frac{-\pi}{1-\pi},\frac{\pi-1}{\pi}\big{\}},1\Big{)}\), \(\beta=\pi(1-\rho)\), \(\alpha=\beta+\rho\). Let the count process \(\{C_{t}\}_{t\in\mathbb{Z}}\) be defined by the recursion

\[C_{t}=\sum_{i=1}^{p}D_{t,i}\Big{(}\alpha\;\bullet_{t}C_{t-i}+\beta\;\bullet_{t}\big{(}n-C_{t-i}\big{)}\Big{)}, \tag{23}\]

where the \(\big{(}D_{t,1},\ldots,D_{t,p}\big{)}\) are independent variables distributed according to MULT(\(1;\phi_{1},\ldots,\phi_{p}\)), with \(\phi_{1}+\ldots+\phi_{p}=1\), and \(\;\bullet_{t}\;\) denotes the binomial thinning operator performed at a specific time \(t\). Here, the binomial thinning operator \(\;\bullet\;\) applied to a count random variable \(Y\) is defined by a conditional binomial distribution, \(\alpha^{\prime}\;\bullet\;Y\sim\mathrm{Bin}(Y,\alpha^{\prime})\), where \(\alpha^{\prime}\in(0,1)\). The processes considered in this scenario are binomial AR(\(1\)), for clusters \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\), and binomial AR(\(2\)) models for \(\mathcal{C}_{3}\) and \(\mathcal{C}_{4}\), with vectors of coefficients given by

\[\begin{array}{llll}\mathcal{C}_{1}:&(\alpha,\beta)=(0.70,0.20)&\mathcal{C}_{3}:&(\alpha,\beta,\phi_{1},\phi_{2})=(0.76,0.06,0.5,0.5)\\ \mathcal{C}_{2}:&(\alpha,\beta)=(0.72,0.12)&\mathcal{C}_{4}:&(\alpha,\beta,\phi_{1},\phi_{2})=(0.91,0.01,0.5,0.5)\end{array}\]

**Scenario 2**. Fuzzy clustering of OTS based on binomial INARCH(\(p\)) models [42]. Let \(\beta,\alpha_{1},\ldots,\alpha_{p}\) be real numbers such that \(\beta,\beta+\sum_{i=1}^{p}\alpha_{i}\in(0,1)\), and assume that the count process \(\{C_{t}\}_{t\in\mathbb{Z}}\) satisfies

\[C_{t}|C_{t-1},C_{t-2},\ldots\;\sim\;\mathrm{Bin}\bigg{(}n,\beta+\frac{1}{n}\sum_{i=1}^{p}\alpha_{i}C_{t-i}\bigg{)}. \tag{24}\]

The considered processes are binomial INARCH(\(1\)) models for \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\), and binomial INARCH(\(2\)) for \(\mathcal{C}_{3}\) and \(\mathcal{C}_{4}\), with vectors of coefficients given by

\[\begin{array}{llll}\mathcal{C}_{1}:&(\alpha_{1},\beta)=(0.30,0.35)&\mathcal{C}_{3}:&(\alpha_{1},\alpha_{2},\beta)=(0.1,0.1,0.2)\\ \mathcal{C}_{2}:&(\alpha_{1},\beta)=(0.30,0.40)&\mathcal{C}_{4}:&(\alpha_{1},\alpha_{2},\beta)=(0.1,0.1,0.4)\end{array}\]

**Scenario 3**. Fuzzy clustering of ordinal logit AR(1) models (see Examples 7.4.6 and 7.4.8 in [34]). Let \(\{C_{t}\}_{t\in\mathbb{Z}}\) be a count process with range \(\{0,1,\ldots,n\}\), and denote by \(\{\boldsymbol{Y}_{t}=(Y_{t,0},\ldots,Y_{t,n})^{\top}\}_{t\in\mathbb{Z}}\) its binarization (i.e., \(C_{t}=k\) if and only if \(Y_{t,k}=1\) and \(Y_{t,k^{\prime}}=0\), \(k^{\prime}\neq k\)) and by \(\{\boldsymbol{Y}_{t}^{*}=(Y_{t,0},\ldots,Y_{t,n-1})^{\top}\}_{t\in\mathbb{Z}}\) its reduced binarization. Let \(\{Q_{t}\}_{t\in\mathbb{Z}}\) be the process formed by independent variables following a standard logistic distribution and assume that

\[C_{t}=j\quad\text{if and only if}\quad Q_{t}-\boldsymbol{\alpha}^{\intercal}\boldsymbol{Y}_{t}^{*}\in[\eta_{j-1},\eta_{j}).
\tag{25}\]

Here, \(\boldsymbol{\alpha}=(\alpha_{1},\ldots,\alpha_{n})^{\top}\in\mathbb{R}^{n}\), and \(-\infty=\eta_{-1}<\eta_{0}<\ldots<\eta_{n-1}<\eta_{n}=+\infty\) are threshold parameters which can be represented by means of the vector \(\boldsymbol{\eta}=(\eta_{0},\ldots,\eta_{n-1})\). The considered processes are four 6-state ordinal logit AR(1) models with vectors of coefficients given by \(\boldsymbol{\eta}=(-2,-1,0,1,2)\) and

\[\begin{array}{ll}\mathcal{C}_{1}:&\boldsymbol{\alpha}=(0.4,-0.8,1.2,1.6,2)^{\top}\\ \mathcal{C}_{2}:&\boldsymbol{\alpha}=(0.6,-1.2,1.8,2.4,3)^{\top}\end{array}\qquad\begin{array}{ll}\mathcal{C}_{3}:&\boldsymbol{\alpha}=(0.8,-1.6,2.4,3.2,4)^{\top}\\ \mathcal{C}_{4}:&\boldsymbol{\alpha}=(1,-2,3,4,5)^{\top}\end{array}\]

As a preliminary step, metric two-dimensional scaling (2DS) based on both \(\widehat{d}_{1}\) and \(\widehat{d}_{2}\) was carried out to gain insight into the capability of these metrics to discriminate between the underlying groups. Given a distance matrix \(\boldsymbol{D}=(D_{ij})_{1\leq i,j\leq s}\), a 2DS finds the points \(\{(a_{i},b_{i}),i=1,\ldots,s\}\) minimizing the loss function called stress, given by

\[\sqrt{\frac{\sum_{i\neq j=1}^{s}(\|(a_{i},b_{i})-(a_{j},b_{j})\|-D_{ij})^{2}}{\sum_{i\neq j=1}^{s}D_{ij}^{2}}}. \tag{26}\]

Thus, the goal is to represent the distances \(D_{ij}\) in terms of Euclidean distances in a 2-dimensional space so that the original distances are preserved as well as possible. The lower the value of the stress function, the more reliable the 2DS configuration. This way, a 2DS plot provides a valuable visual representation of how the elements are located with respect to each other according to the original distances.

To obtain informative 2DS plots, 50 OTS of length \(T=600\) from each generating model were simulated for each scenario. The 2DS was carried out for each set of 200 OTS by computing the pairwise dissimilarity matrices based on \(\widehat{d}_{1}\) and \(\widehat{d}_{2}\). We considered the set of lags \(\mathcal{L}=\{1,2\}\) in Scenarios 1 and 2 and \(\mathcal{L}=\{1\}\) in Scenario 3. The resulting plots are shown in Figure 2, where a different colour was used for each generating process. It is worth highlighting that the \(R^{2}\) value associated with the scaling is above 0.85 in all cases, so the graphs in Figure 2 provide an accurate picture of the underlying representations according to both metrics.

The reduced bivariate spaces in Figure 2 show different configurations. In Scenario 1, the metrics seem able to detect the underlying clustering partition, which is expected, since the four generating processes in this scenario are clearly dissimilar. In Scenario 2, both distances place cluster \(\mathcal{C}_{3}\) quite far from the rest, which is reasonable in view of the coefficients defining the models. In addition, there is a high degree of overlap between clusters \(\mathcal{C}_{1}\) and \(\mathcal{C}_{4}\). This is logical, since the generating models of both clusters are quite similar in terms of marginal distributions and serial dependence (the properties of a binomial \(\mathrm{INARCH}(p)\) model can be seen in [42]). Concerning Scenario 3, \(\widehat{d}_{1}\) and \(\widehat{d}_{2}\) produce plots with substantially different structures. While \(\widehat{d}_{1}\) is capable of successfully identifying the four groups, \(\widehat{d}_{2}\) struggles to separate \(\mathcal{C}_{3}\) and \(\mathcal{C}_{4}\).
This is because some of the features employed by \(\widehat{d}_{2}\) take similar values for the processes behind these clusters (e.g., \(\widehat{\mathrm{disp}}_{d_{o,1}}\) or \(\widehat{\kappa}_{d_{o,1}}(1)\)). In sum, the plots in Figure 2 suggest that \(\widehat{d}_{1}\) and \(\widehat{d}_{2}\) have different levels of difficulty in identifying the true clustering partition.

Figure 2: Two-dimensional scaling planes based on distances \(\widehat{d}_{1}\) and \(\widehat{d}_{2}\) between simulated time series in Scenarios 1, 2 and 3. The series length is \(T=600\).

The simulation study was carried out as follows. For each scenario, 5 OTS of length \(T\in\{200,600\}\) were generated from each process in order to execute the clustering algorithms twice and examine the effect of the series length. In all cases, the range of the count process \(\{C_{t}\}_{t\in\mathbb{Z}}\) was set to \(\{0,1,\ldots,5\}\), giving rise to ordinal realizations with range \(\{s_{0},s_{1},\ldots,s_{5}\}\). Several values of the fuzziness parameter \(m\) were considered, namely \(m\in\{1.2,1.4,1.6,1.8,2\}\). The problem of selecting a proper value for \(m\) has been extensively addressed in the literature, although there seems to be no consensus about the optimal way of choosing this parameter (see the discussion in Section 3.1.6 of [12]). When \(m=1\), the hard version of the fuzzy \(C\)-medoids algorithm is obtained, while excessively large values of \(m\) result in a partition with all memberships close to \(1/C\), thus having a large degree of overlap between groups. As a consequence, selecting these values for \(m\) is not recommended [43]. Moreover, in the context of time series clustering, several works consider a grid of values for \(m\) similar to our choice [11, 44, 15].

Given a scenario and fixed values for \(m\) and \(T\), 200 simulations were executed. In each trial, the fuzzy \(C\)-medoids algorithm based on \(\widehat{d}_{1}\), \(\widehat{d}_{2}\), \(\widehat{d}_{PMF}\), \(\widehat{d}_{ACF}\) and \(\widehat{d}_{QAF}\) was applied with each value of \(m\) as input. The number of clusters was set to \(C=4\). The collection of lags was \(\mathcal{L}=\{1,2\}\) in Scenarios 1 and 2 and \(\mathcal{L}=\{1\}\) in Scenario 3, thus considering the maximum number of lags at each scenario. The same lags were used to obtain the alternative dissimilarities, e.g. \(\widehat{d}_{ACF}\) employed the two first autocorrelations in Scenarios 1 and 2. Concerning the distance \(\widehat{d}_{QAF}\), several sets of probability levels were independently considered for its computation, namely \(\mathcal{T}_{1}=\{0.1,0.5,0.9\}\), \(\mathcal{T}_{2}=\{0.3,0.5,0.7\}\) and \(\mathcal{T}_{3}=\{0.4,0.8\}\).

Clustering accuracy was assessed using the fuzzy extensions of the Adjusted Rand Index (ARIF) and the Jaccard Index (JIF) introduced by [45]. Both indexes are obtained by reformulating the original ones in terms of fuzzy set theory, which allows one to compare the true (hard) partition with an experimental fuzzy partition. ARIF and JIF take values in the intervals \([-1,1]\) and \([0,1]\), respectively, with values closer to 1 indicating a more accurate clustering solution. Note that the Bayesian method of [25] (\(\widehat{d}_{MC}\)) can be seen as a soft clustering procedure by treating the posterior probabilities as membership degrees.
However, this approach does not involve the fuzziness parameter \(m\) and, consequently, our results for \(\widehat{d}_{MC}\) only include one value of ARIF and JIF for a given series length.

The average values of ARIF and JIF based on the 200 simulation trials are shown in Table 2, for all metrics except for \(\widehat{d}_{MC}\), and in Table 3, for \(\widehat{d}_{MC}\). Concerning \(\widehat{d}_{QAF}\), it is important to notice that only the highest ARIF and JIF are presented, regardless of the employed probability levels. From Table 2, it is concluded that all distances decrease their performance when increasing the value of \(m\). This is reasonable and expected, since larger values of \(m\) produce a smoother boundary between the four well-separated clusters, thus making the classification fuzzier and decreasing the values of ARIF and JIF.

In Scenario 1, \(\widehat{d}_{1}\) and \(\widehat{d}_{2}\) show the best performance regardless of \(m\) and \(T\), with similar average values for both clustering quality indices. While the quantile-based distance \(\widehat{d}_{QAF}\) displays the worst results in this scenario, \(\widehat{d}_{PMF}\) and \(\widehat{d}_{ACF}\) also exhibit a high clustering effectiveness. Indeed, a suitable behaviour of \(\widehat{d}_{ACF}\) is expected here because the generating processes in Scenario 1 have very different autocorrelations (note that, e.g., the lag-1 autocorrelation for a binomial AR(1) process is \(\alpha-\beta\)). The proposed distances \(\widehat{d}_{1}\) and \(\widehat{d}_{2}\) attain the best average scores in Scenario 2, significantly outperforming the remaining metrics in most of the considered settings. However, their performance decreases with respect to Scenario 1. This is coherent with the 2DS plots in Figure 2, where both metrics seem to clearly identify the true clustering structure in Scenario 1 while struggling to distinguish between clusters \(\mathcal{C}_{1}\) and \(\mathcal{C}_{4}\) in Scenario 2. Metrics \(\widehat{d}_{1}\) and \(\widehat{d}_{2}\) are also the best-performing ones in Scenario 3.
The autocorrelation-based metric \(\widehat{d}_{ACF}\) produces very inaccurate clustering partitions in Scenarios 2 and 3, which indicates a limited ability of the autocorrelation function to discriminate between the generating processes considered in these scenarios. Furthermore, \(\widehat{d}_{QAF}\) always shows a worse behaviour than the proposed distances, thus suggesting that the treatment of OTS as count time series is not advantageous for clustering purposes. As expected, all dissimilarities improve their performance when increasing the series length, although this improvement is generally less pronounced in Scenario 2.

According to Table 3, the Bayesian clustering approach (\(\widehat{d}_{MC}\)) attains moderate scores in the three scenarios, but its performance does not improve when increasing the series length. As \(\widehat{d}_{MC}\) does not require the fuzziness parameter \(m\), a direct comparison with the results in Table 2 is not possible. However, since the considered scenarios are formed by well-defined clusters (i.e. the underlying clustering structure is a hard partition), it is reasonable to compare the partitions based on \(\widehat{d}_{MC}\) with the ones generated by the remaining techniques when \(m=1.2\) (or even when \(m\) is lower than \(1.2\)). Thus, one could state that the proposed metrics \(\widehat{d}_{1}\) and \(\widehat{d}_{2}\) significantly outperform the Bayesian approach in most cases.

In order to provide a more comprehensive evaluation of the proposed clustering methods, we designed two more challenging setups, Scenarios 4 and 5, where the complexity of the original experiments is increased.

**Scenario 4**. It consists of six clusters, \(\mathcal{C}_{1},\mathcal{C}_{2},\dots,\mathcal{C}_{6}\), such that \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\) are defined as the first and second clusters in Scenario 1, respectively, \(\mathcal{C}_{3}\) is defined as the last cluster in Scenario 3, and \(\mathcal{C}_{4}\), \(\mathcal{C}_{5}\) and \(\mathcal{C}_{6}\) are binomial INARCH(3) models (see (24)) with vectors of coefficients given by

\[\begin{array}{ll}\mathcal{C}_{4}:&(\alpha_{1},\alpha_{2},\alpha_{3},\beta)=(0.1,0.3,0.2,0.2)\\ \mathcal{C}_{5}:&(\alpha_{1},\alpha_{2},\alpha_{3},\beta)=(0.1,0.2,0.3,0.2)\\ \mathcal{C}_{6}:&(\alpha_{1},\alpha_{2},\alpha_{3},\beta)=(0.1,0.25,0.25,0.2)\end{array}\]

Simulations in Scenario 4 were carried out by setting \(C=6\) and \(\mathcal{L}=\{1,2,3\}\), but selecting the remaining inputs (series length, series per cluster,...) in the same manner as in Scenarios 1-3.
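As an illustration of the generating mechanism behind \(\mathcal{C}_{4}\)-\(\mathcal{C}_{6}\), the sketch below simulates a binomial INARCH(\(p\)) count series following (24); the burn-in length and start-up values are arbitrary choices of ours:

```python
import numpy as np

def simulate_binomial_inarch(T, n, alphas, beta, burn_in=200, seed=0):
    """Simulate a binomial INARCH(p) count process as in (24)."""
    rng = np.random.default_rng(seed)
    c = list(rng.integers(0, n + 1, size=len(alphas)))  # arbitrary start-up values
    for _ in range(burn_in + T):
        # Success probability beta + (1/n) * sum_i alpha_i * C_{t-i}, which stays
        # inside (0,1) whenever beta and beta + sum_i alpha_i lie in (0,1)
        prob = beta + sum(a * c[-(i + 1)] for i, a in enumerate(alphas)) / n
        c.append(rng.binomial(n, prob))
    return np.array(c[-T:])  # discard the burn-in period

# Cluster C_4 of Scenario 4: (alpha_1, alpha_2, alpha_3, beta) = (0.1, 0.3, 0.2, 0.2)
series = simulate_binomial_inarch(T=200, n=5, alphas=[0.1, 0.3, 0.2], beta=0.2)
```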
Compared to the above scenarios, Scenario 4 is clearly more complex: (i) there is a larger number of clusters, (ii) three different types of ordinal processes are involved, and (iii) the processes behind \(\mathcal{C}_{4}\), \(\mathcal{C}_{5}\) and \(\mathcal{C}_{6}\) have identical marginal and one-lagged bivariate distributions, so the series in these clusters can be well-located only by analysing higher-order dependencies. However, Scenario 4 still considers five series per cluster, a range with six categories (\(n=5\)), and \(T\in\{200,600\}\). In order to assess the performance of the different methods when varying the value of these parameters, we consider a second additional setup as described below.

**Scenario 5**. The following random mechanism is incorporated into Scenario 4. At each simulation trial, the value of \(n\) defining the range \(\{s_{0},\dots,s_{n}\}\), the number of series in the \(i\)th cluster, \(i=1,2,\dots,6\), and the length of each series, are randomly selected with equiprobability from the sets \(\{1,2,\dots,10\}\), \(\{2,3,\dots,10\}\) and \(\{100,200,\dots,500\}\), respectively.

In sum, Scenario 5 defines a challenging setting, which inherits the complexity of Scenario 4 besides giving rise to instances with unequal series lengths and cluster sizes. Note that a different true partition must be considered at each trial to compute the values of ARIF and JIF.

The average results for these new scenarios are provided in Tables 4 and 5. In Scenario 4, \(\widehat{d}_{QAF}\) attains the worst average scores for all \(m\) and \(T\). In contrast, the proposed metrics again yield the best results, slightly improving the scores based on \(\widehat{d}_{ACF}\) and somewhat more sharply the ones obtained by \(\widehat{d}_{PMF}\), especially for large values of \(m\). Overall, all the dissimilarities decrease their performance with respect to Scenarios 1, 2 and 3, which is reasonable due to the higher degree of complexity in Scenario 4. Average scores in Scenario 5 lead to similar conclusions. The Bayesian procedure \(\widehat{d}_{MC}\) obtains moderate scores in Scenario 4, but displays a very poor performance in Scenario 5, where it gets negatively affected by the high level of variability of this scenario.

Overall, the previous analyses showed the great performance of \(\widehat{d}_{1}\) and \(\widehat{d}_{2}\) in clustering OTS when the true partition is formed by well-separated clusters.

\begin{table}
\begin{tabular}{l c c|c c}
\hline \hline
 & \multicolumn{2}{c|}{ARIF} & \multicolumn{2}{c}{JIF} \\
\hline
Scenario 4 & \(T=200\) & \(T=600\) & \(T=200\) & \(T=600\) \\
 & 0.447 & 0.434 & 0.384 & 0.374 \\
\hline
Scenario 5 & \multicolumn{2}{c|}{Variable \(T\)} & \multicolumn{2}{c}{Variable \(T\)} \\
 & \multicolumn{2}{c|}{0.160} & \multicolumn{2}{c}{0.230} \\
\hline \hline
\end{tabular}
\end{table}
Table 5: Average values of ARIF and JIF obtained by the fuzzy \(C\)-medoids clustering algorithm based on \(\widehat{d}_{MC}\). Scenarios 4 and 5.
The superiority of both metrics with respect to distances ignoring the ordinal nature of the series (\(\widehat{d}_{PMF}\) and \(\widehat{d}_{MC}\)) and classical metrics in clustering of real-valued time series (\(\widehat{d}_{ACF}\) and \(\widehat{d}_{QAF}\)) was corroborated in scenarios characterized by well-known types of ordinal processes and different degrees of complexity. This highlights the importance of constructing dissimilarities specifically designed to deal with ordinal series.

#### 4.2.2 Second assessment scheme

A second simulation experiment was conducted to analyze the effect of isolated series, whose presence introduces a certain degree of ambiguity and increases the fuzzy nature of the clustering task. Two new scenarios consisting of two well-separated clusters of 5 OTS each and a single isolated series arising from a different process are defined as follows.

**Scenario 6**. A set of 11 OTS, where five series (cluster \(\mathcal{C}_{1}\)) are generated from a binomial AR(1) process with coefficients \((\alpha,\beta)=(0.52,0.12)\), five series (cluster \(\mathcal{C}_{2}\)) come from a binomial AR(2) process with coefficients \((\alpha,\beta,\phi_{1},\phi_{2})=(0.42,0.07,0.1,0.9)\), and one isolated series is generated from an ordinal logit AR(1) model with vectors of coefficients given by \(\boldsymbol{\eta}=(-2,-1,0,1,2)\) and \(\boldsymbol{\alpha}=(0.5,-1,1.5,2,2.5)^{\top}\).

**Scenario 7**. Defined in the same way as Scenario 6, but with different generating models for clusters \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\). Here, \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\) are formed by OTS generated from binomial INARCH(2) models with vectors of coefficients \((\alpha_{1},\alpha_{2},\beta)=(0.1,0.1,0.1)\) and \((\alpha_{1},\alpha_{2},\beta)=(0.5,0.1,0.1)\), respectively.

The values for \(n\), \(T\), and the number of simulation trials were fixed as in Scenarios 1-3. The number of clusters and the collection of lags were set to \(C=2\) and \(\mathcal{L}=\{1,2\}\), respectively. Assessment was performed in a different way here. We computed the proportion of times that: the five series from \(\mathcal{C}_{1}\) were grouped together in one cluster, the five series from \(\mathcal{C}_{2}\) were clustered together in another cluster, and the isolated series had a relatively high membership degree with respect to each of the groups. To this aim, a cutoff point must be determined to conclude when a series is assigned to a specific cluster. We decided to use the cutoff value of 0.7, i.e. the \(i\)th OTS was placed into the \(c\)th cluster if \(u_{ic}>0.7\). On the contrary, a time series was considered to belong simultaneously to both clusters if its membership degrees were both below 0.7. The use of a cutoff value to assess fuzzy clustering algorithms has already been considered in prior works [12; 13; 16] (arguments for this choice are given in [12]).
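For concreteness, the cutoff-based assignment rule can be written as the following small helper (a sketch of ours, not code from the paper):

```python
import numpy as np

def assign_with_cutoff(U, cutoff=0.7):
    """Map an s x C membership matrix U to hard labels via a cutoff.

    A series is assigned to the cluster whose membership exceeds the
    cutoff; the label -1 marks series (such as the isolated one) whose
    memberships all stay below the cutoff.
    """
    labels = np.full(U.shape[0], -1)
    assigned = U.max(axis=1) > cutoff
    labels[assigned] = U.argmax(axis=1)[assigned]
    return labels

# Example: with C = 2 clusters and cutoff 0.7, a series with memberships
# (0.55, 0.45) is left unassigned, mimicking an isolated series
print(assign_with_cutoff(np.array([[0.9, 0.1], [0.55, 0.45]])))  # [0, -1]
```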
Note that this evaluation criterion is very sensitive to the selection of \(m\), since a single series with membership degrees failing to fulfil the required condition results in an incorrect classification. In fact, the different metrics could achieve their best behaviour for rather different values of \(m\). For this reason, we decided to run the clustering algorithms for a grid of values for \(m\) on the interval \((1,4]\). Figure 3 contains the curves of rates of correct classification as a function of \(m\) for \(\widehat{d}_{1}\), \(\widehat{d}_{2}\), \(\widehat{d}_{PMF}\), \(\widehat{d}_{ACF}\) and \(\widehat{d}_{QAF}\). The approach based on \(\widehat{d}_{MC}\) showed a very poor performance and its results are omitted here.

Figure 3: Rates of correct classification as a function of \(m\) obtained by the fuzzy \(C\)-medoids clustering algorithm based on several dissimilarities for a cutoff of 0.7. Scenarios 6 and 7.

Plots in Figure 3 confirm that the fuzziness parameter dramatically affects the clustering performance. In all cases, low and high values of \(m\) produce poor rates of correct classification, since partitions with all memberships close to 1 or to 1/2, respectively, are generated, thus resulting in failed trials. By contrast, moderate values of \(m\) generally result in higher clustering effectiveness, although the optimal range varies for each distance. In Scenario 6, the proposed distances \(\widehat{d}_{1}\) and \(\widehat{d}_{2}\) attain the best results, outperforming the alternative metrics for most values of \(m\), especially when \(T=600\). Metric \(\widehat{d}_{PMF}\) also shows a high clustering accuracy in this scenario, while \(\widehat{d}_{ACF}\) and \(\widehat{d}_{QAF}\) exhibit worse behaviour, particularly when \(T=200\). Distance \(\widehat{d}_{2}\) clearly leads to the best-performing approach in Scenario 7 when \(T=200\). A different situation happens when \(T=600\), with \(\widehat{d}_{1}\), \(\widehat{d}_{2}\), and \(\widehat{d}_{PMF}\) attaining high scores for several values of \(m\). The quantile-based metric \(\widehat{d}_{QAF}\) behaves very poorly in this scenario. In all cases, increasing the series length results in better rates of correct classification. Note that our results account for the importance of a suitable selection of \(m\), although this issue is not addressed here because there are several procedures available in the literature for this purpose.

Rigorous comparisons based on Figure 3 can be made by computing: (i) the maximum value of each curve, and (ii) the area under each curve, denoted by AUFC (area under the fuzziness curve), which was already used by [16]. The values for both quantities are given in Table 6 and clearly corroborate the great performance of \(\widehat{d}_{1}\) and \(\widehat{d}_{2}\) when dealing with data sets including series whose dynamic pattern does not belong to one specific group. In terms of AUFC, both metrics substantially outperform the rest in all cases.

\begin{table}
\begin{tabular}{l l c c c c c}
\hline \hline
Scenario 6 & & \(\widehat{d}_{1}\) & \(\widehat{d}_{2}\) & \(\widehat{d}_{PMF}\) & \(\widehat{d}_{ACF}\) & \(\widehat{d}_{QAF}\) \\
\hline
\(T=200\) & Maximum & **0.76** & 0.73 & 0.62 & 0.14 & 0.25 \\
 & AUFC & **0.63** & 0.53 & 0.40 & 0.08 & 0.19 \\
\hline
\(T=600\) & Maximum & 0.99 & **1.00** & 0.97 & 0.69 & 0.75 \\
 & AUFC & **1.34** & 1.31 & 1.06 & 0.54 & 0.63 \\
\hline \hline
Scenario 7 & & \(\widehat{d}_{1}\) & \(\widehat{d}_{2}\) & \(\widehat{d}_{PMF}\) & \(\widehat{d}_{ACF}\) & \(\widehat{d}_{QAF}\) \\
\hline
\(T=200\) & Maximum & 0.38 & **0.75** & 0.35 & 0.19 & 0.03 \\
 & AUFC & 0.24 & **0.43** & 0.14 & 0.11 & 0.01 \\
\hline
\(T=600\) & Maximum & 0.92 & **0.98** & 0.88 & 0.78 & 0.18 \\
 & AUFC & **0.86** & 0.79 & 0.62 & 0.61 & 0.12 \\
\hline \hline
\end{tabular}
\end{table}
Table 6: Maximum rates of correct classification and AUFC obtained by the fuzzy \(C\)-medoids clustering algorithm based on several distances for a cutoff value of 0.7. Scenarios 6 and 7. The best results are shown in bold.

The next step was to analyse the effect of the selected cutoff. A higher cutoff relaxes the condition for the membership degrees of the isolated series in order to be correctly classified, but establishes harder requirements for the membership degrees of the remaining series. The opposite happens with a lower cutoff. Thus, the experiments were repeated by fixing the cutoff values at 0.8 and 0.6, and the results are shown in Table 7.
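The AUFC used above amounts to numerically integrating a correct-classification curve over the grid of \(m\) values; a minimal sketch follows (the grid and the curve are invented for illustration only):

```python
import numpy as np

# Hypothetical grid of fuzziness values on (1, 4] and illustrative rates
m_grid = np.linspace(1.1, 4.0, 30)
rates = np.exp(-((m_grid - 1.9) ** 2) / 0.3)  # a bump-shaped curve, for example

# Area under the fuzziness curve (AUFC) via the trapezoidal rule
aufc = np.sum(np.diff(m_grid) * (rates[:-1] + rates[1:]) / 2)
maximum = rates.max()  # maximum rate of correct classification
```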
For all metrics, the maximum rates of correct classification are very similar to the ones obtained in Table 6 with the cutoff at 0.7, but the AUFC values are dramatically different in most cases, thus accounting for the heavy influence of the cutoff in the evaluation mechanism. Specifically, lower and higher values are obtained when 0.8 and 0.6 are respectively used as the cutoff. In any case, the proposed distances still outperform the alternative metrics in all settings.

To better understand the influence of the cutoff, we fixed Scenario 6, the distance \(\widehat{d}_{1}\) and \(T=200\), and then examined the rates of correct classification with respect to \(m\) for the cutoff values 0.6, 0.7 and 0.8. The obtained curves are displayed in Figure 4. It is observed that the larger the cutoff value, the more concentrated and shifted to the left is the corresponding curve, which can be explained as follows. Low values of \(m\) imply that the maximum membership degree is close to one for all series, thus making it easier to exceed the cutoff value, which in turn leads to misclassifying the isolated series and correctly classifying the series in the regular clusters. Indeed, if a high cutoff (e.g. 0.8) is used, then the isolated series are still well-classified for small values of \(m\), but they would be misclassified in many trials if a smaller cutoff (e.g. 0.7 or 0.6) is used or when \(m\) increases. By contrast, moderate and large values of \(m\) progressively move the membership degrees towards 0.5, thus producing the opposite effect: failures with series in regular clusters and successes with the isolated series. However, it is worth remarking that even large values of \(m\) frequently generate maximum membership degrees above 0.6 for the non-isolated series, which justifies that the curve for a cutoff of 0.6 is nonzero for a much broader range of values of \(m\) and yields a rather large value for the corresponding AUFC. Additional experiments showed that the situation illustrated in Figure 4 is also observed for alternative values of \(T\), in Scenario 7, and with the distance \(\widehat{d}_{2}\).

In sum, the experiments from this section showed the clustering effectiveness of the proposed metrics also when series with a certain level of ambiguity are included in the data set subjected to clustering.
Furthermore, the higher values of the AUFC attained by both \(\widehat{d}_{1}\) and \(\widehat{d}_{2}\) with respect to the alternative distances indicate a greater robustness of these distances to the choice of \(m\). This is a nice property since the optimal selection of this parameter is still an open problem in the fuzzy clustering literature.

\begin{table}
\begin{tabular}{l l c c c c c}
\hline \hline
Scenario 6 & & \(\widehat{d}_{1}\) & \(\widehat{d}_{2}\) & \(\widehat{d}_{PMF}\) & \(\widehat{d}_{ACF}\) & \(\widehat{d}_{QAF}\) \\
\hline
\(T=200\) & Maximum & **0.79 (0.77)** & 0.78 (0.73) & 0.61 (0.62) & 0.13 (0.17) & 0.25 (0.28) \\
 & AUFC & **0.37 (1.23)** & 0.32 (1.03) & 0.24 (0.80) & 0.05 (0.19) & 0.11 (0.38) \\
\hline
\(T=600\) & Maximum & **1.00 (1.00)** & **1.00 (1.00)** & **1.00 (1.00)** & 0.66 (0.73) & 0.73 (0.74) \\
 & AUFC & **0.85 (2.78)** & 0.80 (2.67) & 0.67 (2.19) & 0.32 (1.09) & 0.40 (1.29) \\
\hline \hline
Scenario 7 & & \(\widehat{d}_{1}\) & \(\widehat{d}_{2}\) & \(\widehat{d}_{PMF}\) & \(\widehat{d}_{ACF}\) & \(\widehat{d}_{QAF}\) \\
\hline
\(T=200\) & Maximum & 0.40 (0.47) & **0.70 (0.72)** & 0.29 (0.36) & 0.21 (0.23) & 0.02 (0.01) \\
 & AUFC & 0.17 (0.56) & **0.26 (0.85)** & 0.08 (0.30) & 0.08 (0.26) & 0.01 (0.02) \\
\hline
\(T=600\) & Maximum & 0.86 (0.92) & **0.98 (1.00)** & 0.88 (0.93) & 0.72 (0.76) & 0.21 (0.23) \\
 & AUFC & **0.51 (1.76)** & 0.48 (1.66) & 0.39 (1.31) & 0.34 (1.23) & 0.07 (0.26) \\
\hline \hline
\end{tabular}
\end{table}
Table 7: Maximum rates of correct classification and AUFC obtained by the fuzzy \(C\)-medoids clustering algorithm based on several dissimilarities for cutoff values of 0.8 and 0.6 (in brackets). Scenarios 6 and 7. The best results are shown in bold.

Figure 4: Rates of correct classification (as a function of \(m\)) obtained by the fuzzy \(C\)-medoids clustering algorithm based on \(\widehat{d}_{1}\) for cutoff values of 0.6 (dashed line), 0.7 (solid line) and 0.8 (dotted line). Scenario 6 with \(T=200\).

### Evaluation of the weighted approach

The weighted fuzzy \(C\)-medoids model based on the proposed distances (see (19)) was evaluated by considering the same scenarios, simulation parameters, and performance measures as in Sections 4.2.1 and 4.2.2. The corresponding average results are shown in Table 8. For the sake of simplicity and homogeneity, we decided to show only the values of the ARIF index for Scenarios 1 to 5, and the rates of correct classification associated with \(m\in\{1.2,1.4,1.6,1.8,2\}\) for Scenarios 6 and 7. The weighted versions using \(\widehat{d}_{1}\) and \(\widehat{d}_{2}\) are denoted by \(\widehat{d}_{1,W}\) and \(\widehat{d}_{2,W}\), respectively.

To rigorously compare the weighted and non-weighted approaches, statistical tests based on the 200 trials were carried out, namely the Wilcoxon signed-rank test in Scenarios 1-5 and the McNemar test to compare two proportions in Scenarios 6 and 7. The tests were executed for each combination of scenario, metric, and values of \(m\) and \(T\), by considering paired-sample data and applying Bonferroni corrections for multiple comparisons. As regards the Wilcoxon signed-rank test, the alternative hypothesis stated that the average of the ARIF differences between the weighted and non-weighted versions is greater than 0.03. Asterisks in Table 8 indicate significant results at level \(\alpha=0.05\). According to Table 8, the weighted fuzzy \(C\)-medoids model based on \(\widehat{d}_{1}\) achieves higher average scores than its non-weighted counterpart in some settings (e.g., Scenario 3 with \(T=600\)), but the differences are always non-significant.
This is because the groups in Scenarios 1-7 can be distinguished by both the marginal distributions and the serial patterns. Thus, given that the marginal features can be directly obtained from the joint ones, it is expected that \(\widehat{d}_{1,W}\) and \(\widehat{d}_{1}\) show similar discriminatory power for any value of \(\beta\). A better performance of the weighted approach is expected when clusters have similar marginal distributions but different dependence patterns. In fact, the strongest improvements (although not significant) are given in Scenario 3, whose clusters present the highest amount of similarity between marginal distributions. By contrast, the weighted algorithm based on \(\widehat{d}_{2}\) yields significant improvements in several cases, especially for moderate to large values of \(m\). In fact, it leads to the highest scores among the four proposed distances (\(\widehat{d}_{1}\), \(\widehat{d}_{2}\), \(\widehat{d}_{1,W}\), and \(\widehat{d}_{2,W}\)) in some settings (e.g., Scenario 3 with \(T=200\)). Note that, as expected, giving weights to the marginal and bivariate components of \(\widehat{d}_{1}\) and \(\widehat{d}_{2}\) never results in a worse performance. Concerning Scenarios 1-5, similar results were obtained by using JIF to assess the clustering quality.

Boxplots in Figure 5 show the distribution of the weights \(\beta\) returned by the algorithm based on \(\widehat{d}_{2,W}\) in Scenarios 1-3 with \(m=2\) (where the weighted approach outperforms the standard one). In Scenario 1, the algorithm usually leads to values of \(\beta\) below 0.5, indicating that the bivariate component, \(\widehat{d}_{2,B}\), plays a more important role than the marginal one, \(\widehat{d}_{2,M}\). Given that the four processes in Scenario 1 clearly differ in both marginal and serial dependence structures, the lower weight received by \(\widehat{d}_{2,M}\) is explained by the four terms defining this component, while \(\widehat{d}_{2,B}\) contributes only two terms. On the contrary, \(\beta\) takes values above \(0.5\) in Scenarios 2 and 3. In Scenario 2, the higher weight for \(\widehat{d}_{2,M}\) is justified by the fact that the four groups have different marginal features, while the serial features \(\widehat{\kappa}_{d_{\mathrm{o},1}}(1)\) and \(\widehat{\kappa}_{d_{\mathrm{o},1}}(2)\) take similar values on the pairs of clusters \((\mathcal{C}_{1},\mathcal{C}_{2})\) and \((\mathcal{C}_{3},\mathcal{C}_{4})\). An analogous situation happens in Scenario 3, albeit to a lesser extent. The low variability of \(\beta\) in all settings is also noticeable. For instance, in Scenario 2 with \(T=200\), \(\beta\) lies between \(0.7\) and \(0.8\) in more than \(50\%\) of the trials, which indicates that the optimal weight is approximated with high accuracy.
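As an illustration of how the weighted scheme operates, the following sketch implements one update step using the closed-form expressions derived in the Appendix [Eqs. (37) and (40)]; the data layout (precomputed series-to-medoid distance components) and the function names are our own:

```python
import numpy as np

def update_memberships(D_M, D_B, beta, m):
    """D_M, D_B: (n_series, C) marginal and serial distance components to
    the current medoids. Eq. (37): each membership is proportional to the
    combined distance raised to -1/(m-1), normalized over the clusters."""
    D = np.maximum(beta**2 * D_M + (1 - beta)**2 * D_B, 1e-12)  # guard zeros
    inv = D ** (-1.0 / (m - 1.0))
    return inv / inv.sum(axis=1, keepdims=True)

def update_beta(U, D_M, D_B, m):
    """Eq. (40): closed-form update of the weight beta."""
    W = U ** m
    return (W * D_B).sum() / (W * (D_M + D_B)).sum()
```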
Although not shown in the article for the sake of simplicity, similar boxplots are obtained for other values of \(m\).

Figure 5: Distribution of the final value for \(\beta\) produced by the weighted fuzzy \(C\)-medoids algorithm based on \(\widehat{d}_{2}\). Scenarios 1, 2 and 3 with \(m=2\).

\begin{table}
\begin{tabular}{c c c c c c c}
\hline \hline
Scenario 1 & & \(m=1.2\) & \(m=1.4\) & \(m=1.6\) & \(m=1.8\) & \(m=2\) \\
\hline
\(T=200\) & \(\widehat{d}_{1,W}\) & 0.67 & 0.58 & 0.49 & 0.40 & 0.35 \\
 & \(\widehat{d}_{2,W}\) & \(0.81^{*}\) & \(0.69^{*}\) & \(0.58^{*}\) & \(0.47^{*}\) & \(0.40^{*}\) \\
\hline
\(T=600\) & \(\widehat{d}_{1,W}\) & 0.93 & 0.85 & 0.76 & 0.64 & 0.55 \\
 & \(\widehat{d}_{2,W}\) & 0.96 & \(0.89^{*}\) & \(0.79^{*}\) & \(0.69^{*}\) & \(0.59^{*}\) \\
\hline
Scenario 2 & & \(m=1.2\) & \(m=1.4\) & \(m=1.6\) & \(m=1.8\) & \(m=2\) \\
\hline
\(T=200\) & \(\widehat{d}_{1,W}\) & 0.57 & 0.54 & 0.47 & 0.41 & 0.36 \\
 & \(\widehat{d}_{2,W}\) & 0.60 & 0.56 & \(0.49^{*}\) & \(0.43^{*}\) & \(0.37^{*}\) \\
\hline
\(T=600\) & \(\widehat{d}_{1,W}\) & 0.70 & 0.64 & 0.59 & 0.52 & 0.47 \\
 & \(\widehat{d}_{2,W}\) & 0.72 & 0.66 & 0.60 & 0.54 & \(0.49^{*}\) \\
\hline
Scenario 3 & & \(m=1.2\) & \(m=1.4\) & \(m=1.6\) & \(m=1.8\) & \(m=2\) \\
\hline
\(T=200\) & \(\widehat{d}_{1,W}\) & 0.69 & 0.56 & 0.44 & 0.35 & 0.29 \\
 & \(\widehat{d}_{2,W}\) & \(0.67^{*}\) & \(0.64^{*}\) & \(0.55^{*}\) & \(0.45^{*}\) & \(0.38^{*}\) \\
\hline
\(T=600\) & \(\widehat{d}_{1,W}\) & 0.92 & 0.81 & 0.68 & 0.56 & 0.47 \\
 & \(\widehat{d}_{2,W}\) & 0.81 & \(0.76^{*}\) & \(0.68^{*}\) & \(0.59^{*}\) & \(0.51^{*}\) \\
\hline
Scenario 4 & & \(m=1.2\) & \(m=1.4\) & \(m=1.6\) & \(m=1.8\) & \(m=2\) \\
\hline
\(T=200\) & \(\widehat{d}_{1,W}\) & 0.51 & 0.46 & 0.38 & 0.31 & 0.25 \\
 & \(\widehat{d}_{2,W}\) & 0.53 & 0.47 & \(0.40^{*}\) & \(0.35^{*}\) & 0.25 \\
\hline
\(T=600\) & \(\widehat{d}_{1,W}\) & 0.59 & 0.57 & 0.50 & 0.43 & 0.36 \\
 & \(\widehat{d}_{2,W}\) & 0.61 & 0.57 & \(0.53^{*}\) & \(0.45^{*}\) & 0.36 \\
\hline
Scenario 5 & & \(m=1.2\) & \(m=1.4\) & \(m=1.6\) & \(m=1.8\) & \(m=2\) \\
\hline
Variable \(T\) & \(\widehat{d}_{1,W}\) & 0.52 & 0.49 & 0.44 & 0.35 & 0.30 \\
 & \(\widehat{d}_{2,W}\) & 0.56 & 0.50 & \(0.47^{*}\) & \(0.39^{*}\) & 0.29 \\
\hline
Scenario 6 & & \(m=1.2\) & \(m=1.4\) & \(m=1.6\) & \(m=1.8\) & \(m=2\) \\
\hline
\(T=200\) & \(\widehat{d}_{1,W}\) & 0.00 & 0.14 & 0.57 & 0.57 & 0.42 \\
 & \(\widehat{d}_{2,W}\) & 0.05 & \(0.36^{*}\) & 0.61 & 0.54 & \(0.45^{*}\) \\
\hline
\(T=600\) & \(\widehat{d}_{1,W}\) & 0.00 & 0.06 & 0.80 & 0.81 & 0.81 \\
 & \(\widehat{d}_{2,W}\) & \(0.08^{*}\) & \(0.62^{*}\) & \(0.90^{*}\) & 0.76 & 0.75 \\
\hline
Scenario 7 & & \(m=1.2\) & \(m=1.4\) & \(m=1.6\) & \(m=1.8\) & \(m=2\) \\
\hline
\(T=200\) & \(\widehat{d}_{1,W}\) & 0.00 & 0.02 & 0.13 & 0.35 & 0.37 \\
 & \(\widehat{d}_{2,W}\) & \(0.25^{*}\) & \(0.63^{*}\) & 0.70 & \(0.55^{*}\) & \(0.31^{*}\) \\
\hline
\(T=600\) & \(\widehat{d}_{1,W}\) & 0.00 & 0.01 & 0.08 & 0.88 & 0.85 \\
 & \(\widehat{d}_{2,W}\) & \(0.26^{*}\) & 0.76 & 0.95 & \(0.89^{*}\) & \(0.71^{*}\) \\
\hline \hline
\end{tabular}
\end{table}
Table 8: Average values of ARIF (Scenarios 1 to 5) and rates of correct classification (Scenarios 6 and 7) obtained by the weighted fuzzy \(C\)-medoids clustering algorithm based on \(\widehat{d}_{1}\) and \(\widehat{d}_{2}\). An asterisk indicates that the weighted approach is significantly better than its non-weighted counterpart at a significance level \(\alpha=0.05\).
### Analysing clustering effectiveness with respect to selected lags

To examine the effect of a misspecification of the set of lags \(\mathcal{L}\) required to compute \(\widehat{d}_{1}\) and \(\widehat{d}_{2}\), a sensitivity analysis was performed by considering Scenarios 1-3 and five different collections of lags, namely \(\mathcal{L}_{i}=\{1,2,\ldots,i\}\), for \(i=1,\ldots,5\). The average ARIF values attained with \(\widehat{d}_{1}\) and \(\widehat{d}_{2}\) are given in Tables 9 and 10, respectively. For the sake of simplicity, only the results for \(m=1.6\) are presented. The theoretical set of lags at each scenario is given in brackets.

Results in Table 9 suggest that the metric \(\widehat{d}_{1}\) is clearly robust to the choice of \(\mathcal{L}\). In fact, in all scenarios and for both values of \(T\), no significant changes are observed in the average scores of the ARIF. A similar conclusion follows from Table 10 for \(\widehat{d}_{2}\) in Scenario 1. However, in Scenarios 2 and 3, \(\widehat{d}_{2}\) slightly decreases its performance as more lags are added to \(\mathcal{L}\); i.e., including unnecessary features (noise) in the time series representation negatively affects the clustering performance. Note that \(\widehat{d}_{2}\) achieves the highest ARIF in Scenario 2 with \(\mathcal{L}_{1}\), although \(\mathcal{L}_{2}\) is here the theoretical situation. This is because \(\kappa_{d_{o,1}}(2)\) takes similar values for clusters \(\mathcal{C}_{3}\) and \(\mathcal{C}_{4}\) in this scenario. Thus, including \(\widehat{\kappa}_{d_{o,1}}(2)\) helps to separate \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\) from \(\mathcal{C}_{3}\) and \(\mathcal{C}_{4}\), but makes it harder to distinguish between the series of clusters \(\mathcal{C}_{3}\) and \(\mathcal{C}_{4}\), which results in a lower accuracy. In sum, even for \(\widehat{d}_{2}\), small deviations from the nominal lag order do not have a substantial impact on the clustering accuracy.

\begin{table}
\begin{tabular}{l c c c c c c}
\hline
 & \multicolumn{3}{c}{\(T=200\)} & \multicolumn{3}{c}{\(T=600\)} \\
\cline{2-7}
Set & S1 (\(\mathcal{L}_{2}\)) & S2 (\(\mathcal{L}_{2}\)) & S3 (\(\mathcal{L}_{1}\)) & S1 (\(\mathcal{L}_{2}\)) & S2 (\(\mathcal{L}_{2}\)) & S3 (\(\mathcal{L}_{1}\)) \\
\hline
\(\mathcal{L}_{1}\) & 0.47 & 0.46 & 0.40 & 0.75 & 0.59 & 0.64 \\
\(\mathcal{L}_{2}\) & 0.49 & 0.48 & 0.39 & 0.75 & 0.59 & 0.63 \\
\(\mathcal{L}_{3}\) & 0.49 & 0.47 & 0.39 & 0.76 & 0.59 & 0.63 \\
\(\mathcal{L}_{4}\) & 0.49 & 0.47 & 0.39 & 0.75 & 0.58 & 0.62 \\
\(\mathcal{L}_{5}\) & 0.49 & 0.47 & 0.39 & 0.76 & 0.58 & 0.62 \\
\hline
\end{tabular}
\end{table}
Table 9: Average ARIF based on \(\widehat{d}_{1}\) for \(m=1.6\) and different sets of lags (\(\mathcal{L}_{j}=\{1,\ldots,j\},\ j=1,\ldots,5\)) in Scenarios 1, 2 and 3, denoted by S1, S2 and S3, respectively. The theoretical set of lags at each scenario is indicated in brackets.
\begin{table}
\begin{tabular}{l c c c c c c}
\hline \hline
 & \multicolumn{3}{c}{\(T=200\)} & \multicolumn{3}{c}{\(T=600\)} \\
\cline{2-7}
Set & S1 (\(\mathcal{L}_{2}\)) & S2 (\(\mathcal{L}_{2}\)) & S3 (\(\mathcal{L}_{1}\)) & S1 (\(\mathcal{L}_{2}\)) & S2 (\(\mathcal{L}_{2}\)) & S3 (\(\mathcal{L}_{1}\)) \\
\hline
\(\mathcal{L}_{1}\) & 0.47 & 0.49 & 0.45 & 0.70 & 0.62 & 0.64 \\
\(\mathcal{L}_{2}\) & 0.49 & 0.45 & 0.40 & 0.73 & 0.59 & 0.61 \\
\(\mathcal{L}_{3}\) & 0.50 & 0.42 & 0.35 & 0.72 & 0.58 & 0.56 \\
\(\mathcal{L}_{4}\) & 0.49 & 0.40 & 0.32 & 0.71 & 0.54 & 0.52 \\
\(\mathcal{L}_{5}\) & 0.48 & 0.37 & 0.30 & 0.71 & 0.53 & 0.49 \\
\hline \hline
\end{tabular}
\end{table}
Table 10: Average ARIF based on \(\widehat{d}_{2}\) for \(m=1.6\) and different sets of lags (\(\mathcal{L}_{j}=\{1,\ldots,j\},\ j=1,\ldots,5\)) in Scenarios 1, 2 and 3, denoted by S1, S2 and S3, respectively. The theoretical set of lags at each scenario is indicated in brackets.

Thus, while the optimal lag selection is a critical issue in modelling and forecasting problems, the clustering approaches based on both \(\widehat{d}_{1}\) and \(\widehat{d}_{2}\) exhibit a reasonable robustness to a non-optimal choice of \(\mathcal{L}\). This is a particularly nice property in our setting because the proposed algorithms are model-free and no single lag selection procedure has been proven to perform properly with all time series models. The mentioned robustness property justifies selecting \(\mathcal{L}\) through a simple and automatic procedure satisfying two main properties: applicability without prior assumptions about the generating models, and computational efficiency. To this aim, we propose a criterion based on assessing serial dependence at several lags for each OTS. Specifically, we consider the partial Cohen's \(\kappa\) at lag \(l\), denoted by \(\kappa^{p}_{d_{o,1}}(l)\), which is defined in an analogous way to the partial autocorrelation in the real-valued setting. In practice, the sample counterparts \(\widehat{\kappa}^{p}_{d_{o,1}}(1),\widehat{\kappa}^{p}_{d_{o,1}}(2),\widehat{\kappa}^{p}_{d_{o,1}}(3),\ldots\) can be computed from \(\widehat{\kappa}_{d_{o,1}}(1),\widehat{\kappa}_{d_{o,1}}(2),\widehat{\kappa}_{d_{o,1}}(3),\ldots\) via the Durbin-Levinson algorithm [46; 47], just as the partial autocorrelations are obtained from the autocorrelations. Using \(\widehat{\kappa}^{p}_{d_{o,1}}(l)\) instead of \(\widehat{\kappa}_{d_{o,1}}(l)\), the significant lags are free from dependence already explained by shorter lags, which is helpful for identifying the maximum significant lag. On the other hand, according to Theorem 7.2.1 in [31], \(\widehat{\kappa}^{p}_{d_{o,1}}(l)\) has the same asymptotic distribution as \(\widehat{\kappa}_{d_{o,1}}(l)\) under serial independence. Based on these arguments, given the set \(\mathbb{S}=\{X^{(1)}_{T_{1}},\ldots,X^{(s)}_{T_{s}}\}\) of OTS subject to clustering, we propose to select \(\mathcal{L}\) as follows.

1. Fix a global significance level \(\alpha>0\) and a maximum lag \(L_{\mathrm{Max}}\in\mathbb{N}\). Adjust the significance level in a suitable way, obtaining the corrected significance level \(\alpha^{\prime}\).
2. For each series \(X_{T_{i}}^{(i)}\) in \(\mathbb{S}\):
   1. Use the sample version of the ordinal Cohen's \(\kappa\) to test for serial independence at all lags up to \(L_{\mathrm{Max}}\).
Specifically, for \(l=1,2,\ldots,L_{\mathrm{Max}}\), the null hypothesis is rejected if \[\left|\frac{\sqrt{T_{i}}\,\widehat{\mathrm{disp}}_{d_{o,1}}^{(i)}\big{(}\widehat{\kappa}^{p}_{d_{o,1}}(l)^{(i)}+1/T_{i}\big{)}}{2\sqrt{\sum_{j,k=0}^{n-1}\big{(}\widehat{f}^{(i)}_{\min\{j,k\}}-\widehat{f}_{j}^{(i)}\widehat{f}_{k}^{(i)}\big{)}^{2}}}\right|>z_{1-\alpha^{\prime}/2},\tag{27}\] where the superscript \((i)\) indicates that the estimates are computed with respect to the \(i\)th series, and \(z_{\theta}\) is the \(\theta\)-quantile of the standard normal distribution.
   2. Record the maximum significant lag, \(L^{(i)}\), according to (27).
3. Consider \(L^{*}=\max\{L^{(1)},L^{(2)},\ldots,L^{(s)}\}\) and define \(\mathcal{L}=\{1,2,\ldots,L^{*}\}\).

Some remarks concerning the previous procedure are given below. The global significance level \(\alpha\) is corrected in Step 1 to address the problem of multiple comparisons, since \(sL_{\mathrm{Max}}\) statistical tests are performed simultaneously. The use of a conservative rule (e.g., the Bonferroni correction) is recommended, since a few lags are frequently sufficient to characterize the serial dependence. Nonetheless, other less conservative procedures ensuring that the family-wise error rate is at most \(\alpha\) could be employed. In Step 3, \(L^{*}\) is the highest lag within \(\mathcal{L}\). By construction, \(L^{*}\) is necessarily a significant lag for one or several series, although indeed some series might not exhibit significant serial dependence at \(L^{*}\) or lower lags. However, this is not an issue, because the corresponding estimated features are expected to be close to zero for these series.

Our proposal to select \(\mathcal{L}\) was examined via simulation by considering the series in Scenarios 1-3, and setting \(L_{\mathrm{Max}}=5\) and \(\alpha=0.05\) (with the Bonferroni correction). Based on 1000 simulation trials for each \(T\in\{200,600\}\), Table 11 provides the proportion of times that each set of lags was selected. It is observed that the proposed method works reasonably well, although the results differ among the considered scenarios. In Scenario 3, the theoretical set (\(\mathcal{L}_{1}\)) is selected almost 100% of the time regardless of the series length. The most often chosen set in Scenario 1 is also the theoretical one (\(\mathcal{L}_{2}\)), but here \(\mathcal{L}_{3}\), \(\mathcal{L}_{4}\) and \(\mathcal{L}_{5}\) are selected a non-negligible number of times. It is worth recalling that this is not a problem in our setting, since the clustering effectiveness for both metrics \(\widehat{d}_{1}\) and \(\widehat{d}_{2}\) is approximately the same for all \(\mathcal{L}_{i}\), \(i=1,\ldots,5\) (see Tables 9 and 10). Lastly, in Scenario 2, the series length has a substantial impact on the selection of \(\mathcal{L}\). When \(T=200\), \(\mathcal{L}_{1}\) is erroneously chosen in most of the trials. Basically, this series length is too short to detect the serial dependence exhibited by the series within clusters \(\mathcal{C}_{3}\) and \(\mathcal{C}_{4}\), generated by processes with coefficients \(\alpha_{1}\) and \(\alpha_{2}\) very close to zero. Again, the proposed clustering algorithms do not get negatively affected in Scenario 2 when only the first lag is considered (see Tables 9 and 10). Note that, when \(T=600\), the proper set \(\mathcal{L}_{2}\) is usually selected because the power of the corresponding tests substantially increases. Analogous conclusions were obtained using \(\alpha=0.01\) and \(\alpha=0.10\).

\begin{table}
\begin{tabular}{l l c c c c c}
\hline
 & & \(\mathcal{L}_{1}\) & \(\mathcal{L}_{2}\) & \(\mathcal{L}_{3}\) & \(\mathcal{L}_{4}\) & \(\mathcal{L}_{5}\) \\
\hline
Scenario 1 (\(\mathcal{L}_{2}\)) & \(T=200\) & 0.000 & **0.462** & 0.285 & 0.111 & 0.142 \\
 & \(T=600\) & 0.000 & **0.595** & 0.230 & 0.104 & 0.071 \\
\hline
Scenario 2 (\(\mathcal{L}_{2}\)) & \(T=200\) & **0.693** & 0.255 & 0.023 & 0.017 & 0.012 \\
 & \(T=600\) & 0.244 & **0.729** & 0.015 & 0.004 & 0.008 \\
\hline
Scenario 3 (\(\mathcal{L}_{1}\)) & \(T=200\) & **0.965** & 0.007 & 0.010 & 0.005 & 0.013 \\
 & \(T=600\) & **0.969** & 0.013 & 0.007 & 0.007 & 0.004 \\
\hline
\end{tabular}
\end{table}
Table 11: Proportion of times that each set \(\mathcal{L}_{i}\) was selected according to the proposed criterion using \(\alpha=0.05\) and the Bonferroni correction. Scenarios 1, 2 and 3 (theoretical set of lags in brackets). For each scenario and value of \(T\), the largest rate is shown in bold.
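A minimal sketch of the selection rule above is given below. It takes as input, for each series, the estimates \(\widehat{\kappa}_{d_{o,1}}(1),\ldots,\widehat{\kappa}_{d_{o,1}}(L_{\mathrm{Max}})\) together with the per-lag standard errors implied by the denominator of (27), both assumed to be precomputed; the Durbin-Levinson step is spelled out in full, while the function names are ours:

```python
import numpy as np
from scipy.stats import norm

def partial_kappas(kappas):
    """Durbin-Levinson recursion: transform kappa(1..L) into the partial
    kappa(1..L), exactly as the PACF is obtained from the ACF."""
    L = len(kappas)
    pacf = np.zeros(L)
    phi = np.array([kappas[0]])
    pacf[0] = kappas[0]
    for k in range(2, L + 1):
        num = kappas[k - 1] - phi @ kappas[k - 2::-1]
        den = 1.0 - phi @ kappas[:k - 1]
        phi_kk = num / den
        phi = np.append(phi - phi_kk * phi[::-1], phi_kk)
        pacf[k - 1] = phi_kk
    return pacf

def select_lags(kappa_per_series, se_per_series, alpha=0.05):
    """Bonferroni-corrected two-sided tests over all series and lags;
    returns {1, ..., L*}, with L* the maximum significant lag (at least
    lag 1, as a pragmatic default when nothing is significant)."""
    s, L_max = len(kappa_per_series), len(kappa_per_series[0])
    z = norm.ppf(1 - alpha / (2 * s * L_max))  # corrected quantile
    L_star = 1
    for kap, se in zip(kappa_per_series, se_per_series):
        signif = np.abs(partial_kappas(np.asarray(kap))) > z * np.asarray(se)
        if signif.any():
            L_star = max(L_star, int(np.nonzero(signif)[0].max()) + 1)
    return list(range(1, L_star + 1))
```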
## 5 Applications

This section is devoted to showing two real-data applications of the proposed clustering procedures. In both cases, we first describe the database along with some exploratory analyses and, afterwards, we show the results of applying the clustering algorithms.

### Fuzzy clustering of European countries in terms of credit ratings

#### 5.1.1 Data set and exploratory analyses

Let us consider the financial database introduced in Section 2.3 and formerly employed by [31], which contains monthly credit ratings according to S&P for the UK and the 27 countries of the EU, namely Austria (AT), Belgium (BE), Bulgaria (BG), Cyprus (CY), Czechia (CZ), Germany (DE), Denmark (DK), Estonia (EE), Spain (ES), Finland (FI), France (FR), Greece (GR), Croatia (HR), Hungary (HU), Ireland (IE), Italy (IT), Lithuania (LT), Luxembourg (LU), Latvia (LV), Malta (MT), Netherlands (NL), Poland (PL), Portugal (PT), Romania (RO), Sweden (SE), Slovenia (SI), and Slovakia (SK). The sample period spans from January 2000 to December 2017, thus resulting in serial realizations of length \(T=216\). The range of the OTS consists of \(n+1=23\) states, \(s_{0},\ldots,s_{22}\), representing the different credit scores (see Section 2.3). As stated in [31], the profiles of the 28 OTS show quite different shapes, including constant trajectories but also paths with upward or downward movements (e.g., during financial crises). As an example, the OTS for Estonia and Slovakia are represented in Figure 1. Applying clustering to this data set could lead to meaningful groups of countries sharing similar risk profiles, monetary policy, or even government reliability. Moreover, since our approach produces fuzzy solutions, some countries exhibiting a vague behaviour in terms of credit ratings could be identified.

As a preliminary exploratory step, we performed a 2DS based on the pairwise dissimilarity matrices calculated by using \(\widehat{d}_{1}\) and \(\widehat{d}_{2}\). The required set of lags \(\mathcal{L}\) was determined by means of the procedure proposed in Section 4.4 (with \(\alpha=0.05\), \(L_{\mathrm{Max}}=10\) and the Bonferroni correction), resulting in \(\mathcal{L}=\{1\}\). It is worth remarking that only the first lag was also selected when using alternative rules for correcting the significance level (e.g., the Holm or Hommel corrections) and different values of \(\alpha\). The 2DS plots based on \(\widehat{d}_{1}\) and \(\widehat{d}_{2}\) with \(\mathcal{L}=\{1\}\) are displayed in the top (\(R^{2}=0.90\)) and middle (\(R^{2}=0.95\)) panels of Figure 6, respectively.
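For completeness, a 2DS configuration can be obtained from a pairwise dissimilarity matrix as sketched below; we use classical (metric) multidimensional scaling, one standard choice, although the exact 2DS variant used in the paper is not specified here:

```python
import numpy as np

def classical_mds_2d(D):
    """Classical MDS: double-center the squared dissimilarities and keep
    the two leading eigendirections as planar coordinates."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (D ** 2) @ J           # doubly centered Gram matrix
    w, V = np.linalg.eigh(B)              # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:2]         # pick the two largest
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))
```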
From Figure 6 it follows that the clustering algorithms based on \(\widehat{d}_{1}\) and \(\widehat{d}_{2}\) give rise to quite similar configurations. There is a compact group of 10 countries clearly separated from the rest, formed by AT, BE, DE, DK, FI, FR, LU, NL, SE, and UK. Interestingly, these countries are usually characterized by strong economies (e.g., high average income, low inflation rates). The remaining countries are more spread out, forming poorly separated groups, which suggests that a fuzzy approach could be particularly useful for drawing meaningful conclusions from this database. Nevertheless, both plots show some interesting differences. For instance, while GR is located close to other countries in Eastern Europe when employing \(\widehat{d}_{1}\), it constitutes an isolated point (a potential outlier) when \(\widehat{d}_{2}\) is considered. In a clustering context, these isolated points often require an individual analysis, since their presence can negatively affect the performance of standard algorithms.

Figure 6: Two-dimensional scaling planes based on distances \(\widehat{d}_{1}\) (top), \(\widehat{d}_{2}\) (middle) and \(\widehat{d}_{2,W}\) with \(\beta=0.14\) (bottom) for the monthly credit ratings of 28 European countries.

#### 5.1.2 Application of clustering algorithms and results

Two important parameters must be set in advance before executing the clustering algorithms, namely the number of clusters, \(C\), and the fuzziness parameter, \(m\). Note that the latter parameter highly influences the quality of the obtained clustering partition, as seen in Section 4. The selection of \(C\) and \(m\) was done simultaneously by means of a procedure proposed by [16], which is based on two steps: (i) fixing a grid of values for the pair \((C,m)\), and (ii) choosing the pair leading to the minimum value of a measure relying on four internal clustering validity indices, namely the Xie-Beni index [48], the Kwon index [49], and the indices proposed in [50] and in [51]. These indices measure the degree of compactness of a given clustering solution and, in all cases, the lower the value of the index, the better the quality of the partition. In particular, for fixed \(C\) and \(m\), [16] consider the average of a standardized version of these indices, thus bringing them to the same scale. In this application, the grid was constructed by setting \(C\in\{1,2,\ldots,7\}\) and \(m\in\{1.1,1.2,\ldots,4\}\), and the selected values were \((C,m)=(3,1.9)\) for \(\widehat{d}_{1}\) and \((C,m)=(3,2.1)\) for \(\widehat{d}_{2}\) (a code sketch of this two-step selection is given below). Hence, both metrics identify an underlying partition with the same number of groups, which is coherent with the similarity of both plots in Figure 6.

Table 12 contains the membership degrees produced by the fuzzy \(C\)-medoids clustering algorithm based on both metrics. Superscripts 1 and 2 in the first column indicate the medoid countries according to \(\widehat{d}_{1}\) and \(\widehat{d}_{2}\), respectively. To ease interpretation, only one, two or three membership degrees for the \(i\)th country were highlighted in grey according to the following criterion: (i) only the \(j\)th membership is shaded grey when \(u_{ij}>0.5\) and \(u_{ik}<0.3\) for \(k\neq j\); (ii) the three membership degrees are highlighted if \(u_{ij}>0.25\) for all \(j\in\{1,2,3\}\); and (iii) otherwise, one of them is below \(0.25\) and the remaining two are reasonably spread out, so the latter ones are highlighted.
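For illustration only, the sketch below mimics that two-step selection. It scores each \((C,m)\) pair with the Xie-Beni index alone, whereas [16] average four standardized indices, and it relies on a bare-bones fuzzy \(C\)-medoids routine; the adaptation of the index to medoids (squared medoid-to-series dissimilarities) is our own assumption:

```python
import numpy as np

def fuzzy_cmedoids(D, C, m, iters=100, seed=0):
    """Bare-bones fuzzy C-medoids on a precomputed dissimilarity matrix D
    (possible duplicate medoids are ignored for brevity)."""
    rng = np.random.default_rng(seed)
    medoids = rng.choice(D.shape[0], size=C, replace=False)
    for _ in range(iters):
        Dm = np.maximum(D[:, medoids], 1e-12)     # guard zero distances
        inv = Dm ** (-1.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)  # membership update
        new = np.array([np.argmin((U[:, c] ** m) @ D) for c in range(C)])
        if set(new) == set(medoids):
            break
        medoids = new
    return U, medoids

def xie_beni(U, D, medoids, m):
    """Xie-Beni index: weighted within-cluster dispersion divided by n
    times the minimum squared medoid separation (lower is better)."""
    compact = np.sum((U ** m) * D[:, medoids] ** 2)
    sep = min(D[a, b] ** 2 for a in medoids for b in medoids if a != b)
    return compact / (U.shape[0] * sep)

def select_C_m(D, C_grid, m_grid):
    """Grid search for the pair (C, m) minimizing the validity index
    (C >= 2 assumed, since the index needs at least two medoids)."""
    best_pair, best = None, np.inf
    for C in C_grid:
        for m in m_grid:
            U, medoids = fuzzy_cmedoids(D, C, m)
            score = xie_beni(U, D, medoids, m)
            if score < best:
                best_pair, best = (C, m), score
    return best_pair
```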
These criteria provide a simple way of interpreting the fuzzy solutions produced by both metrics. Both clustering partitions in Table 12 are consistent with the corresponding 2DS plots in Figure 6. Cluster \(\mathcal{C}_{1}\) contains the ten countries constituting the well-separated group in the left part of the graphs, all with membership degrees above \(0.5\). Therefore, both metrics are capable of properly detecting the group including the strongest economies in Europe. On the other hand, clusters \(\mathcal{C}_{2}\) and \(\mathcal{C}_{3}\) are formed by countries exhibiting scattered membership degrees, which was expected, since no clear clustering structure is observed for these countries in Figure 6. However, a quick glance at the distribution of the highlighted membership degrees allows us to conclude that both groups exhibit a much lower degree of overlap in the partition produced by \(\widehat{d}_{2}\). For instance, with \(\widehat{d}_{2}\), cluster \(\mathcal{C}_{3}\) groups together several countries located in Eastern Europe with high membership degrees, namely BG, CY, HR, HU, LT, LV, and RO. Note that both partitions contain some countries exhibiting a substantially fuzzy behaviour. For instance, the membership degrees of ES are all close to \(\frac{1}{3}\) in the partition generated by \(\widehat{d}_{1}\), thus suggesting equidistance from the three clusters. This fact is not surprising, since ES is known to have a promising economy, but one far less powerful than those of the countries in cluster \(\mathcal{C}_{1}\). Analogous conclusions can be obtained for other countries whose membership degrees are evenly distributed between the three groups.

\begin{table}
\begin{tabular}{c|c c c|c c c}
\hline
 & \multicolumn{3}{c|}{\(\widehat{d}_{1}\)} & \multicolumn{3}{c}{\(\widehat{d}_{2}\)} \\
\hline
Country & \(\mathcal{C}_{1}\) & \(\mathcal{C}_{2}\) & \(\mathcal{C}_{3}\) & \(\mathcal{C}_{1}\) & \(\mathcal{C}_{2}\) & \(\mathcal{C}_{3}\) \\
\hline
AT & 0.909 & 0.044 & 0.048 & 0.996 & 0.003 & 0.001 \\
BE & 0.628 & 0.176 & 0.197 & 0.760 & 0.149 & 0.091 \\
BG & 0.199 & 0.376 & 0.426 & 0.133 & 0.265 & 0.603 \\
CY & 0.146 & 0.452 & 0.402 & 0.132 & 0.288 & 0.580 \\
CZ & 0.184 & 0.524 & 0.292 & 0.034 & 0.913 & 0.054 \\
DE\({}^{2}\) & 0.958 & 0.020 & 0.022 & 1.000 & 0.000 & 0.000 \\
DK & 0.981 & 0.009 & 0.010 & 0.999 & 0.001 & 0.000 \\
EE\({}^{2}\) & 0.174 & 0.547 & 0.280 & 0.000 & 1.000 & 0.000 \\
ES & 0.342 & 0.309 & 0.349 & 0.218 & 0.543 & 0.239 \\
FI & 0.925 & 0.036 & 0.039 & 0.996 & 0.002 & 0.002 \\
FR & 0.837 & 0.077 & 0.086 & 0.812 & 0.116 & 0.072 \\
GR & 0.212 & 0.385 & 0.403 & 0.176 & 0.320 & 0.504 \\
HR & 0.204 & 0.372 & 0.424 & 0.123 & 0.249 & 0.628 \\
HU & 0.170 & 0.408 & 0.422 & 0.087 & 0.197 & 0.716 \\
IE & 0.416 & 0.296 & 0.289 & 0.269 & 0.511 & 0.220 \\
IT & 0.159 & 0.457 & 0.383 & 0.132 & 0.601 & 0.268 \\
LT & 0.161 & 0.496 & 0.343 & 0.085 & 0.244 & 0.672 \\
LU & 0.958 & 0.020 & 0.022 & 1.000 & 0.000 & 0.000 \\
LV\({}^{2}\) & 0.165 & 0.452 & 0.383 & 0.000 & 0.000 & 1.000 \\
MT & 0.143 & 0.619 & 0.238 & 0.141 & 0.562 & 0.297 \\
NL\({}^{1}\) & 1.000 & 0.000 & 0.000 & 0.995 & 0.002 & 0.003 \\
PL & 0.185 & 0.488 & 0.328 & 0.128 & 0.408 & 0.464 \\
PT\({}^{1}\) & 0.000 & 0.000 & 1.000 & 0.150 & 0.393 & 0.457 \\
RO & 0.220 & 0.356 & 0.423 & 0.158 & 0.290 & 0.551 \\
SE & 0.954 & 0.022 & 0.024 & 0.992 & 0.001 & 0.007 \\
SI & 0.245 & 0.440 & 0.315 & 0.150 & 0.682 & 0.168 \\
SK\({}^{1}\) & 0.000 & 1.000 & 0.000 & 0.120 & 0.485 & 0.396 \\
UK & 0.953 & 0.023 & 0.025 & 0.908 & 0.056 & 0.036 \\
\hline
\end{tabular}
\end{table}
Table 12: Membership degrees of 28 European countries produced by the fuzzy \(C\)-medoids clustering algorithm based on metrics \(\widehat{d}_{1}\) and \(\widehat{d}_{2}\) for a 3-cluster partition. The superscripts 1 and 2 are used to indicate the medoid countries according to \(\widehat{d}_{1}\) and \(\widehat{d}_{2}\), respectively. For each country, the corresponding memberships were highlighted according to their values.
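The highlighting rule just described is easy to state programmatically; a small sketch with the thresholds given in the text:

```python
def highlighted_memberships(u, hi=0.5, lo=0.3, third=0.25):
    """Indices of the membership degrees to highlight for one series,
    following criteria (i)-(iii); u is a length-3 membership vector."""
    j = max(range(3), key=lambda k: u[k])
    if u[j] > hi and all(u[k] < lo for k in range(3) if k != j):
        return [j]                                 # criterion (i)
    if all(v > third for v in u):
        return [0, 1, 2]                           # criterion (ii)
    return [k for k in range(3) if u[k] > third]   # criterion (iii)
```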
Indeed, these insights can be reached due to the fuzzy nature of the partitions, and would remain obscured in crisp partitions. Therefore, this example illustrates the usefulness of the fuzzy paradigm when performing clustering of ordinal series in real databases.

To gain greater insights into the clustering solutions returned by \(\widehat{d}_{1}\) and \(\widehat{d}_{2}\), the corresponding ternary plots are given in Figures 7 and 8, respectively. The medoids are placed at the vertices, while the position of the rest of the objects is determined by their vectors of membership degrees. Note that the ternary plot based on \(\widehat{d}_{1}\) clearly suggests a higher degree of overlap between \(\mathcal{C}_{2}\) and \(\mathcal{C}_{3}\).

Figure 7: Ternary plot associated with the 3-cluster solution produced by distance \(\widehat{d}_{1}\) in the data set of credit ratings.

Figure 8: Ternary plot associated with the 3-cluster solution produced by distance \(\widehat{d}_{2}\) in the data set of credit ratings.

_Remark 6_: _Clustering based on \(\widehat{d}_{2,W}\)._ We also ran the weighted fuzzy \(C\)-medoids algorithm based on \(\widehat{d}_{2}\). As with the unweighted approach, we set \(C=3\) and \(m=2.1\). The algorithm returned the value \(\beta=0.14\), which indicates that the clustering partition is mostly driven by the serial component \((\widehat{d}_{2,B})\). The 2DS plot based on the combined metric \(0.14^{2}\widehat{d}_{2,M}+0.86^{2}\widehat{d}_{2,B}\) is displayed in the bottom panel of Figure 6. Note that the ten richest countries are still separated from the remaining ones, but show a much larger degree of dispersion. On the contrary, the dispersion among the rest of the countries clearly decreases. Overall, the resulting partition presents a higher degree of fuzziness than the one provided by \(\widehat{d}_{2}\), thus making it more difficult to interpret the clusters. Hence, even though the weighted procedure leads to the partition with the best trade-off between intra-cluster compactness and inter-cluster separation, the unweighted approach provides more meaningful groups in the context of the current application.

### Fuzzy clustering of Austrian wage mobility data

#### 5.2.1 Data set and exploratory analyses

The second case study is related to the unsupervised classification of Austrian workers in terms of wage mobility. The database consists of 9402 time series for men entering the labor market between 1975 and 1980 at an age of at most 25 years. The series represent gross monthly wages in May of successive years and exhibit individual lengths ranging from 2 to 32 years, with the median length being equal to 22. This time series data set is available through the R package **bayesMCClust** (object _MCCExtExampleData_) [52], and it was originally taken from the Austrian Social Security Database (ASSD) [53]. It is worth highlighting that a slightly modified version of this data collection was used in [25] to perform clustering of categorical series.
Therefore, the application presented in this section involves a case study which is already established in the time series clustering literature. Following [25], the gross monthly wage is divided into six categories, labelled by the integers from 0 to 5. Category zero corresponds to zero income or nonemployment (which is not equivalent to being out of the labour force). Categories one to five correspond to the quintiles of the income distribution, which are determined for each year from all nonzero wages observed in that year for the population of all male employees in Austria. As stated in [25], the consideration of wage categories has the advantage that no inflation adjustment has to be made, and it circumvents the problem that, in Austria, the recorded wages are right-censored. Note that, as a natural ordering exists in the set of wage categories, the series under study can be directly treated as OTS. Table 13 provides the relative frequencies of the different states in the database, indicating that the wage categories are approximately uniformly distributed, with state 1 appearing a slightly higher number of times than the remaining ones.

\begin{table}
\begin{tabular}{l c c c c c c}
\hline
State & 0 & 1 & 2 & 3 & 4 & 5 \\
\hline
Relative frequency & 0.172 & 0.208 & 0.146 & 0.140 & 0.161 & 0.173 \\
\hline
\end{tabular}
\end{table}
Table 13: Relative frequencies of the different states in the database of Austrian wage mobility.
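As a rough sketch of the categorization step described above (the data layout is our own assumption, and ties at quintile boundaries are resolved arbitrarily):

```python
import numpy as np

def wage_categories(wages_by_year):
    """wages_by_year: dict mapping a year to the array of gross monthly
    wages of all male employees observed that year. Returns, per year,
    ordinal categories: 0 for zero income/nonemployment and 1-5 for the
    year-specific quintiles of the nonzero wages."""
    cats = {}
    for year, w in wages_by_year.items():
        w = np.asarray(w, dtype=float)
        qs = np.quantile(w[w > 0], [0.2, 0.4, 0.6, 0.8])  # quintile cuts
        c = np.zeros(len(w), dtype=int)
        c[w > 0] = 1 + np.searchsorted(qs, w[w > 0])
        cats[year] = c
    return cats
```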
The resulting values were \[\overline{\boldsymbol{f}}^{1}=(0.28,0.73,0.91,0.97,0.99),\quad\overline{ \boldsymbol{F}}^{1}=\begin{pmatrix}0.15&0.24&0.26&0.28&0.28\\ 0.24&0.63&0.70&0.72&0.73\\ 0.26&0.68&0.86&0.90&0.91\\ 0.28&0.70&0.89&0.96&0.97\\ 0.28&0.70&0.90&0.97&0.99\end{pmatrix}\] \[\overline{\boldsymbol{f}}^{2}=(0.08,0.17,0.29,0.48,0.73),\quad\overline{ \boldsymbol{F}}^{2}=\begin{pmatrix}0.02&0.03&0.03&0.04&0.06\\ 0.03&0.09&0.12&0.14&0.15\\ 0.04&0.10&0.19&0.25&0.27\\ 0.05&0.11&0.22&0.39&0.46\\ 0.06&0.13&0.24&0.43&0.68\end{pmatrix}\] The average vectors \(\overline{\boldsymbol{f}}^{1}\) and \(\overline{\boldsymbol{f}}^{2}\) are very different, indicating that the series in cluster \(\mathcal{C}^{1}_{2}\) generally take larger states than the ones in \(\mathcal{C}^{1}_{1}\). The average matrices \(\overline{\boldsymbol{F}}^{1}\) and \(\overline{\boldsymbol{F}}^{2}\) also reveal clearly dissimilar dependence structures in both groups, although their values are rather difficult to interpret. Hence, to shed light on the behaviour pattern at each cluster, we decided to compute the average values of the \(\widehat{d}_{2}\)-based features according to the clustering partition returned by \(\widehat{d}_{1}\). The new features are provided in the upper part of Table 14. As expected from the values of \(\overline{\boldsymbol{f}}^{1}\) and \(\overline{\boldsymbol{f}}^{2}\), the averages for the first four \(\widehat{d}_{2}\)-based measures indicate that both groups are clearly different in terms of marginal distributions. In particular, the series in \(\mathcal{C}^{1}_{1}\) exhibit positive skewness (tendency to lower wage categories) in contrast to the negative skewness usually displayed by the series in \(\mathcal{C}^{1}_{2}\). Concerning the serial behaviour, cluster \(\mathcal{C}^{1}_{1}\) is associated with a lower degree of positive dependence than \(\mathcal{C}^{1}_{2}\). Based on previous considerations, a description of the Austrian labour market in terms of social mobility could be provided. Indeed, individuals with higher wages (\(\mathcal{C}^{1}_{2}\)) experience a lower degree of social mobility (i.e., a decline in income), since high states tend to be followed by high states. On the contrary, a more pronounced level of social mobility is observed for employees with lower salaries (\(\mathcal{C}^{1}_{1}\)), which indicates that these individuals are more likely to get a promotion. \begin{table} \begin{tabular}{c c|c c c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Cluster} & \(\widehat{\text{loc}}_{d_{o,1}}\) & \(2\widehat{\text{disp}}_{d_{o,1}}\) & \(\widehat{\text{ssim}}_{d_{o,1}}\) & \(\widehat{\text{skew}}_{d_{o,1}}\) & \(\widehat{\text{skew}}_{d_{o,1}}(1)\) \\ \cline{2-7} & & 5 & 5 & 5 & 5 & 5 \\ \hline \(\widehat{d}_{1}\) & \(\mathcal{C}^{1}_{1}\) & 0.50 & 0.38 & 0.31 & 0.44 & 0.17 \\ & \(\mathcal{C}^{1}_{2}\) & 0.80 & 0.55 & 0.33 & -0.49 & 0.41 \\ \hline \(\widehat{d}_{2}\) & \(\mathcal{C}^{2}_{1}\) & 0.52 & 0.27 & 0.45 & 0.57 & 0.01 \\ & \(\mathcal{C}^{2}_{2}\) & 0.89 & 0.47 & 0.46 & -0.65 & 0.43 \\ & \(\mathcal{C}^{2}_{3}\) & 0.55 & 0.62 & 0.08 & 0.03 & 0.41 \\ \hline \hline \end{tabular} \end{table} Table 14: Average values of the \(\widehat{d}_{2}\)-based features in each group concerning the clustering solutions produced by \(\widehat{d}_{1}\) and \(\widehat{d}_{2}\) in the data set of Austrian wage mobility. Figure 9 shows the medoid series with \(\widehat{d}_{1}\). 
Although the identification of clear patterns in this kind of graph is usually challenging, some interesting insights can be obtained from both plots. For instance, the medoid of cluster \(\mathcal{C}_{2}^{1}\) puts more weight on higher categories, besides displaying a stronger tendency to generate long runs (positive dependence). Note that these considerations are consistent with the average features displayed in the upper part of Table 14.

Similar analyses can be carried out by considering the clustering solution produced by \(\widehat{d}_{2}\). The crisp version of this partition includes 2878, 3121 and 3403 series in the first (\(\mathcal{C}_{1}^{2}\)), second (\(\mathcal{C}_{2}^{2}\)) and third (\(\mathcal{C}_{3}^{2}\)) groups, respectively. The average values of the \(\widehat{d}_{2}\)-based features with respect to each group are provided in the lower part of Table 14. Note that this 3-cluster solution can be interpreted as a refinement of the above partition. In fact, cluster \(\mathcal{C}_{1}^{2}\) is again associated with high skewness and low positive dependence (thus high social mobility). A similar reasoning can be made for the second group. Cluster \(\mathcal{C}_{3}^{2}\) represents individuals with middle income and a low level of mobility. The clear connection between both clustering partitions becomes evident in the confusion matrix given in Table 15, where: (i) all the series in \(\mathcal{C}_{2}^{2}\) belong to \(\mathcal{C}_{2}^{1}\); (ii) only 45 series of \(\mathcal{C}_{1}^{2}\) (1.56%) fall outside \(\mathcal{C}_{1}^{1}\); and (iii) the additional group identified by \(\widehat{d}_{2}\), \(\mathcal{C}_{3}^{2}\), is formed by series of both clusters \(\mathcal{C}_{1}^{1}\) and \(\mathcal{C}_{2}^{1}\) in similar amounts.

\begin{table}
\begin{tabular}{c c|c c}
 & & \multicolumn{2}{c}{\(\widehat{d}_{1}\)} \\
 & & \(\mathcal{C}_{1}^{1}\) & \(\mathcal{C}_{2}^{1}\) \\
\hline
 & \(\mathcal{C}_{1}^{2}\) & 2833 & 45 \\
\(\widehat{d}_{2}\) & \(\mathcal{C}_{2}^{2}\) & 0 & 3121 \\
 & \(\mathcal{C}_{3}^{2}\) & 1827 & 1576 \\
\end{tabular}
\end{table}
Table 15: Confusion matrix for the crisp clustering solutions produced by \(\widehat{d}_{1}\) and \(\widehat{d}_{2}\) in the data set of Austrian wage mobility.

In sum, the analyses carried out throughout Sections 5.1 and 5.2 illustrate the usefulness of both metrics \(\widehat{d}_{1}\) and \(\widehat{d}_{2}\) when performing fuzzy clustering of OTS on real data sets. Specifically, they highlight the importance of the fuzzy paradigm when attempting to reach meaningful conclusions from the resulting partitions.

## 6 Conclusions

In this paper, we have proposed two novel distances between OTS which automatically take advantage of the inherent ordering in the series' range. The first metric considers proper estimates of the cumulative probabilities, while the second distance employs some ordinal features describing the behaviour of a given OTS. Both distances are formed by two components: the first one evaluates discrepancies between the marginal distributions of the series, while the second component assesses differences in terms of serial dependence structures. The metrics are used as input to the classical fuzzy \(C\)-medoids algorithm, which allows for the assignment of gradual memberships of the OTS to the different groups. This is particularly useful when dealing with time series data sets, where different amounts of dissimilarity between the underlying processes or changes in the dynamic behaviours over time are frequent. To assess the performance of the proposed clustering algorithms, several simulation experiments were carried out, including scenarios formed by OTS pertaining to well-defined clusters and scenarios involving series generated from an outlying stochastic process. Different types of ordinal processes were considered.
The methods were compared with several procedures based on alternative dissimilarities. Overall, the proposed clustering techniques showed the best performance. Specifically, they outperformed some techniques specifically designed to deal with real-valued and with nominal time series, which highlights the importance of considering the underlying ordering when performing OTS clustering. Extensions of both clustering procedures were also constructed by giving different weights to the marginal and serial components of the proposed metrics. The weighting system allows the importance of each component in the computation of the clustering partition to be automatically determined during the minimisation phase. The advantages of the weighted algorithms with respect to the standard ones in terms of clustering accuracy were analysed. The results showed that significant improvements are frequently observed when employing the weighted procedure based on the second of the introduced metrics. The usefulness of the proposed clustering algorithms was illustrated by means of two applications involving economic time series. In both cases, interesting conclusions were reached.

There are at least three interesting ways in which this work could be extended. First, robust versions of the proposed methods could be constructed by considering the so-called metric, noise, and trimmed approaches [54; 10; 17], which adjust the objective function of the clustering algorithm in a suitable manner so that outlier series do not pervert the resulting partition. Second, a spatial penalisation term could be incorporated into the objective function of the procedures in order to deal with OTS data sets containing geographical information [39; 38], like the one considered in Section 5.1. Third, the clustering methods could be modified in such a way that they can properly handle OTS containing missing data [55]. It would be interesting to address these and further topics in future research.

## Appendix

In this section, we present the proofs of Propositions 1 and 2.

Proof of Proposition 1: We shall prove Proposition 1 by induction on \(n\). Denote by \((f_{0},f_{1},\ldots,f_{n})\) and \((g_{0},g_{1},\ldots,g_{n})\) the vectors of cumulative probabilities for the processes \(\{X_{t}\}_{t\in\mathbb{Z}}\) and \(\{Y_{t}\}_{t\in\mathbb{Z}}\), respectively. Note that, for \(n=1\) (2 states), the distance \(d_{1,M}\) can be written as \[d_{1,M}\big{(}X_{t},Y_{t}\big{)}=(f_{0}-g_{0})^{2}=(p_{0}-q_{0})^{2}, \tag{28}\] so the assertion of Proposition 1 is true. Assume now that Proposition 1 holds for \(n=N\), \(N\in\mathbb{N}\). For processes \(X_{t}\) and \(Y_{t}\) with range \(\{s_{0},s_{1},\ldots,s_{N+1}\}\), we have \[d_{1,M}\big{(}X_{t},Y_{t}\big{)}=\sum_{i=0}^{N-1}(f_{i}-g_{i})^{2}+(f_{N}-g_{N})^{2}. \tag{29}\]
Note that the term \(\sum_{i=0}^{N-1}(f_{i}-g_{i})^{2}\) in (29) can be seen as the distance \(d_{1,M}\) between two processes \(X_{t}^{*}\) and \(Y_{t}^{*}\) with range \(\{s_{0},s_{1},\ldots,s_{N}\}\), marginal probabilities \((p_{0},p_{1},\ldots,p_{N-1},1-f_{N-1})\) and \((q_{0},q_{1},\ldots,q_{N-1},1-g_{N-1})\), and cumulative probabilities \((f_{0},f_{1},\ldots,f_{N-1},1)\) and \((g_{0},g_{1},\ldots,g_{N-1},1)\), respectively. Moreover, taking into account that \(f_{i}=\sum_{j=0}^{i}p_{j}\) and \(g_{i}=\sum_{j=0}^{i}q_{j}\), the term \((f_{N}-g_{N})^{2}\) in (29) can be expressed as \[(f_{N}-g_{N})^{2}=\bigg{(}\sum_{i=0}^{N}(p_{i}-q_{i})\bigg{)}^{2}=\sum_{i=0}^{N}(p_{i}-q_{i})^{2}+2\sum_{j=0}^{N-1}\sum_{k=j+1}^{N}(p_{j}-q_{j})(p_{k}-q_{k}). \tag{30}\] Considering now the induction hypothesis, and plugging the previous expression for \((f_{N}-g_{N})^{2}\) into (29), the distance \(d_{1,M}\) between \(X_{t}\) and \(Y_{t}\) can be written as \[\begin{split} d_{1,M}\big{(}X_{t},Y_{t}\big{)}&=\sum_{i=0}^{N-1}(N-i)(p_{i}-q_{i})^{2}+\sum_{i=0}^{N-1}(p_{i}-q_{i})^{2}+(p_{N}-q_{N})^{2}\\ &\quad+2\sum_{j=0}^{N-2}\sum_{k=j+1}^{N-1}(N-k)(p_{j}-q_{j})(p_{k}-q_{k})+2\sum_{j=0}^{N-2}\sum_{k=j+1}^{N-1}(p_{j}-q_{j})(p_{k}-q_{k})\\ &\quad+2\sum_{j=0}^{N-1}(p_{j}-q_{j})(p_{N}-q_{N})\\ &=\sum_{i=0}^{N-1}(N+1-i)(p_{i}-q_{i})^{2}+(p_{N}-q_{N})^{2}\\ &\quad+2\sum_{j=0}^{N-2}\sum_{k=j+1}^{N-1}(N+1-k)(p_{j}-q_{j})(p_{k}-q_{k})+2\sum_{j=0}^{N-1}(p_{j}-q_{j})(p_{N}-q_{N})\\ &=\sum_{i=0}^{N}(N+1-i)(p_{i}-q_{i})^{2}+2\sum_{j=0}^{N-1}\sum_{k=j+1}^{N}(N+1-k)(p_{j}-q_{j})(p_{k}-q_{k}).\end{split} \tag{31}\] Therefore, the assertion is also true for \(n=N+1\), and thus for all \(n\in\mathbb{N}\). The proof of Proposition 1 is completed.
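As a quick numerical sanity check of the identity established in Proposition 1, one can compare both sides for random probability vectors; a small sketch (with \(n=4\), i.e., five states):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
p = rng.dirichlet(np.ones(n + 1))      # marginal probabilities of X_t
q = rng.dirichlet(np.ones(n + 1))      # marginal probabilities of Y_t
f, g = np.cumsum(p)[:n], np.cumsum(q)[:n]

lhs = np.sum((f - g) ** 2)             # d_{1,M} via cumulative probabilities
rhs = sum((n - i) * (p[i] - q[i]) ** 2 for i in range(n))
rhs += 2 * sum((n - k) * (p[j] - q[j]) * (p[k] - q[k])
               for j in range(n - 1) for k in range(j + 1, n))
print(np.isclose(lhs, rhs))            # expected output: True
```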
Proof of Proposition 2: The iterative solutions of the constrained minimization problem (15) are obtained via the method of Lagrange multipliers. First, for \(j=1,2\), consider the Lagrangian function taking the form \[L(\mathbf{U},\beta,\boldsymbol{\lambda})=\sum_{i=1}^{s}\sum_{c=1}^{C}u_{ic}^{m}\Big{[}\beta^{2}\widehat{d}_{j,M}(i,c)+(1-\beta)^{2}\widehat{d}_{j,B}\big{(}i,c\big{)}\Big{]}-\sum_{i=1}^{s}\lambda_{i}\big{(}\sum_{c=1}^{C}u_{ic}-1\big{)}, \tag{32}\] where \(\boldsymbol{\lambda}=\{\lambda_{1},\ldots,\lambda_{s}\}\) is the set of Lagrange multipliers concerning the constraints on the membership degrees. By fixing \(\beta\in[0,1]\) and setting the partial derivatives of \(L\) with respect to \(u_{ic}\) and \(\lambda_{i}\) equal to zero, for arbitrary \(i\in\{1,\ldots,s\}\) and \(c\in\{1,\ldots,C\}\), we obtain that \[\frac{\partial L(\mathbf{U},\beta,\boldsymbol{\lambda})}{\partial u_{ic}}=0\ \ \text{ and }\ \ \frac{\partial L(\mathbf{U},\beta,\boldsymbol{\lambda})}{\partial\lambda_{i}}=0\] is equivalent to \[mu_{ic}^{m-1}\Big{[}\beta^{2}\widehat{d}_{j,M}(i,c)+(1-\beta)^{2}\widehat{d}_{j,B}(i,c)\Big{]}-\lambda_{i}=0\ \ \text{ and }\ \ \sum_{c^{\prime}=1}^{C}u_{ic^{\prime}}-1=0. \tag{33}\] From the first equation in (33), we can express \(u_{ic}\) as \[u_{ic}=\left(\frac{\lambda_{i}}{m}\right)^{\frac{1}{m-1}}\Big{[}\beta^{2}\widehat{d}_{j,M}(i,c)+(1-\beta)^{2}\widehat{d}_{j,B}(i,c)\Big{]}^{\frac{-1}{m-1}}. \tag{34}\] By introducing (34) into the second equation of (33), we obtain \[\left(\frac{\lambda_{i}}{m}\right)^{\frac{1}{m-1}}\sum_{c^{\prime}=1}^{C}\Big{[}\beta^{2}\widehat{d}_{j,M}(i,c^{\prime})+(1-\beta)^{2}\widehat{d}_{j,B}(i,c^{\prime})\Big{]}^{\frac{-1}{m-1}}=1, \tag{35}\] which leads to \[\left(\frac{\lambda_{i}}{m}\right)^{\frac{1}{m-1}}=\bigg{[}\sum_{c^{\prime}=1}^{C}\Big{[}\beta^{2}\widehat{d}_{j,M}(i,c^{\prime})+(1-\beta)^{2}\widehat{d}_{j,B}(i,c^{\prime})\Big{]}^{\frac{-1}{m-1}}\bigg{]}^{-1}. \tag{36}\] Lastly, by replacing (36) in (34), the membership degree \(u_{ic}\) can be expressed as \[u_{ic}=\Bigg{[}\sum_{c^{\prime}=1}^{C}\left(\frac{\beta^{2}\widehat{d}_{j,M}(i,c)+(1-\beta)^{2}\widehat{d}_{j,B}(i,c)}{\beta^{2}\widehat{d}_{j,M}(i,c^{\prime})+(1-\beta)^{2}\widehat{d}_{j,B}(i,c^{\prime})}\right)^{\frac{1}{m-1}}\Bigg{]}^{-1}, \tag{37}\] which gives the iterative solutions for the membership degrees.

The iterative solution for \(\beta\) can be obtained in a similar way. We proceed by fixing \(u_{ic}\) and setting the partial derivative of \(L\) with respect to \(\beta\) equal to zero, i.e., \(\frac{\partial L(\mathbf{U},\beta,\boldsymbol{\lambda})}{\partial\beta}=0\). This is equivalent to \[\sum_{i=1}^{s}\sum_{c=1}^{C}u_{ic}^{m}\Big{[}\beta\widehat{d}_{j,M}(i,c)-(1-\beta)\widehat{d}_{j,B}\big{(}i,c\big{)}\Big{]}=0, \tag{38}\] yielding \[\sum_{i=1}^{s}\sum_{c=1}^{C}u_{ic}^{m}\Big{[}\beta\big{(}\widehat{d}_{j,M}(i,c)+\widehat{d}_{j,B}(i,c)\big{)}-\widehat{d}_{j,B}\big{(}i,c\big{)}\Big{]}=0. \tag{39}\] From equation (39), we conclude that \[\beta=\frac{\sum_{i=1}^{s}\sum_{c=1}^{C}u_{ic}^{m}\widehat{d}_{j,B}\big{(}i,c\big{)}}{\sum_{i=1}^{s}\sum_{c=1}^{C}u_{ic}^{m}\big{(}\widehat{d}_{j,M}(i,c)+\widehat{d}_{j,B}(i,c)\big{)}}, \tag{40}\] which gives the iterative solution for \(\beta\).

## Acknowledgments

The research of Angel Lopez-Oriona and Jose A. Vilar has been supported by the Ministerio de Economia y Competitividad (MINECO) grant MTM2017-87197-C3-1-P, by the Xunta de Galicia through the ERDF (Grupos de Referencia Competitiva ED431C-2016-015), and by the Centro de Investigacion de Galicia "CITIC", funded by Xunta de Galicia and the European Union (European Regional Development Fund, Galicia 2014-2020 Program) through grant ED431G 2019/01. The author Angel Lopez-Oriona would like to thank Prof. Christian H. Weiss for his kindness during the doctoral stay at the Helmut Schmidt University of Hamburg, where this research was carried out.
2308.03091
Superradiant instabilities of massive bosons around exotic compact objects
Superradiantly unstable ultralight particles around a classical rotating black hole (BH) can form an exponentially growing bosonic cloud, which have been shown to provide an astrophysical probe to detect ultralight particles and constrain their mass. However, the classical BH picture has been questioned, and different theoretical alternatives have been proposed. Exotic compact objects (ECOs) are horizonless alternatives to BHs featuring a reflective surface (with a reflectivity $\mathcal{K}$) in place of the event horizon. In this work, we study superradiant instabilities around ECOs, particularly focusing on the influence of the boundary reflection. We calculate the growth rate of superradiant instabilities around ECOs, and show that the result can be related to the BH case by a correction factor $g_{\mathcal{K}}$, for which we find an explicit analytical expression and a clear physical interpretation. Additionally, we consider the time evolution of superradiant instabilities and find that the boundary reflection can either shorten or prolong the growth timescale. As a result, the boundary reflection alters the superradiance exclusion region on the Regge plane, potentially affecting constraints on the mass of ultralight particles. For a mildly reflective surface ($|\mathcal{K}|\lesssim 0.5$), the exclusion region is not substantially changed, while significant effects from the boundary reflection can occur for an extreme reflectivity ($|\mathcal{K}|\gtrsim0.9$).
Lihang Zhou, Richard Brito, Zhan-Feng Mai, Lijing Shao
2023-08-06T11:30:32Z
http://arxiv.org/abs/2308.03091v2
# Superradiant instabilities of massive bosons around exotic compact objects

###### Abstract

Superradiantly unstable ultralight particles around a classical rotating black hole (BH) can form an exponentially growing bosonic cloud, which has been shown to provide an astrophysical probe to detect ultralight particles and constrain their mass. However, the classical BH picture has been questioned, and different theoretical alternatives have been proposed. Exotic compact objects (ECOs) are horizonless alternatives to BHs featuring a reflective surface (with a reflectivity \(\mathcal{K}\)) in place of the event horizon. In this work, we study superradiant instabilities around ECOs, particularly focusing on the influence of the boundary reflection. We calculate the growth rate of superradiant instabilities around ECOs, and show that the result can be related to the BH case by a correction factor \(g_{\mathcal{K}}\), for which we find an explicit analytical expression and a clear physical interpretation. Additionally, we consider the time evolution of superradiant instabilities and find that the boundary reflection can either shorten or prolong the growth timescale. As a result, the boundary reflection alters the superradiance exclusion region on the Regge plane, potentially affecting constraints on the mass of ultralight particles. For a mildly reflective surface (\(|\mathcal{K}|\lesssim 0.5\)), the exclusion region is not substantially changed, while significant effects from the boundary reflection can occur for an extreme reflectivity (\(|\mathcal{K}|\gtrsim 0.9\)).

+ Footnote †: Corresponding author: [email protected]

## I Introduction

Ultralight bosons have been proposed by different theories as elementary particles beyond the Standard Model of particle physics. Examples include (i) the QCD axion, introduced to solve the strong charge-parity (CP) problem [1; 2; 3], (ii) a plenitude of axion-like particles (ALPs), predicted by string theory and collectively called an "axiverse" [4; 5], and (iii) dark photons [6]. These ultralight bosons, which would naturally couple weakly to baryonic matter, have been shown to be promising dark matter candidates [7; 8; 9].

In addition to ground-based experiments (see, e.g., Refs. [9; 10]), astrophysical environments, such as the vicinity of black holes (BHs), also provide natural testbeds for detecting ultralight particles. This relies on a mechanism called BH superradiance (for a comprehensive review, see Ref. [11]). Consider a field of ultralight particles with mass \(\mu\), located near a rotating BH. When the Compton wavelength of the particle is comparable to the horizon radius of the BH, the particle field can form quasi-bound states around the BH and extract energy and angular momentum effectively from the BH if the following superradiance condition is satisfied \[\omega_{R}<\frac{ma}{2Mr_{+}}, \tag{1}\] where \(\omega_{R}\) is the real part of the frequency of the massive field, which is typically close to the mass \(\mu\) of the ultralight particle; \(m\) is the magnetic quantum number; and \(a,\ M,\ r_{+}\) are the spin, mass and horizon radius of the BH, respectively. An intuitive understanding of this condition is that superradiance occurs whenever the angular velocity of the field, \(\omega_{R}/m\), is less than that of the spacetime, \(a/2Mr_{+}\).
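As a quick numerical illustration of condition (1), in geometric units (\(G=c=1\)) and per unit magnetic quantum number \(m\); the function name is ours:

```python
import numpy as np

def superradiant_bound(M, a):
    """Upper bound a/(2*M*r_plus) on omega_R/m for superradiance, with
    r_plus the outer (would-be) horizon radius of the Kerr geometry."""
    r_plus = M + np.sqrt(M**2 - a**2)
    return a / (2.0 * M * r_plus)

# Example: a rapidly rotating object with M = 1 and a = 0.9 M.
print(superradiant_bound(1.0, 0.9))  # omega_R/m must lie below this value
```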
When the superradiance condition is met, the bosonic field can turn unstable, as more and more particles are produced from the extracted energy and angular momentum. These particles remain bound to the BH by gravity and form an exponentially growing bosonic cloud. This phenomenon is the so-called superradiant instability. Superradiant instabilities of BHs have been thoroughly studied using perturbation theory but also numerical relativity, with studies including the computation of the unstable eigenfrequencies [12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23], linear and non-linear time evolutions of the instability [24; 25; 26; 27], the understanding of nonlinear effects such as the "bosenova" or scalar emission induced by self-interactions [28; 29; 30; 31], and the computation of gravitational wave (GW) emission by the bosonic cloud [26; 32; 33; 34; 35; 36; 37; 38]. In particular, it has been shown that superradiant instabilities can spin down rotating BHs and leave exclusion regions on the BH spin-mass plane (Regge plane). Since these regions are related to the mass of ultralight particles, they can be used to constrain the latter through the measurement of the spin and mass of astrophysical BHs [5; 26; 33; 39; 40].

In this paper, we shall not restrict our discussion to BHs. Classical BHs, namely Kerr BHs, as solutions of Einstein's general relativity (GR), have singularities at \(r=0\), which are hidden within event horizons. However, this classical BH picture leads to puzzles [41], for example, the information paradox of evaporating BHs [42; 43]. In different contexts, including some quantum gravity candidates, exotic compact objects (ECOs), as horizonless alternatives to classical BHs, have been proposed and studied. Examples include fuzzballs in string theory [44; 45], boson stars and oscillatons [46], gravitational condensate stars, i.e., gravastars [47], and wormholes [48; 49]. A more comprehensive list of proposed ECOs can be found in Refs. [41; 50]. ECOs have also been dubbed "BH mimickers" [51]. They have no event horizon, and their surfaces are reflective, in contrast with the BH event horizon, which only allows particles and waves to fall inwards. In a simple phenomenological model, widely used in the literature, one can assume the presence of a boundary at \(r_{0}\gtrsim r_{+}\), with a reflectivity \(\mathcal{K}\), while the external spacetime is still described by the usual Kerr geometry. The modified inner boundary condition may result in rich phenomenology, including modified quasinormal modes (QNMs) [52; 53; 54; 55], ergoregion instabilities [52; 54; 55; 56; 57] and GW "echo" signals [58; 59; 41; 50], to name a few examples. Although there is no definite proof of the existence of BH horizons [52], the advent of GW detection and the improvements in its precision may provide a unique opportunity to probe physics at the near-horizon scale and constrain to very high precision the existence of a reflecting surface (see Refs. [60; 41] for recent reviews).

Recently, superradiant instabilities of ultralight particles around ECOs have also been studied by Guo _et al._ [61; 62]. In particular, a first effort was made in Ref. [61] toward investigating massive scalar perturbations with a modified boundary condition, using a purely analytic approach in the non-relativistic regime where \(\alpha\equiv\mu M\ll 1\), where again \(\mu\) and \(M\) are the masses of the bosonic field and the BH, respectively.
The goal of this paper is to study this subject in more detail and, for the first time, explicitly show how the boundary reflection influences the growth of massive scalar perturbations around ECOs.

This paper is organized as follows. In Sec. II we derive the equations of motion for massive scalar perturbations and specify the boundary conditions. At the end of that section, we discuss how the boundary conditions set by Guo _et al._ [61] differ from ours. In Sec. III we solve the eigenvalue problem. We first introduce our analytic method, from which we obtain our key result, the correction factor \(g_{\mathcal{K}}\) [see Eq. (36) below]. We also calculate the eigenfrequencies using a semi-analytic method and a continued fraction method, followed by a comparison between different methods. The physical meaning of the correction factor \(g_{\mathcal{K}}\) is investigated in Sec. IV by making use of the energy-momentum conservation. Sec. V is devoted to a discussion on the time evolution of superradiant instabilities around ECOs, with a particular focus on the influence of the boundary reflection. Sec. VI discusses the implications of the possible boundary reflection for the constraints on the ultralight particle mass. Our summary and conclusion can be found in Sec. VII. In this work, we use the \((-,+,+,+)\) convention and set \(G=c=\hbar=1\).

## II Bosonic cloud around an ECO

We consider massive scalar perturbations in the following spacetime background: we assume that the geometry outside of the ECO is described by the Kerr metric, with the line element in Boyer-Lindquist coordinates

\[\begin{split}\mathrm{d}s^{2}=&-\Big{(}1-\frac{2Mr}{\rho^{2}}\Big{)}\mathrm{d}t^{2}+\frac{\rho^{2}}{\Delta}\mathrm{d}r^{2}-\frac{4Mr}{\rho^{2}}a\sin^{2}\theta\mathrm{d}\phi\mathrm{d}t\\ &+\rho^{2}\mathrm{d}\theta^{2}+\bigg{[}(r^{2}+a^{2})\sin^{2}\theta+\frac{2Mr}{\rho^{2}}a^{2}\sin^{4}\theta\bigg{]}\mathrm{d}\phi^{2}, \end{split} \tag{2}\]

where \(a=J/M\) is the spin angular momentum normalized by the mass of the ECO, \(\rho^{2}\equiv r^{2}+a^{2}\cos^{2}\theta\), and \(\Delta\equiv r^{2}-2Mr+a^{2}\). For a Kerr BH, the event horizon and the Cauchy horizon are located at \(r_{+}=M+\sqrt{M^{2}-a^{2}}\) and \(r_{-}=M-\sqrt{M^{2}-a^{2}}\) respectively; \(r_{+}\) is normally taken to be the inner boundary in the study of superradiant instabilities, with a purely ingoing boundary condition [13; 21; 17]. However, for an ECO, we replace the event horizon with a reflective surface located at \(r_{0}=r_{+}(1+\epsilon)\), where \(\epsilon\ll 1\). This surface reflects a portion of the ingoing wave, parameterized by the reflectivity \(\mathcal{K}\).

In a curved spacetime, a test scalar field with mass \(\mu\) satisfies the Klein-Gordon equation

\[(\nabla^{\nu}\nabla_{\nu}-\mu^{2})\Psi=0. \tag{3}\]

To study characteristic modes of the perturbation field, we separate variables as follows

\[\Psi(t,r,\theta,\phi)=\mathrm{e}^{-i\omega t}\mathrm{e}^{im\phi}R_{lm}(r)S_{lm}(\theta), \tag{4}\]

where \(\omega\in\mathbb{C}\) is the complex eigenfrequency, and the integers \(l\geq 0,\ m\in[-l,\ l]\) are the angular and magnetic quantum numbers, respectively.
Expanding the Klein-Gordon equation in a Kerr metric background, we obtain the following equations of motion

\[\frac{\mathrm{d}}{\mathrm{d}r}\Big{(}\Delta\frac{\mathrm{d}R_{lm}}{\mathrm{d}r}\Big{)}+\bigg{[}\frac{\omega^{2}\big{(}r^{2}+a^{2}\big{)}^{2}-4Mam\omega r+m^{2}a^{2}}{\Delta}-(\omega^{2}a^{2}+\mu^{2}r^{2}+\Lambda_{lm})\bigg{]}R_{lm}(r)=0, \tag{5}\]
\[\frac{1}{\sin\theta}\frac{\mathrm{d}}{\mathrm{d}\theta}\Big{(}\sin\theta\frac{\mathrm{d}S_{lm}}{\mathrm{d}\theta}\Big{)}+\bigg{[}a^{2}(\omega^{2}-\mu^{2})\cos^{2}\theta-\frac{m^{2}}{\sin^{2}\theta}+\Lambda_{lm}\bigg{]}S_{lm}(\theta)=0, \tag{6}\]

where \(\omega\) and \(\Lambda_{lm}\) are the eigenvalues to be solved for. The eigenfunctions \(S_{lm}\) of Eq. (6) are a series of spheroidal harmonics labelled by \(l\) and \(m\), with the eigenvalues \(\Lambda_{lm}=l(l+1)+\mathcal{O}\left[a^{2}(\mu^{2}-\omega^{2})\right]\); see Ref. [63] for an analytical expansion of \(\Lambda_{lm}\) in terms of \(a\sqrt{\mu^{2}-\omega^{2}}\). Here we introduce a complex number \(l^{\prime}\) to denote \(\Lambda_{lm}\) as

\[\Lambda_{lm}=l^{\prime}(l^{\prime}+1). \tag{7}\]

When \(\omega\approx\mu\) and \(\alpha=\mu M\ll 1\) are satisfied, \(l^{\prime}\) is very close to the angular quantum number \(l\). Therefore, in Eq. (5), \(\Lambda_{lm}\) can be treated as a known number and only the eigenfrequency \(\omega\) needs to be found.

In order to find \(\omega\) from Eq. (5), we also need to impose appropriate boundary conditions. When \(r\to\infty\), we take the decaying solution of \(R_{lm}\) [17]

\[\lim_{r\to\infty}R_{lm}(r)\sim r^{-1+\left(2\omega^{2}-\mu^{2}\right)M/\kappa}\mathrm{e}^{-\kappa r}, \tag{8}\]

where

\[\kappa=\sqrt{\mu^{2}-\omega^{2}},\quad\mathrm{Re}\,\kappa>0. \tag{9}\]

When investigating the behaviour of \(R_{lm}\) near the inner boundary \(r_{0}\), it is useful to introduce the tortoise coordinate

\[\begin{split} r^{*}&=\int\frac{r^{2}+a^{2}}{\Delta}\mathrm{d}r\\ &=r+\frac{2Mr_{+}}{r_{+}-r_{-}}\ln|r-r_{+}|-\frac{2Mr_{-}}{r_{+}-r_{-}}\ln|r-r_{-}|,\end{split} \tag{10}\]

and define

\[Y=(r^{2}+a^{2})^{1/2}R_{lm}(r). \tag{11}\]

The location of the boundary can be expressed in the tortoise coordinate, \(r_{0}^{*}=r^{*}(r_{0})\). Then the radial equation (5) can be rewritten in the standard form of a wave equation \(\mathrm{d}^{2}Y/\mathrm{d}{r^{*}}^{2}+VY=0\), where the effective potential reads

\[\begin{split} V(r)=&-\frac{\Delta(2Mr^{3}+a^{2}r^{2}-4Ma^{2}r+a^{4})}{(r^{2}+a^{2})^{4}}\\ &-\frac{\Delta(\mu^{2}r^{2}+a^{2}\omega^{2}-2ma\omega+\Lambda_{lm})}{(r^{2}+a^{2})^{2}}+\left(\omega-\frac{ma}{r^{2}+a^{2}}\right)^{2}.\end{split} \tag{12}\]

Since \(r_{0}\approx r_{+}\), when \(r\to r_{0}\) we have \(\Delta\approx 0\), and therefore, in this limit, the two independent solutions of \(Y\) are \(\mathrm{e}^{\pm\mathrm{i}(\omega-\omega_{c})\Delta r^{*}}\), where \(\Delta r^{*}=r^{*}-r_{0}^{*}\) and

\[\omega_{c}=\frac{ma}{r_{+}^{2}+a^{2}}. \tag{13}\]

For an ECO, the wave function near \(r_{0}\) should be a superposition of ingoing and outgoing waves [64]

\[\lim_{\Delta r^{*}\to 0}Y\sim\mathrm{e}^{-\mathrm{i}(\omega-\omega_{c})\Delta r^{*}}+\mathcal{K}\mathrm{e}^{\mathrm{i}(\omega-\omega_{c})\Delta r^{*}}, \tag{14}\]

where \(\mathcal{K}\) is the boundary reflectivity; \(|\mathcal{K}|\) denotes the proportion of the incident wave reflected at \(r_{0}\), and \(\mathrm{arg}(\mathcal{K})\) is the phase shift. Note that the inner boundary condition (14) in our treatment differs from Eq. (27) in Ref. [61].
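Before turning to the detailed comparison with Ref. [61] in the next paragraph, we note that these expressions are easy to check numerically. The Python sketch below (our own illustration, not from the original text; parameter values are arbitrary) evaluates the tortoise coordinate (10) and verifies that \(V\to(\omega-\omega_{c})^{2}\) as \(r\to r_{0}\), which underlies the boundary condition (14).

```python
import numpy as np

M, a = 1.0, 0.9                       # ECO mass and spin (G = c = 1)
rp = M + np.sqrt(M**2 - a**2)         # would-be outer horizon r_+
rm = M - np.sqrt(M**2 - a**2)         # inner horizon r_-

def tortoise(r):
    """Tortoise coordinate r* of Eq. (10)."""
    return (r + 2*M*rp/(rp - rm)*np.log(np.abs(r - rp))
              - 2*M*rm/(rp - rm)*np.log(np.abs(r - rm)))

def potential(r, omega, mu, l, m):
    """Effective potential V(r) of Eq. (12), with Lambda_lm ~ l(l+1)."""
    Delta = r**2 - 2*M*r + a**2
    Lam = l*(l + 1)                   # leading-order eigenvalue, cf. Eq. (7)
    return (-Delta*(2*M*r**3 + a**2*r**2 - 4*M*a**2*r + a**4)/(r**2 + a**2)**4
            - Delta*(mu**2*r**2 + a**2*omega**2 - 2*m*a*omega + Lam)/(r**2 + a**2)**2
            + (omega - m*a/(r**2 + a**2))**2)

mu, l, m = 0.1, 1, 1
omega_c = m*a/(rp**2 + a**2)          # Eq. (13)
r0 = rp*(1 + 1e-5)                    # reflective surface, z0 ~ 1e-5
print(tortoise(r0))                   # large and negative: r* -> -inf as r -> r_+
print(potential(r0, mu, mu, l, m), (mu - omega_c)**2)   # nearly equal
```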
In the latter, they wrote \(\mathrm{e}^{-\mathrm{i}(\omega-\omega_{c})r^{*}}+\mathcal{R}(\omega)\mathrm{e}^{\mathrm{i}(\omega-\omega_{c})r^{*}}\) and the location \(r_{0}^{*}\) of the reflective surface is not explicitly specified. Their definition is related to ours by \(\mathcal{R}(\omega)=\mathcal{K}\,\mathrm{e}^{-2\mathrm{i}(\omega-\omega_{c})r_{0}^{*}}\). With our definition, the physical meaning of the reflectivity \(\mathcal{K}\) is clearer. At the reflective boundary, we have \(\Delta r^{*}=0\), and the amplitudes of the ingoing and outgoing waves at the reflective boundary are \(1\) and \(\mathcal{K}\) respectively, up to a common constant. This means that \(|\mathcal{K}|\) is simply the reflected proportion, and \(|\mathcal{K}|=1\) represents a "perfect reflection", up to a phase shift if \(\mathcal{K}\) is complex.

Finally, the radial equation (5), together with Eq. (8) and Eq. (14), defines an eigenvalue problem. This will be solved in the following, where we will compute the complex eigenfrequencies \(\omega=\omega_{R}+\mathrm{i}\omega_{I}\); the real part \(\omega_{R}\) denotes the energy level of the bosonic cloud and the imaginary part \(\omega_{I}\) represents the growth rate of the superradiant instability.

## III Growth rate of superradiant instability

### Analytic method

In the "non-relativistic" regime \(\alpha=\mu M\ll 1\), we have \(\omega\approx\mu\) and can solve Eq. (5) using matched asymptotic expansions. This approach, pioneered by Detweiler [13], has been used to study superradiant instabilities around BHs [14; 18; 21; 22] and was recently extended by Guo _et al._ [61] to account for a boundary reflection. Here we further develop it with an ECO boundary condition. This can be done by solving the radial equation in the "far" and "near" regions respectively, and matching the two solutions in the "overlap" region.

In the far region, where \(r\gg M\), to leading order in \(\alpha\), Eq. (5) can be written as

\[\frac{\mathrm{d}^{2}\left(rR\right)}{\mathrm{d}r^{2}}+\left[(\omega^{2}-\mu^{2})+\frac{2M\mu^{2}}{r}-\frac{l^{\prime}(l^{\prime}+1)}{r^{2}}\right](rR)=0, \tag{15}\]

where we have dropped the subscripts \((l,m)\) for notational simplicity. Following Detweiler [13], we define

\[\nu=M\mu^{2}/\kappa, \tag{16}\]

where \(\kappa\) was defined in Eq. (9). Then the solution to Eq. (15) with the decaying boundary condition at infinity reads

\[R_{lm}(r)=(2\kappa r)^{l}\mathrm{e}^{-\kappa r}U(l^{\prime}+1-\nu,2l^{\prime}+2;2\kappa r), \tag{17}\]

where \(U(a,b;x)\) is the Tricomi confluent hypergeometric function with respect to \(x\). If \(l^{\prime}+1-\nu=-n\) for a non-negative integer \(n\), \(U\) reduces to a polynomial, which, in quantum mechanics, corresponds to eigenstates of hydrogen atoms, with \(n=0,1,2\cdots\) being the radial quantum number. However, since the inner boundary condition is different from that of hydrogen atoms, we must introduce a small deviation \(\delta\nu\in\mathbb{C}\), via

\[\nu=l^{\prime}+n+1+\delta\nu, \tag{18}\]

and the eigenfrequency is

\[\omega=\mu\sqrt{1-\frac{\alpha^{2}}{\nu^{2}}}=\omega_{R}+\mathrm{i}\omega_{I}. \tag{19}\]

The latter equation follows directly from the definition of \(\nu\) in Eq. (16).

We now explore the solution of \(R(r)\) in the near region \(r\sim r_{0}\). Here we introduce a dimensionless distance

\[z\equiv\frac{r-r_{+}}{r_{+}-r_{-}}, \tag{20}\]

and the location of the inner boundary is

\[z_{0}\equiv\frac{r_{0}-r_{+}}{r_{+}-r_{-}}. \tag{21}\]

Then, to leading order in \(\alpha\), Eq.
(5) can be written as

\[z(z+1)\frac{\mathrm{d}}{\mathrm{d}z}\left[z(z+1)\frac{\mathrm{d}R}{\mathrm{d}z}\right]+V(z)R=0, \tag{22}\]

where

\[V(z)=p^{2}-l^{\prime}(l^{\prime}+1)z(z+1), \tag{23}\]
\[p=\frac{2Mr_{+}(\omega-\omega_{c})}{r_{+}-r_{-}}, \tag{24}\]

and \(\omega_{c}\) is defined in Eq. (13). Footnote 1: Our definition of \(p\) differs from \(P\) in Detweiler [13] by a minus sign.

The solution is

\[R_{\mathrm{near}}(z)=\left(\frac{z}{z+1}\right)^{-\mathrm{i}p}G(-l^{\prime},l^{\prime}+1;1+2\mathrm{i}p;z+1), \tag{25}\]

where \(G(a,b;c;x)\) is any solution to the hypergeometric equation with respect to \(x\). There are two independent solutions [65],

\[u_{3}=(-z)^{l^{\prime}}\,{}_{2}F_{1}(-l^{\prime},-l^{\prime}+2\mathrm{i}p;-2l^{\prime};-z^{-1}), \tag{26}\]
\[u_{4}=(-z)^{-l^{\prime}-1}\,{}_{2}F_{1}(l^{\prime}+1,l^{\prime}+1+2\mathrm{i}p;2l^{\prime}+2;-z^{-1}),\]

where \({}_{2}F_{1}\) is the hypergeometric function. Footnote 2: In Ref. [13] the term corresponding to \(2l^{\prime}+2\) is incorrectly written as \(2l+1\).

With these two solutions, \(R_{\mathrm{near}}\) can be written as

\[R_{\mathrm{near}}=\left(\frac{z}{1+z}\right)^{-\mathrm{i}p}(b_{3}u_{3}+b_{4}u_{4}). \tag{27}\]

The ratio of the coefficients \(b_{3},b_{4}\) is determined by the inner boundary condition (14). Details are provided in Appendix A, and the result is

\[\frac{b_{4}}{b_{3}}=-\frac{\Gamma(-2l^{\prime})\Gamma(l^{\prime}+1)}{\Gamma(-l^{\prime})\Gamma(2l^{\prime}+2)}\,\frac{\mathcal{K}z_{0}^{-2\mathrm{i}p}\Gamma(2\mathrm{i}p+1)\Gamma(l^{\prime}-2\mathrm{i}p+1)-\Gamma(1-2\mathrm{i}p)\Gamma(l^{\prime}+2\mathrm{i}p+1)}{\mathcal{K}z_{0}^{-2\mathrm{i}p}\Gamma(2\mathrm{i}p+1)\Gamma(-l^{\prime}-2\mathrm{i}p)-\Gamma(1-2\mathrm{i}p)\Gamma(2\mathrm{i}p-l^{\prime})}. \tag{28}\]

Let us note that, to obtain Eq. (22), besides \(\alpha\ll 1\), we have implicitly assumed two additional conditions by neglecting terms at higher orders in \(\alpha\), namely

\[\alpha^{2}(1-\omega_{c}/\omega)\ll l(l+1)\sqrt{1-a^{2}/M^{2}}, \tag{29}\]
\[z\ll\min\left(l^{2}/\alpha^{2},l/\alpha\right). \tag{30}\]

Footnote 3: These two conditions are found by comparing the dominant term \(p^{2}-\Lambda_{lm}z(z+1)\) with subdominant terms at \(\mathcal{O}(\alpha^{2})\) in the full expression of \(V(z)\), which was given in Ref. [22].

The former condition may not be satisfied even for \(\alpha\ll 1\) if the spin is extreme, \(a/M\sim 1\), requiring the inclusion of the next-to-leading order correction for highly spinning BHs [22]. Here we do not include this correction for simplicity. The latter condition gives the regime of validity of the near region solution (27).

The far region with \(r\gg M\) and the near region with \(z\ll\min\left(l^{2}/\alpha^{2},l/\alpha\right)\) have an overlap when \(\alpha\) is small enough, and thus the two solutions can be matched. First, one can expand the far region solution in the small-\(r\) limit, keeping only dominant terms:

\[R_{\mathrm{far}}(r)=(-1)^{n}\frac{\Gamma(2l^{\prime}+2+n)}{\Gamma(2l^{\prime}+2)}(2\kappa r)^{l^{\prime}}+(-1)^{n+1}\delta\nu\,\Gamma(2l^{\prime}+1)\Gamma(n+1)(2\kappa r)^{-l^{\prime}-1}, \tag{31}\]

where \(\kappa r\ll 1\) and \(|\delta\nu|\ll 1\). Also, the large-\(r\) limit of the near region solution is

\[R_{\mathrm{near}}(r)=b_{3}\left(\frac{-r}{r_{+}-r_{-}}\right)^{l^{\prime}}+b_{4}\left(\frac{-r}{r_{+}-r_{-}}\right)^{-l^{\prime}-1}. \tag{32}\]

The two expansions should be linearly dependent, which determines \(\delta\nu\) and \(\omega\). When doing the calculation, we make use of the fact that \(l^{\prime}\approx l\) so only \(l\) appears in the final results.
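For readers who want to evaluate Eq. (28) numerically, the gamma functions are conveniently handled with arbitrary-precision arithmetic. The sketch below is ours; it assumes the mpmath library, uses the \(l^{\prime}\to l\) limit of Eq. (33) derived just below for the prefactor, and all function names are our own.

```python
from mpmath import mp, mpf, gamma, factorial

mp.dps = 30   # extra digits help with gamma functions of complex arguments

def b4_over_b3(K, z0, p, l):
    """Ratio b4/b3 of Eq. (28); the prefactor uses the l' -> l limit, Eq. (33)."""
    ip2 = 2j*p
    phase = K*mpf(z0)**(-ip2)
    num = phase*gamma(ip2 + 1)*gamma(l - ip2 + 1) - gamma(1 - ip2)*gamma(l + ip2 + 1)
    den = phase*gamma(ip2 + 1)*gamma(-l - ip2) - gamma(1 - ip2)*gamma(ip2 - l)
    # Gamma(-2l')/Gamma(-l') -> (-1)^l Gamma(l+1)/(2 Gamma(2l+1))  [Eq. (33)]
    pref = -((-1)**l*factorial(l)/(2*factorial(2*l)))*gamma(l + 1)/gamma(2*l + 2)
    return pref*num/den

# Example: K = 0.5, z0 = 1e-5, p = -0.1 (superradiant modes have p < 0), l = 1
print(b4_over_b3(0.5, 1e-5, -0.1, 1))
```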
However, the factor \(\Gamma(-2l^{\prime})/\Gamma(-l^{\prime})\) needs to be treated with care. As in Ref. [22], we take the limit \(l^{\prime}\to l\), and it becomes

\[\lim_{l^{\prime}\to l}\frac{\Gamma(-2l^{\prime})}{\Gamma(-l^{\prime})}=\lim_{\epsilon\to 0}\frac{\Gamma(-2l-2\epsilon)}{\Gamma(-l-\epsilon)}=\frac{(-1)^{l}}{2}\frac{\Gamma(l+1)}{\Gamma(2l+1)}. \tag{33}\]

We also take into account that \(|\delta\nu|\ll 1\), which allows us to obtain \(\omega\) with a Taylor expansion. Finally, we obtain

\[M\omega_{R}=\alpha\left[1-\frac{\alpha^{2}}{2(l+n+1)^{2}}\right], \tag{34}\]
\[M\omega_{I}=g_{\mathcal{K}}\,\alpha^{4l+5}\left(\frac{ma}{2M}-\omega_{R}r_{+}\right)\frac{2^{4l+2}(2l+n+1)!}{(l+n+1)^{2l+4}\,n!}\left[\frac{l!}{(2l+1)!(2l)!}\right]^{2}\prod_{j=1}^{l}\left[j^{2}\!\left(1-\frac{a^{2}}{M^{2}}\right)+\left(2r_{+}\omega_{R}-\frac{ma}{M}\right)^{2}\right], \tag{35}\]

where we have defined

\[g_{\mathcal{K}}=\frac{1-\left|\mathcal{K}\right|^{2}}{1+\left|\mathcal{K}\right|^{2}+2\operatorname{Re}(A^{2}z_{0}^{-2\mathrm{i}p}\mathcal{K})/\left|A\right|^{2}}, \tag{36}\]

where \(A\equiv\prod_{j=1}^{l}(j-2\mathrm{i}p)\) with \(p\) defined in Eq. (24). Equation (36) is our key result.

The most important feature of our analytic result is that the growth rate \(M\omega_{I}\), when including a (partially) reflective boundary condition, differs from that of the BH case only by the factor \(g_{\mathcal{K}}\). This factor does not alter the superradiance condition (1), namely that when \(\omega_{R}<ma/2Mr_{+}\), the scalar field extracts energy and angular momentum from the ECO, growing exponentially. When \(\mathcal{K}=0\) we have \(g_{\mathcal{K}}=1\), and the growth rate recovers the BH case, which was found by Detweiler [13] (except for a \(1/2\) factor) and other studies (see e.g., Ref. [21]). Footnote 4: Our result (35), when \(g_{\mathcal{K}}=1\), turns out to be half of the growth rate obtained in Ref. [13]. This could be explained by a missing \(1/2\) factor that should have been on the right-hand side of Eq. (23) in Ref. [13], possibly stemming from an inappropriate treatment of \(\Gamma(-2l^{\prime})/\Gamma(-l^{\prime})\). This \(1/2\) factor is also discussed in Refs. [18; 22]. As a comparison, our result agrees with Eq. (2.32) of Ref. [21].

The factor \(g_{\mathcal{K}}\) hence represents the correction introduced by the boundary reflection. The denominator of this factor can be written as

\[1+\left|\mathcal{K}\right|^{2}+2\left|\mathcal{K}\right|\cos\varphi, \tag{37}\]

where \(\varphi=2\sum_{j=1}^{l}\arctan\left(-2p/j\right)-2p\ln z_{0}+\arg\mathcal{K}\). When \(p\) changes, the denominator oscillates between \(\left(1-\left|\mathcal{K}\right|\right)^{2}\) and \(\left(1+\left|\mathcal{K}\right|\right)^{2}\). Therefore, we have

\[\frac{1-\left|\mathcal{K}\right|}{1+\left|\mathcal{K}\right|}\leq g_{\mathcal{K}}\leq\frac{1+\left|\mathcal{K}\right|}{1-\left|\mathcal{K}\right|}. \tag{38}\]

The value of \(z_{0}\) influences the (quasi-)period of the oscillation. For example, when \(z_{0}\) is small enough that \(-2p\ln z_{0}\) is the dominant term in \(\varphi\), the change in \(p\) that accounts for a full oscillation cycle is \(\sim\pi/\left|\ln z_{0}\right|\). In this paper we will illustrate results obtained with \(z_{0}=10^{-5}\); smaller values of \(z_{0}\) result in denser oscillatory patterns. The physical meaning of \(g_{\mathcal{K}}\) will be discussed in Sec. IV.

### Semi-analytic method

The above method can yield an analytic result and clearly show how the boundary reflection changes the growth rate.
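Since Eq. (36) is the central result, it may help to see it in executable form. Below is a direct transcription in Python (our own sketch; NumPy assumed, function names ours), together with quick checks that \(\mathcal{K}=0\) recovers \(g_{\mathcal{K}}=1\) and that the bounds (38) hold.

```python
import numpy as np

def g_K(K, z0, p, l):
    """Correction factor g_K of Eq. (36); K may be complex, p is real here."""
    A = np.prod([jj - 2j*p for jj in range(1, l + 1)])   # A = prod_j (j - 2ip)
    denom = 1 + abs(K)**2 + 2*np.real(A**2*z0**(-2j*p)*K)/abs(A)**2
    return (1 - abs(K)**2)/denom

K, z0, l = 0.8, 1e-5, 1
ps = np.linspace(-0.5, -0.01, 1000)
vals = np.array([g_K(K, z0, p, l) for p in ps])
print(g_K(0.0, z0, -0.1, l))                                  # 1.0: BH limit
print((1 - K)/(1 + K) <= vals.min(), vals.max() <= (1 + K)/(1 - K))  # True True
```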
However, to get more accurate results, the following semi-analytic method may be adopted, which was also used in Ref. [5] and is similar to the method used in Ref. [66]. In the matching procedure presented above, only terms proportional to \(r^{l^{\prime}}\) and \(r^{-l^{\prime}-1}\) were considered. Therefore, a natural improvement to this scheme is to compute Eq. (17) and Eq. (27) numerically and match the two at a point \(r_{\mathrm{match}}\) in the overlapping region, via

\[\left(R_{\mathrm{near}}\frac{\mathrm{d}R_{\mathrm{far}}}{\mathrm{d}r}-R_{\mathrm{far}}\frac{\mathrm{d}R_{\mathrm{near}}}{\mathrm{d}r}\right)\Bigg{|}_{r=r_{\mathrm{match}}}=0. \tag{39}\]

Since Eq. (17) and Eq. (27) are both approximate solutions, one should find nonzero residuals after plugging them into the original radial equation (5). The point \(r_{\mathrm{match}}\) is chosen such that the relative residuals of the two solutions are equal or closest. This approach makes use of the analytic solutions, \(R_{\mathrm{far}}\) and \(R_{\mathrm{near}}\), but matches them numerically. Therefore, the method is semi-analytic.

In Fig. 1 we perform a comparison of the growth rates calculated with different methods. We first consider the case of \(\mathcal{K}=0\), which corresponds to the purely ingoing boundary condition for a BH. In this case, other than the analytic method and the semi-analytic method explained above, the growth rate \(\omega_{I}\) can also be calculated using the continued fraction method [17], which serves as a consistency check here. For our choice of parameters, the analytic results agree very well with the continued fraction results in the regime \(\alpha\ll 1\), but the discrepancy quickly increases for larger \(\alpha\). However, the semi-analytic method always yields a result close to that of the continued fraction method over the full range of \(\alpha\) considered, with a relative error of less than \(50\%\).

When \(\mathcal{K}\neq 0\), the continued fraction method by Dolan [17] cannot be used, because the inner boundary condition is no longer purely ingoing. In this case, in order to relate the semi-analytic results for ECOs to those for BHs, we also show a third curve in dashed purple in each of the last two panels of Fig. 1. This curve was obtained by multiplying the semi-analytic results for BHs by \(g_{\mathcal{K}}\). Strikingly, it agrees essentially perfectly with the semi-analytic results for ECOs, with a relative error of less than \(10^{-3}\). Therefore, even though the correction factor \(g_{\mathcal{K}}\) was obtained using the fully analytical approach in the regime \(\alpha\ll 1\), it turns out to also be applicable to the more accurate semi-analytic results, which are not limited to that regime.

In the last two panels of Fig. 1, the growth rate exhibits an oscillatory behavior, introduced by the boundary reflection. In order to see how the value of the reflectivity changes these oscillating patterns, in Fig. 2 we present the growth rate for different \(\mathcal{K}\), calculated using the semi-analytic method. It can be seen that for a larger \(|\mathcal{K}|\), the curve shows sharper peaks with deeper valleys in between. These new features can affect the time evolution of superradiant instabilities and the ultralight particle mass constraints. These will be investigated in Sec. V and Sec. VI; but before that, we examine the physical origin of the correction factor \(g_{\mathcal{K}}\) in the next section.

## IV Physical interpretation of \(g_{\mathcal{K}}\)

Here we try to understand Eq.
(35) and the correction factor \(g_{\mathcal{K}}\) by analyzing the energy-momentum conservation at the ECO's surface \(r=r_{0}\). A similar analysis for BHs can be found in Ref. [17]. A complex scalar field has a Lagrangian density \(\mathcal{L}=\frac{1}{2}(\partial^{\mu}\Psi^{*}\partial_{\mu}\Psi+\mu^{2}\Psi^{*}\Psi)\), and an energy-momentum tensor \(T^{\mu\nu}=\partial^{(\mu}\Psi^{*}\partial^{\nu)}\Psi-g^{\mu\nu}\mathcal{L}\). Following Dolan [17], we use the ingoing-Kerr coordinates \(\tilde{x}^{\mu}=(\tilde{t},r,\theta,\tilde{\phi})\), defined via

\[\tilde{t}=t+\alpha(r),\qquad\tilde{\phi}=\phi+\beta(r), \tag{40}\]

where

\[\begin{split}\alpha(r)&=\frac{2M}{r_{+}-r_{-}}\Big{(}r_{+}\ln|r-r_{+}|-r_{-}\ln|r-r_{-}|\Big{)},\\ \beta(r)&=\frac{a}{r_{+}-r_{-}}\ln\left|\frac{r-r_{+}}{r-r_{-}}\right|.\end{split} \tag{41}\]

Hereafter we add a tilde on top of quantities calculated in this coordinate system. The contravariant metric tensor is

\[\tilde{g}^{\mu\nu}=\frac{1}{\rho^{2}}\begin{pmatrix}-\rho^{2}-2Mr&2Mr&0&0\\ 2Mr&\Delta&0&a\\ 0&0&1&0\\ 0&a&0&1/\sin^{2}\theta\end{pmatrix}, \tag{42}\]

where \(\rho\) and \(\Delta\) are the same as in Eq. (2). Note that our convention differs from that of Dolan [17] by a minus sign. In this case, the Klein-Gordon equation is separable using

\[\widetilde{\Psi}(\tilde{t},r,\theta,\tilde{\phi})=\mathrm{e}^{-\mathrm{i}\omega\tilde{t}}\mathrm{e}^{\mathrm{i}m\tilde{\phi}}S_{lm}(\theta)\widetilde{R}_{lm}(r). \tag{43}\]

The spacetime has a Killing vector \(\partial_{\tilde{t}}\), and \(\widetilde{T}_{0}^{\ \mu}\) is the conserved energy flux. We consider the spacetime region \(V\) that describes a time slice of the external space, satisfying \(-\Delta\tilde{t}/2<\tilde{t}<\Delta\tilde{t}/2\), \(r>r_{0}\), \(0\leq\theta\leq\pi\) and \(0\leq\tilde{\phi}<2\pi\). Then the conservation law, \(\nabla_{\mu}\widetilde{T}_{0}^{\ \mu}=0\), together with Gauss's theorem, gives

\[\int_{\partial V}\widetilde{T}_{0}^{\ \mu}\tilde{n}_{\mu}\sqrt{|\tilde{g}|}\,\mathrm{d}^{3}\tilde{S}=0, \tag{44}\]

where \(\tilde{g}\equiv-\rho^{4}\sin^{2}\theta\) is the determinant of the covariant metric \(\tilde{g}_{\mu\nu}\), and \(\tilde{n}_{\nu}\) is the normal one-form of \(\partial V\). Here \(\tilde{n}_{\nu}=\pm\delta_{\nu}^{0}\) for the hypersurfaces \(\tilde{t}=\pm\Delta\tilde{t}/2\), and \(\tilde{n}_{\nu}=\delta_{\nu}^{1}\) for the hypersurface \(r=r_{0}\). The hypersurface at spatial infinity \(r\to\infty\) is not included because the energy flux is zero there.

Figure 1: Comparison of different methods in calculating the (dimensionless) superradiance growth rate \(M\omega_{I}\). Growth rates \(M\omega_{I}\) of the fundamental mode (\(l=m=1\) and \(n=0\)), calculated using the analytic method and the semi-analytic method, are shown for BHs (_upper_) and ECOs (_middle_ and _bottom_), with \(a/M=0.9\) for the first two panels and \(\mu M=0.25\) for the last. For the BH case, we also present the result calculated using the continued fraction method for comparison. For the ECO cases, additionally, we multiply the semi-analytic results for their corresponding BHs by \(g_{\mathcal{K}}\), and plot the resulting growth rates with dashed purple curves.

When \(\Delta\tilde{t}\to 0\), Eq.
(44) yields the energy conservation equation

\[\frac{\partial}{\partial\tilde{t}}\int\limits_{\text{3D}}-\widetilde{T}_{0}^{\phantom{0}0}\rho^{2}\sin\theta\,\mathrm{d}r\,\mathrm{d}\theta\,\mathrm{d}\tilde{\phi}=\int\limits_{\text{2D}}-\widetilde{T}_{0}^{\phantom{0}1}\rho^{2}\sin\theta\,\mathrm{d}\theta\,\mathrm{d}\tilde{\phi}, \tag{45}\]

where the "3D" integration is done in the external space (\(r>r_{0}\)), and the "2D" integration on the surface \(r=r_{0}\), both at the fixed time \(\tilde{t}=0\). In order to obtain the asymptotic behavior of \(\widetilde{R}_{lm}\), we compare Eq. (4) and Eq. (43), and find that the radial function in the ingoing-Kerr coordinates \(\widetilde{R}_{lm}\) and that in the Boyer-Lindquist coordinates \(R_{lm}\) are related by

\[\widetilde{R}_{lm}(r)=\mathrm{e}^{\mathrm{i}\omega\alpha(r)}\mathrm{e}^{-\mathrm{i}m\beta(r)}R_{lm}(r). \tag{46}\]

Therefore, when \(r\to r_{0}\), the radial function \(\widetilde{R}_{lm}\) behaves as

\[\widetilde{R}_{lm}\sim z^{\mathrm{i}p}R_{lm}\sim z^{\mathrm{i}p}\cdot C_{lm}\left[(z/z_{0})^{-\mathrm{i}p}+\mathcal{K}(z/z_{0})^{\mathrm{i}p}\right], \tag{47}\]

where \(C_{lm}\) is a constant. Since in our calculation, from Eq. (17) to the subsequent matching procedure, the absolute magnitude of the field is not specified, we call \(C_{lm}\) the "relative amplitude" of the field at the inner boundary. Footnote 5: Strictly speaking, the value of the radial function \(R_{lm}\) at \(r=r_{0}\) is \(C_{lm}(1+\mathcal{K})\). In most cases, \(|C_{lm}|^{2}\ll 1\). Using the transformed wave function \(\widetilde{\Psi}\) (43) and the asymptotic behavior of \(\widetilde{R}_{lm}\) (47), direct calculation yields

\[-\widetilde{T}_{0}^{\phantom{0}1}=\frac{\omega_{R}(ma-2Mr_{+}\omega_{R})}{\rho_{0}^{2}}\left|C_{lm}\right|^{2}\left(1-|\mathcal{K}|^{2}\right)|S(\theta)|^{2}, \tag{48}\]

which is the net energy flux going outwards at \(r_{0}\), defined in the ingoing-Kerr coordinates. Calculating the 2D integral in Eq. (45), we obtain

\[2\omega_{I}=\omega_{R}\Big{(}1-|\mathcal{K}|^{2}\Big{)}\frac{\left(ma-2Mr_{+}\omega_{R}\right)|C_{lm}|^{2}}{\int\limits_{\text{3D}}-\widetilde{T}_{0}^{\phantom{0}0}\rho^{2}\sin\theta\,\mathrm{d}r\,\mathrm{d}\theta\,\mathrm{d}\tilde{\phi}}, \tag{49}\]

where the \(2\omega_{I}\) factor arises because \(\widetilde{T}_{0}^{\phantom{0}0}\propto\mathrm{e}^{2\omega_{I}\tilde{t}}\). The integral in the denominator represents the total energy outside \(r_{0}\). As long as \(|C_{lm}|^{2}\ll 1\), the integral mainly depends on the far region solution \(R_{lm}(r)\) and therefore can be approximately evaluated using a hydrogenic wave function in a Newtonian potential. As a result, this integral has a very weak dependence on \(a/M\), \(z_{0}\), and \(\mathcal{K}\), and mainly depends on \(l\) and \(n\).

Equations (48) and (49) provide a way to understand the physical meaning of \(g_{\mathcal{K}}\) in Eq. (36). First, let us consider the overall suppression factor \((1-|\mathcal{K}|^{2})\). This factor also appears in Eq. (48) and its interpretation is straightforward: the energy flows carried by the ingoing and outgoing waves point in opposite directions, and thus the net flux is reduced when there is a (partially) reflecting surface. In particular, when \(|\mathcal{K}|=1\), the two achieve a balance, and thus there is no net energy flux across \(r_{0}\), leading to a zero growth/decay rate. Footnote 6: Since the denominator in Eq.
(36) ranges from \((1-|\mathcal{K}|)^{2}\) to \((1+|\mathcal{K}|)^{2}\), one may wonder what happens if \(|\mathcal{K}|=1\) and both the numerator and denominator are equal to zero. In this case, \(g_{\mathcal{K}}\to\infty\). However, once the spin \(a\) decreases due to extraction of the angular momentum, the denominator becomes nonzero and \(g_{\mathcal{K}}\) stays finite thereafter.

The denominator of \(g_{\mathcal{K}}\), which represents the oscillatory behavior of \(\omega_{I}\), is tightly related to the relative amplitude \(C_{lm}\). To demonstrate this, in the left panel of Fig. 3, we plot \(|C_{lm}|^{2}\) for \(\mathcal{K}=0.8\) (solid) and \(\mathcal{K}=0\) (dashed). For classical BHs, \(|C_{lm}|^{2}\) changes very slowly with \(a\). However, in the presence of a boundary reflection, the change becomes rapid. The oscillatory behaviour of \(|C_{lm}|^{2}\) directly leads to the oscillatory behaviour of \(\omega_{I}\) via Eq. (49), manifested as the oscillating denominator of \(g_{\mathcal{K}}\), as shown in the right panel of Fig. 3. The physical link between the relative amplitude and the growth rate is also straightforward: with a larger \(|C_{lm}|^{2}\), the scalar field extracts a larger energy flux (48) and thus grows faster.

Figure 2: Growth rate calculated using the semi-analytic method, shown as functions of (_left_) the scalar mass parameter \(\alpha\) and (_right_) the ECO's (dimensionless) spin \(a/M\). The inner boundary is located at \(z_{0}=10^{-5}\). For the left panel, we take \(a/M=0.9\) and plot the modes \((l,m,n)=(1,1,0)\), \((2,2,0)\), and \((3,3,0)\). For the right, we take \(\mu M=0.1\) and only the mode \((l,m,n)=(1,1,0)\) is shown.

To sum up, in the analytical expression of \(g_{\mathcal{K}}\) (36), the factor \(\left(1-|\mathcal{K}|^{2}\right)\) can be understood as the counteraction of outgoing and ingoing energy flows, and the oscillatory behavior of the denominator comes from the change in the scalar field's (relative) density \(|C_{lm}|^{2}\) at \(r=r_{0}\), which is proportional to the amount of energy extracted there.

## V Time evolution of superradiant instability

In this section, we consider the time evolution of superradiant instabilities and investigate how it is influenced by the boundary reflection. Let us start by reviewing the case of a Kerr BH. If initially there is a nonzero scalar field around a Kerr BH (for example, arising from quantum fluctuations), as long as the superradiance condition (1) is satisfied, the field will extract energy and angular momentum from the BH. In this case, more and more scalar particles are produced, the field grows exponentially, and a bosonic cloud around the BH is formed. For scalar fields, the time evolution of superradiant instabilities around Kerr BHs has been investigated using an adiabatic approximation in Ref. [26].

For the case of an ECO, the superradiance condition is not changed. Similar to its BH counterpart, a scalar field could grow and form a bosonic cloud when the superradiance condition is satisfied. However, since the growth rate is changed by the factor \(g_{\mathcal{K}}\), one anticipates some new features in the time evolution.

To begin with, we present the equations governing the adiabatic evolution of the instability. These equations are essentially the same as in the case of a Kerr BH. First, since the wave function \(\Psi\) grows as \(\sim\mathrm{e}^{\omega_{I}t}\), the number of particles in the bosonic cloud grows as \(\sim\mathrm{e}^{2\omega_{I}t}\).
Therefore, the superradiant energy extraction rate is

\[\dot{E}_{\mathrm{SR}}=2\omega_{I}M_{\mathrm{cl}}, \tag{50}\]

where \(M_{\mathrm{cl}}\) is the mass of the bosonic cloud. Since we are mostly interested in the case where gas accretion is much slower than the evolution of superradiant instabilities, we will not take possible accretion processes onto the central ECO into account. Footnote 7: See Ref. [26] where gas accretion is included in the evolution equations for BHs. Then the mass \(M\) and angular momentum \(J\) of the ECO change according to

\[\dot{M}=-\dot{E}_{\mathrm{SR}}, \tag{51}\]
\[\dot{J}=-\frac{m}{\omega_{R}}\dot{E}_{\mathrm{SR}}. \tag{52}\]

On the other hand, the mass of the bosonic cloud changes as

\[\dot{M}_{\mathrm{cl}}=\dot{E}_{\mathrm{SR}}-\dot{E}_{\mathrm{GW}}, \tag{53}\]

where \(\dot{E}_{\mathrm{GW}}\) is the energy flux carried away by GWs emitted by the cloud. In our study below, we only consider the fundamental mode, \(l=m=1\) and \(n=0\), for which we can adopt the GW energy flux obtained in Ref. [26],

\[\dot{E}_{\mathrm{GW}}=\frac{484+9\pi^{2}}{23040}\left(\frac{M_{\mathrm{cl}}^{2}}{M^{2}}\right)(M\mu)^{14}. \tag{54}\]

This equation, obtained for the case of BHs, remains a good approximation for ECOs. The reason is that the GW emission mostly comes from the far region where \(|\Psi|^{2}\) is large, and the far region wavefunctions in the BH and ECO cases are nearly the same.

Using the parameters listed in Table 1, we can calculate the time evolution of superradiant instabilities. Here the initial mass of the bosonic cloud is taken to be the mass of a single particle, \(M_{\mathrm{cl},0}=\mu\). The results are plotted in Fig. 4 for different values of the reflectivity and scalar particle mass. We found that the evolution can be roughly divided into three stages.

Figure 3: A comparison between \(|C_{lm}|^{2}\) and \(g_{\mathcal{K}}/(1-|\mathcal{K}|^{2})\). Solid lines are for an ECO with \(\mathcal{K}=0.8\) and \(z_{0}=10^{-5}\), while dashed lines are their counterparts for a BH (\(\mathcal{K}=0\)). We take \(\mu M=0.1\) and consider the fundamental mode \(l=m=1\) and \(n=0\). The relative amplitude \(C_{lm}\) is calculated using the analytic method.

**Steady growth of the scalar field:** In the very beginning, the mass of the cloud is so small (in fact for \(\mu=1\times 10^{-18}\,\mathrm{eV}\) and \(M_{0}=10^{7}\,M_{\odot}\), we have \(M_{\mathrm{cl},0}/M_{0}\sim 10^{-91}\)) that the superradiant extraction is negligible. Therefore, the mass and spin of the ECO are essentially unchanged for a long period, and consequently, the growth rate \(\omega_{I}\) stays steady.

**A spin-down phase of the ECO:** After about \(200\) e-folds, the cloud has acquired a non-negligible mass \(\sim 10^{-5}\,M_{0}\), and the evolution starts to be discernible in the figure. The cloud quickly extracts energy and angular momentum from the ECO, until it reaches the maximal mass \(\sim 0.1M_{0}\). This stage lasts for about \(10\) e-folds, during which the spin of the ECO drops quickly. Therefore, as implied in the right panel of Fig. 2, the growth rate \(\omega_{I}\) shows an oscillatory pattern with time. As an obvious example, the growth for the \(\mathcal{K}=0.9\) case in the upper panel of Fig. 4 is uneven during this stage.

**GW dissipation:** After the spin of the ECO drops to the superradiant critical value, GW emission overtakes the superradiant extraction, and therefore the cloud starts to dissipate gradually.
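As an aside, the adiabatic system (50)-(54) is simple to integrate in practice. The Python sketch below is our own illustration (NumPy and SciPy assumed); the initial data, \(\mathcal{K}\), \(z_{0}\), and the integration window are illustrative choices rather than values from the paper, and \(\omega_{R,I}\) are supplied by Eqs. (34)-(36) for the fundamental mode.

```python
import numpy as np
from scipy.integrate import solve_ivp

mu = 0.1             # scalar mass in units 1/M0, i.e. alpha_0 = mu*M0 = 0.1
K, z0 = 0.5, 1e-5    # illustrative ECO surface parameters

def rates(M, a):
    """omega_R and omega_I of Eqs. (34)-(36) for the l = m = 1, n = 0 mode."""
    alpha = mu*M
    rp = M + np.sqrt(M**2 - a**2)
    rm = M - np.sqrt(M**2 - a**2)
    wR = mu*(1 - alpha**2/8)                        # Eq. (34), l + n + 1 = 2
    p = 2*M*rp*(wR - a/(rp**2 + a**2))/(rp - rm)    # Eq. (24)
    A = 1 - 2j*p                                    # A = prod_j (j - 2ip), l = 1
    gK = ((1 - abs(K)**2)
          /(1 + abs(K)**2 + 2*np.real(A**2*z0**(-2j*p)*K)/abs(A)**2))
    wI = (gK*alpha**9*(a/(2*M) - wR*rp)/24
          *(1 - (a/M)**2 + (2*rp*wR - a/M)**2)/M)   # Eq. (35), l = m = 1, n = 0
    return wR, wI

def rhs(t, y):
    M, J, Mcl = y
    wR, wI = rates(M, J/M)
    E_SR = 2*wI*Mcl                                         # Eq. (50)
    E_GW = (484 + 9*np.pi**2)/23040*(Mcl/M)**2*(mu*M)**14   # Eq. (54)
    return [-E_SR, -E_SR/wR, E_SR - E_GW]                   # Eqs. (51)-(53), m = 1

y0 = [1.0, 0.9, 1e-8]     # M0 = 1, a0 = 0.9, small seed cloud (illustrative)
sol = solve_ivp(rhs, [0, 3e12], y0, method='LSODA', rtol=1e-8, atol=1e-14)
print("peak cloud mass:", sol.y[2].max())   # a few per cent of M0 here
```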
GWs emitted by the cloud are nearly monochromatic, with an angular frequency \(\sim 2\omega_{R}\) and a slowly decreasing amplitude [33; 36; 38].

In Fig. 4, we can see that for different \(\mathcal{K}\), the time the cloud takes to accumulate to its maximal mass can vary. This timescale is mainly determined by the first stage, during which the growth rate \(\omega_{I}\) is essentially a constant. The boundary reflection changes this timescale via the correction factor \(g_{\mathcal{K}}\) in \(\omega_{I}\). Since \(g_{\mathcal{K}}\) can be either larger or smaller than \(1\), this timescale, compared to the BH case, can be either shortened or prolonged by the boundary reflection, as seen in the upper (shortened) and lower (prolonged) panels of Fig. 4, respectively. The change in the growth timescale implies that the boundary reflection may affect constraints on the ultralight particle mass, which is the topic of the next section.

## VI Astrophysical constraints on the mass of ultralight bosons

In this section, we first review how superradiant instabilities can be used to constrain the mass of ultralight bosons, and then discuss the implications of the reflective boundary condition if the assumed BH is in fact an ECO.

### Ultralight particle mass constraints from BHs

Constraints on the mass of ultralight bosons have been imposed using BH superradiance, together with measurements of BH spins and masses [39; 5; 40; 33]. The basic idea is that, if there exists an ultralight particle with mass \(\mu\), BHs with high enough spins should suffer superradiant instabilities and spin down. This results in an exclusion region on the \(J\)-\(M\) plane (Regge plane), where \(J\) and \(M\) are respectively the angular momentum and mass of BHs. The location of this region is related to the mass of the particle \(\mu\). Therefore, values of \(\mu\) which create exclusion regions that are incompatible with existing BH \(J\)-\(M\) measurements should not be allowed.

\begin{table}
\begin{tabular}{c c}
\hline\hline
Parameter & Definition \\
\hline
\(M_{0}\) & Initial mass of the ECO \\
\(M_{\mathrm{cl},0}\) & Initial mass of the bosonic cloud \\
\(J_{0}\) & Initial spin of the ECO \\
\(\mu\) & Mass of the ultralight scalar particle \\
\(z_{0}\) & Location of the reflective boundary in Eq. (21) \\
\(\mathcal{K}\) & Reflectivity of the boundary surface \\
\hline
\end{tabular}
\end{table}
Table 1: Definition of parameters for the time evolution of superradiant instabilities.

To estimate the exclusion regions, one approach is to compare the characteristic timescale of the bosonic cloud evolution, \(\tau_{\mathrm{cloud}}\), with the characteristic timescale associated with the BH's astrophysical processes, \(\tau_{\mathrm{astro}}\). For example, for the case of binary BHs, Arvanitaki _et al._ [40] compared the superradiance saturation timescale, namely the time the cloud takes to accumulate to its maximal mass, with the binary merger timescale. For the case of X-ray binaries, Cardoso _et al._ [39] compared the instability timescale, \(1/\omega_{I}\), with the durations over which two sources show stable spin values. The typical astrophysical timescale is also often chosen to be the accretion timescale of the BH [5; 18; 34; 67]. If \(\tau_{\mathrm{cloud}}\ll\tau_{\mathrm{astro}}\), the cloud could extract energy and angular momentum effectively within an astrophysical timescale, substantially spinning down the BH. Another approach to find the exclusion regions is the Monte Carlo method [26].
Starting with a particular \((J,M)\) combination, one can calculate the evolution of the system and extract the final state of the BH at some specified time \(t_{F}\). Doing this for a sample of randomly chosen initial states, \((J_{i},M_{i})\), and plotting the final states, \((J_{f},M_{f})\), on the Regge plane, one finds that a particular region is hardly populated; see Fig. 3 in Ref. [26] for example.

### Extension to the ECO case and the role of boundary reflection

Previous mass constraints on ultralight particles were obtained under the assumption that the compact objects are BHs, with an event horizon as the inner boundary. It is therefore worth studying how a change in the boundary condition affects these mass constraints. Here we briefly discuss how the ECO boundary condition alters the \(J\)-\(M\) exclusion regions, as well as its implications for ultralight particle mass constraints.

For simplicity, we adopt the first approach, i.e. comparing timescales, to draw exclusion regions on the Regge plane. Here the astrophysical timescale is taken to be the accretion timescale, \(\tau_{\mathrm{Acc}}\), of the ECO. We assume that the ECO is accreting at a rate \(f_{\mathrm{Edd}}\dot{M}_{\mathrm{Edd}}\), where \(\dot{M}_{\mathrm{Edd}}\) is the Eddington accretion rate, which is related to the Eddington luminosity \(L_{\mathrm{Edd}}\) through the radiative efficiency \(\eta\), via \(\epsilon\dot{M}_{\mathrm{Edd}}c^{2}=L_{\mathrm{Edd}}=1.26\times 10^{31}(M/M_{\odot})\,\mathrm{J\,s^{-1}}\). The factor \(\epsilon=\eta/(1-\eta)\) arises because if a fraction \(\eta\) of the infalling mass is converted to radiation, the accreted fraction reduces to \(1-\eta\). Here we define the accretion timescale

\[\tau_{\mathrm{Acc}}\equiv\frac{M}{f_{\mathrm{Edd}}\dot{M}_{\mathrm{Edd}}}=\frac{4.5\times 10^{7}}{f_{\mathrm{Edd}}}\,\frac{\epsilon}{0.1}\ \mathrm{yr}, \tag{55}\]

where we shall typically take \(\epsilon=0.1\) [68; 69]. If we consider a supermassive ECO with mass \(M\sim 10^{6}\,M_{\odot}\), and take the initial mass of the cloud to be the mass of a single scalar particle \(\mu\sim 10^{-18}\) eV, it takes the cloud about \(\ln(0.1M/\mu)\sim 205\) e-folds to grow to \(0.1\,M\). Therefore, we define the fast-superradiance regime by

\[\left[205+\ln\left(\frac{M}{10^{6}M_{\odot}}\frac{10^{-18}\ \mathrm{eV}}{\mu}\right)\right]\tau_{\mathrm{SR}}<\tau_{\mathrm{Acc}}, \tag{56}\]

where the superradiance e-fold timescale is \(\tau_{\mathrm{SR}}\equiv 1/(2\omega_{I})\), since the cloud grows as \(M_{\mathrm{cl}}\propto\mathrm{e}^{2\omega_{I}t}\). In this regime, the cloud will extract energy and angular momentum effectively within the accretion timescale. Therefore, it gives an exclusion region on the Regge plane. BHs (\(\mathcal{K}=0\)) or ECOs (\(\mathcal{K}\neq 0\)) inside this region should spin down effectively and leave this region within \(\tau_{\mathrm{Acc}}\).

We calculated the growth rate \(\omega_{I}\) using the analytic method. Exclusion regions defined in Eq. (56) are plotted in Fig. 5 for different values of the boundary reflectivity \(\mathcal{K}\). It is clear that the boundary reflection alters the shape of this region and introduces small spiky features, which become more pronounced as \(|\mathcal{K}|\) increases. As the reflectivity approaches extremality, \(|\mathcal{K}|\to 1\), the spikes become sharper, while the bulk part (the part under the spikes, as illustrated in the figure for \(\mathcal{K}=0.99\)) shrinks inwards. Our results may have some implications for the usual method for constraining the scalar mass.
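To make the construction of Fig. 5 concrete, the condition (56) can be scanned over the Regge plane. The sketch below is our own; the unit conversions and helper names are ours, and the accretion parameters follow Eq. (55) with \(f_{\mathrm{Edd}}=0.1\) chosen only for illustration.

```python
import numpy as np

HBAR_EV_S = 6.582e-16     # hbar in eV*s
MSUN_S    = 4.925e-6      # G*M_sun/c^3 in seconds

def growth_rate(M_sun, chi, mu_ev, K, z0=1e-5):
    """omega_I in 1/s for the l = m = 1, n = 0 mode, from Eqs. (34)-(36)."""
    M = M_sun*MSUN_S                      # mass in seconds (G = c = 1)
    mu = mu_ev/HBAR_EV_S                  # eV -> rad/s
    alpha = mu*M
    rp = M*(1 + np.sqrt(1 - chi**2))
    rm = M*(1 - np.sqrt(1 - chi**2))
    wR = mu*(1 - alpha**2/8)
    p = 2*M*rp*(wR - chi*M/(rp**2 + chi**2*M**2))/(rp - rm)
    A = 1 - 2j*p
    gK = (1 - abs(K)**2)/(1 + abs(K)**2 + 2*np.real(A**2*z0**(-2j*p)*K)/abs(A)**2)
    brack = 1 - chi**2 + (2*rp*wR - chi)**2
    return gK*alpha**9*(chi/2 - wR*rp)*brack/(24*M)

def excluded(M_sun, chi, mu_ev, K, f_edd=0.1, eps=0.1):
    """Fast-superradiance condition of Eq. (56)."""
    tau_acc = 4.5e7/f_edd*(eps/0.1)*3.156e7          # Eq. (55), in seconds
    wI = growth_rate(M_sun, chi, mu_ev, K)
    if wI <= 0:
        return False
    nfold = 205 + np.log(M_sun/1e6) + np.log(1e-18/mu_ev)
    return nfold/(2*wI) < tau_acc                    # tau_SR = 1/(2 omega_I)

print(excluded(3e7, 0.9, 1e-18, K=0.0))    # True: inside the BH exclusion region
print(excluded(3e7, 0.9, 1e-18, K=0.99))   # g_K oscillations can flip marginal points
```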
When the reflectivity is not too large, the alteration to the exclusion region is insignificant. For example, in the \(\mathcal{K}=0.5\) case of Fig. 5, compared with the \(\mathcal{K}=0\) case, for every value of \(J/M^{2}\), the change in the value of \(M\) on the boundary line is within 0.1 dex, which can be comparable to, say, the current measurement errors. However, for \(\mathcal{K}=0.99\), the spikes are distinct, and the shrinkage of the bulk region reaches \(\sim 0.4\) dex. Therefore, we expect that a mildly reflective boundary, roughly \(|\mathcal{K}|\lesssim 0.5\), may not substantially influence the mass constraints of ultralight scalar particles, but an extreme value of reflectivity, say, \(|\mathcal{K}|\gtrsim 0.9\), could introduce distinct spiky structures to the exclusion region, with a considerable inward shrinkage of its bulk part.

Figure 5: Exclusion regions, as defined in Eq. (56), for different \(\mathcal{K}\). The red dashed line denotes where the superradiance condition (1) is saturated. The vertical axis extends up to \(J/M^{2}=0.975\), above which the small spikes are much more crowded. We have taken \(z_{0}=10^{-5}\) and \(\mu=10^{-18}\) eV.

## VII Conclusion

Exotic compact objects (ECOs) have been conceived as alternatives to BHs. ECOs do not possess an event horizon, and the inner boundary condition for scalar perturbations is different from that of BHs. In this paper, we computed the growth rate of superradiant instabilities assuming a modified inner boundary condition, parameterized by the location of a reflective surface, \(z_{0}\), and its reflectivity, \(\mathcal{K}\). We solved the eigenvalue problem analytically, using matched asymptotic expansions, and found an analytic expression for the growth rate \(\omega_{I}\). Our key result is that the growth rate of superradiant instabilities around an ECO can be related to the value in the BH case simply by a factor \(g_{\mathcal{K}}\), whose explicit expression is given in Eq. (36). For better accuracy, we also calculated the growth rate using a semi-analytic method. We found that the semi-analytic results in the ECO and BH cases can also be related by the same factor \(g_{\mathcal{K}}\), even though this factor was obtained using a purely analytic treatment. The factor \(g_{\mathcal{K}}\) therefore admits a clear physical interpretation, which we investigated: it can be related to the energy flux at the inner boundary.

Using an adiabatic approach, we also studied how the superradiant instability of such ECOs would evolve. We found that, starting from a single particle, the evolution can be divided into three stages, namely (i) steady growth of the scalar field, (ii) a spin-down phase of the ECO, and (iii) GW dissipation. The time it takes for the cloud to reach its maximal mass mainly depends on the duration of the first stage, and can be either shortened or prolonged by the boundary reflection.

Finally, we discussed the implications for astrophysical constraints on ultralight scalar fields. By comparing the timescales of the cloud evolution and gas accretion, we found the exclusion regions on the ECOs' Regge plane. Boundary reflection introduces spiky structures to the exclusion region, and the effect is more pronounced for larger reflectivities.
As long as the reflectivity is not too large, say \(|\mathcal{K}|\lesssim 0.5\), the alteration to the exclusion region may not substantially influence the mass constraints of ultralight scalars, but the effects of boundary reflection could be significant for large reflectivity, e.g., \(|\mathcal{K}|\gtrsim 0.9\).

At the end of this paper, we make a short comment on the ECO model we adopted. Our work is based on the model in which one truncates the Kerr spacetime at a radius \(r_{0}\) and puts a spherical reflective boundary there with an isotropic reflectivity \(\mathcal{K}\). Although widely used in the literature (as mentioned in the Introduction and references therein), this model is only a simplified one. More realistic models may consider deviations of the boundary shape from a sphere as well as anisotropic reflectivity, which is beyond the scope of this work and deserves future study.

###### Acknowledgements.

This work was supported by the National Natural Science Foundation of China (11991053, 12247128, 11975027), the National SKA Program of China (2020SKA0120300), the Max Planck Partner Group Program funded by the Max Planck Society, and the High-Performance Computing Platform of Peking University. L.Z. is supported by the Hui-Chun Chin and Tsung-Dao Lee Chinese Undergraduate Research Endowment (Chun-Tsung Endowment) at Peking University. R.B. acknowledges financial support provided by FCT - Fundação para a Ciência e a Tecnologia, I.P., under the Scientific Employment Stimulus - Individual Call - 2020.00470.CEECIND and under project No. 2022.01324.PTDC.

## Appendix A Determining the ratio \(b_{4}/b_{3}\)

When \(z\to 0\), we have

\[\left(\frac{z}{1+z}\right)^{-\mathrm{i}p}u_{3}\to f_{3}^{-}z^{-\mathrm{i}p}+f_{3}^{+}z^{\mathrm{i}p}, \tag{A1}\]
\[\left(\frac{z}{1+z}\right)^{-\mathrm{i}p}u_{4}\to f_{4}^{-}z^{-\mathrm{i}p}+f_{4}^{+}z^{\mathrm{i}p}, \tag{A2}\]

where

\[f_{3}^{-}=\frac{(-1)^{l}\Gamma(-2l^{\prime})\Gamma(2\mathrm{i}p)}{\Gamma(-l^{\prime})\Gamma(2\mathrm{i}p-l^{\prime})}, \tag{A3}\]
\[f_{3}^{+}=\frac{(-1)^{l}\Gamma(-2l^{\prime})\Gamma(-2\mathrm{i}p)}{\Gamma(-l^{\prime})\Gamma(-l^{\prime}-2\mathrm{i}p)}, \tag{A4}\]
\[f_{4}^{-}=\frac{(-1)^{-l-1}\Gamma(2l^{\prime}+2)\Gamma(2\mathrm{i}p)}{\Gamma(l^{\prime}+1)\Gamma(l^{\prime}+2\mathrm{i}p+1)}, \tag{A5}\]
\[f_{4}^{+}=\frac{(-1)^{-l-1}\Gamma(2l^{\prime}+2)\Gamma(-2\mathrm{i}p)}{\Gamma(l^{\prime}+1)\Gamma(l^{\prime}-2\mathrm{i}p+1)}. \tag{A6}\]

The inner boundary condition (14) is equivalent to

\[\lim_{z\to z_{0}}R_{\mathrm{near}}\sim(z/z_{0})^{-\mathrm{i}p}+\mathcal{K}(z/z_{0})^{\mathrm{i}p}. \tag{A7}\]

Considering the boundary condition and the asymptotic behaviours of \(u_{3},u_{4}\), we can pin down the ratio \(b_{4}/b_{3}\) via

\[\frac{b_{3}f_{3}^{+}+b_{4}f_{4}^{+}}{b_{3}f_{3}^{-}+b_{4}f_{4}^{-}}=\mathcal{K}z_{0}^{-2\mathrm{i}p}, \tag{A8}\]

and the result is presented in Eq. (28) in the main text.
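As a consistency check of this appendix (our own sketch, not part of the paper; it assumes mpmath and reuses the \(l^{\prime}\to l\) limit of Eq. (33) for the \(\Gamma(-2l^{\prime})/\Gamma(-l^{\prime})\) ratio in \(f_{3}^{\pm}\)), one can solve Eq. (A8) for \(b_{4}/b_{3}\) directly and compare with the closed form of Eq. (28).

```python
from mpmath import mp, mpf, gamma, factorial

mp.dps = 30

def f_coeffs(p, l):
    """f3±, f4± of Eqs. (A3)-(A6), with Gamma(-2l')/Gamma(-l') from Eq. (33)."""
    ip2 = 2j*p
    g_ratio = (-1)**l*factorial(l)/(2*factorial(2*l))   # Eq. (33)
    f3m = (-1)**l*g_ratio*gamma(ip2)/gamma(ip2 - l)
    f3p = (-1)**l*g_ratio*gamma(-ip2)/gamma(-l - ip2)
    f4m = (-1)**(-l - 1)*gamma(2*l + 2)*gamma(ip2)/(gamma(l + 1)*gamma(l + ip2 + 1))
    f4p = (-1)**(-l - 1)*gamma(2*l + 2)*gamma(-ip2)/(gamma(l + 1)*gamma(l - ip2 + 1))
    return f3m, f3p, f4m, f4p

def ratio_from_A8(K, z0, p, l):
    """Solve Eq. (A8) for b4/b3."""
    f3m, f3p, f4m, f4p = f_coeffs(p, l)
    rhs = K*mpf(z0)**(-2j*p)
    return -(f3p - rhs*f3m)/(f4p - rhs*f4m)

# Should agree with Eq. (28), up to the Gamma-function identities used there
print(ratio_from_A8(0.5, 1e-5, -0.1, 1))
```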
2307.13407
Probe thermometry with continuous measurements
Temperature estimation plays a vital role across natural sciences. A standard approach is provided by probe thermometry, where a probe is brought into contact with the sample and examined after a certain amount of time has passed. In many situations, however, continuously monitoring the probe may be preferred. Here, we consider a minimal model, where the probe is provided by a two-level system coupled to a thermal reservoir. Monitoring thermally activated transitions enables real-time estimation of temperature with increasing accuracy over time. Within this framework, we comprehensively investigate thermometry in both bosonic and fermionic environments, employing a Bayesian approach. Furthermore, we explore adaptive strategies and find a significant improvement in precision. Additionally, we examine the impact of noise and find that adaptive strategies may suffer more than non-adaptive ones for short observation times. While our main focus is on thermometry, our results are easily extended to the estimation of other environmental parameters, such as chemical potentials and transition rates.
Julia Boeyens, Björn Annby-Andersson, Pharnam Bakhshinezhad, Géraldine Haack, Martí Perarnau-Llobet, Stefan Nimmrichter, Patrick P. Potts, Mohammad Mehboudi
2023-07-25T11:00:02Z
http://arxiv.org/abs/2307.13407v1
# Probe thermometry with continuous measurements

###### Abstract

Temperature estimation plays a vital role across natural sciences. A standard approach is provided by probe thermometry, where a probe is brought into contact with the sample and examined after a certain amount of time has passed. In many situations, however, continuously monitoring the probe may be preferred. Here, we consider a minimal model, where the probe is provided by a two-level system coupled to a thermal reservoir. Monitoring thermally activated transitions enables real-time estimation of temperature with increasing accuracy over time. Within this framework, we comprehensively investigate thermometry in both bosonic and fermionic environments, employing a Bayesian approach. Furthermore, we explore adaptive strategies and find a significant improvement in precision. Additionally, we examine the impact of noise and find that adaptive strategies may suffer more than non-adaptive ones for short observation times. While our main focus is on thermometry, our results are easily extended to the estimation of other environmental parameters, such as chemical potentials and transition rates.

## I Introduction

Temperature plays a prominent role in the study of physical systems, as it is arguably the most relevant state parameter that determines their behaviour. Therefore, thermometry is often a preliminary step in quantum experiments. However, it also introduces measurement disturbance [1] in exchange for precision. The theory of quantum metrology [2; 3; 4] identifies optimal precision-disturbance trade-offs in parameter estimation with quantum resources, and quantum thermometry is the branch specialised to temperature estimation [5; 6]. A common framework to study the precision limits in thermometry is to consider a probe coupled to the sample of interest at temperature \(T\). Over time, the probe gains information about the sample's temperature, which is later accessed through direct measurements on the probe. Relevant experimental realisations of probe thermometry include single-atom probes for ultracold gases [7; 8; 9], NV centres acting as thermometers of living cells [10; 11], and nanoscale electron calorimeters [12; 13; 14]. Theoretically, much progress has been achieved on characterising the fundamental precision limits of probe thermometry in frequentist and Bayesian approaches [15; 16; 17; 18; 19; 20; 21], the precision scaling at ultralow temperatures [22; 23; 24], the impact of strong coupling and correlations [25; 26; 27; 28; 29], measurement back action [30; 31], as well as enhanced sensing via non-equilibrium probes [32; 33; 34; 35; 36; 37; 38; 39; 40; 41].

While these works provide remarkable progress in our understanding of thermometry, they are based on the assumption that the probe is measured and subsequently reset or discarded. In this work, we depart from this paradigm and consider thermometry through continuous measurements of the probe, where information on the sample's temperature is continuously extracted while the probe is interacting with it. Total measurement time is thus the major resource here, and there is no hidden time cost for probe preparation, a cost that is often ignored in previous works. This scenario has to date received little attention; notable exceptions are provided by Refs. [42; 43].
Another reason to pursue this scenario is the remarkable experimental progress on continuous measurements, including charge measurements [44; 45; 46], homo- and heterodyne detection [47; 48], and magnetometry [49; 50; 51; 52]. With this aim, we consider a minimal model where a two-level probe weakly interacts with a thermal bath while being continuously monitored; see Fig. 1. Depending on the nature of the bath, either bosonic or fermionic, this can correspond to a superconducting qubit coupled to the electromagnetic environment or a quantum dot coupled to an electronic reservoir. We construct temperature estimators and characterize their estimation errors, employing a Bayesian approach [18; 19; 20; 21]. In the long-time limit (or large-data limit), the Fisher information, which can be given analytically, determines bounds on these errors. We discuss the saturability of these bounds via non-adaptive and adaptive measurement strategies, where the energy gap of the probe is tuned during the protocol via a suitable feedback. Finally, we characterize the robustness of our results to noisy measurements and a finite bandwidth of the detector, bringing our considerations closer to experimental platforms.

The paper is structured as follows: In Section II, we introduce a Markov jump process to describe the system trajectories subject to a bosonic or fermionic bath. In Section III, we discuss thermometry in the large-data limit and present an analytical expression for the Fisher information of the trajectories [Eq. (14)]. This can be used not only for thermometry, but also for estimating other parameters such as the thermalisation rate. Section IV is devoted to the Bayesian approach to thermometry. After defining a relative error quantifier and the optimal estimator that minimises it on average, we present the tight Bayesian Cramér-Rao bound in Eq. (30), which we then use to implement adaptive feedback on the probe to improve precision. In Section VI, we explore the impact of measurement noise on the quality of our continuously monitored thermometer. Finally, in Section VII, we conclude and discuss some future directions.

## II System and Trajectories

We consider a two-level system, such as a qubit or a quantum dot, coupled to a thermal bath (see Fig. 1). The two-level system provides the probe which is used to determine the temperature of the sample, provided by the thermal bath. Denoting the two states of the probe by \(0\) and \(1\), the system dynamics is described by the rate equation

\[\partial_{t}\begin{pmatrix}p_{0}(t)\\ p_{1}(t)\end{pmatrix}=\begin{pmatrix}-\Gamma_{\text{in}}&\Gamma_{\text{out}}\\ \Gamma_{\text{in}}&-\Gamma_{\text{out}}\end{pmatrix}\begin{pmatrix}p_{0}(t)\\ p_{1}(t)\end{pmatrix}. \tag{1}\]

Here, \(p_{j}(t)\) denotes the probability that the system is in state \(j\) and \(\Gamma_{\text{in}(\text{out})}\) denotes the rate of the transition \(0\to 1\) (\(1\to 0\)). It is straightforward to solve Eq. (1), making use of \(p_{0}(t)=1-p_{1}(t)\), resulting in

\[p_{1}(t)=e^{-(\Gamma_{\text{in}}+\Gamma_{\text{out}})t}\left[p_{1}-\frac{\Gamma_{\text{in}}}{\Gamma_{\text{in}}+\Gamma_{\text{out}}}\right]+\frac{\Gamma_{\text{in}}}{\Gamma_{\text{in}}+\Gamma_{\text{out}}}, \tag{2}\]

where \(p_{1}\equiv p_{1}(0)\). The rate equation (1) describes stochastic jumps between the states \(0\) and \(1\). Observing these jumps in real time provides information on temperature, because the jump rates are temperature dependent.
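Because the dynamics (1) is a two-state Markov jump process, trajectories are straightforward to sample: the waiting time in each state is exponentially distributed with the corresponding escape rate. The following Python sketch is our own illustration (not from the paper); it assumes the bosonic rates of Eq. (5) below with \(\kappa(\omega)=\kappa^{\prime}\omega\), and all names are ours. The simulated trajectory is reused in a later sketch.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def simulate(T, omega, kappa_p, tau, n0=0):
    """Sample a telegraph trajectory of Eq. (1) with the bosonic rates (5).

    kappa_p is kappa' in kappa(omega) = kappa'*omega; k_B = 1 throughout.
    Returns jump times (including t = 0) and the state after each jump.
    """
    nB = 1/np.expm1(omega/T)                 # Bose-Einstein occupation
    G_in, G_out = nB*kappa_p*omega, (nB + 1)*kappa_p*omega
    t, n, times, states = 0.0, n0, [0.0], [n0]
    while True:
        rate = G_in if n == 0 else G_out     # single escape channel per state
        t += rng.exponential(1/rate)         # exponential waiting time
        if t > tau:
            break
        n = 1 - n
        times.append(t); states.append(n)
    return np.array(times), np.array(states)

times, states = simulate(T=1.0, omega=1.0, kappa_p=1.0, tau=100.0)
print(len(times) - 1, "jumps observed")
```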
Figure 1: Sketch of continuous thermometry. A two-level system interacts with a bath of unknown temperature \(T\) with coupling rates \(\Gamma_{\text{in}}\) and \(\Gamma_{\text{out}}\) into and out of the excited state. The energy gap \(\omega\) of the system can be changed to improve the performance of the thermometer. The state of the two-level system evolves on a telegraph-like trajectory (white line) and is monitored continuously by a weak measurement, which results in a noisy signal with finite bandwidth (blue).

We note that, while we consider temperature here, any parameter that the rates depend on may be estimated analogously, e.g., properties of the bath spectral density [53]. Observing the stochastic jumps in the time interval \([0,\tau]\) results in a trajectory \(\nu_{\tau}=\{n(t)|t\in[0,\tau]\}\), where \(n(t)\in\{0,1\}\) denotes the occupation of the state at time \(t\). The probability density to observe a trajectory is given by

\[\rho(\nu_{\tau}|T)=p_{n_{0}}\Gamma_{\text{in}}^{k}\Gamma_{\text{out}}^{l}e^{-\Gamma_{\text{in}}(\tau-\tau_{1})-\Gamma_{\text{out}}\tau_{1}}, \tag{3}\]

where \(p_{n_{0}}\) denotes the probability of the system being in state \(n_{0}\equiv n(0)\) at \(t=0\), \(\tau_{1}\) denotes the total time the system spends in state \(1\) along the trajectory, and \(k\) (\(l\)) denotes the number of jumps from state \(0\to 1\) (\(1\to 0\)) along the trajectory. In Appendix A we derive Eq. (3) and we show how \(\tau_{1}\), \(k\), and \(l\) may be obtained from \(n(t)\). We note that because \(t\) is a continuous variable, \(\rho(\nu_{\tau}|T)\) is a probability density with a unit that depends on the number of jumps that occur along \(\nu_{\tau}\). The unitless probability to observe a trajectory where the jumps occur within time-windows of width \(dt\) is given by \(\rho(\nu_{\tau}|T)(dt)^{l+k}\).

While the above holds true for any pair of rates \(\Gamma_{\text{in}}\) and \(\Gamma_{\text{out}}\), we focus here on rates that describe the exchange of energy (and possibly particles) with a thermal bath. Such rates obey a detailed balance relation

\[\frac{\Gamma_{\text{in}}}{\Gamma_{\text{out}}}=e^{-\beta\omega}, \tag{4}\]

where \(\beta=1/(k_{B}T)\) is the inverse temperature, with \(k_{B}\) being the Boltzmann constant, which we set to \(1\) in this work, and \(\omega>0\) is the energy gap between states \(0\) and \(1\) (with \(0\) denoting the ground state). For a bath exchanging particles, we may set the chemical potential to zero without loss of generality and use the same relation. For concreteness, we consider two widely-used expressions for the rates. The first describes a bosonic thermal bath

\[\Gamma_{\text{in}}=n_{B}(\omega)\kappa(\omega),\hskip 14.226378pt\Gamma_{\text{out}}=[n_{B}(\omega)+1]\kappa(\omega), \tag{5}\]

where we introduced the Bose-Einstein distribution

\[n_{B}(\omega)=\frac{1}{e^{\beta\omega}-1}. \tag{6}\]

With these rates, the considered scenario corresponds, e.g., to a superconducting qubit coupled to an electromagnetic environment. We will mainly focus on an Ohmic spectral density with \(\omega\) well below the cut-off frequency, such that \(\kappa(\omega)=\kappa^{\prime}\omega\). The second expression we use for the rates describes a fermionic bath, corresponding, e.g., to a quantum dot weakly coupled to an electronic reservoir

\[\Gamma_{\text{in}}=n_{F}(\omega)\Gamma,\hskip 14.226378pt\Gamma_{\text{out}}=[1-n_{F}(\omega)]\Gamma, \tag{7}\]

where we introduced the Fermi-Dirac distribution

\[n_{F}(\omega)=\frac{1}{e^{\beta\omega}+1}. \tag{8}\]
\tag{8}\] For the fermionic rates, we consider a flat spectral density, such that \(\Gamma\) is independent of frequency. ## III Fisher information Let \(T^{*}\) denote the true temperature of the sample, which we want to infer by analysing the measured data. In thermometry, we want to build an estimator \(\tilde{T}(\nu_{\tau})\) that maps an observed trajectory \(\nu_{\tau}\) into the best estimate for the temperature. To this end, we need to quantify the accuracy of the estimate in terms of a suitable cost (or error) function. An appropriate choice that does not depend on the absolute scale of the true underlying temperature [54] is the relative square distance \[D_{\text{R,T^{*}}}[\tilde{T},\nu_{\tau}]\coloneqq\left(\frac{\tilde{T}(\nu_{\tau})-T^{*}}{T^{*}}\right)^{2}. \tag{9}\] Accordingly, we can quantify the overall performance of the temperature estimation protocol by averaging the relative square distance over all possible trajectories at a particular \(T^{*}\) \[D_{\text{R,T^{*}}}[\tilde{T}]\coloneqq\int d\nu_{\tau}\rho(\nu_{\tau}|T^{*})\left(\frac{\tilde{T}(\nu_{\tau})-T^{*}}{T^{*}}\right)^{2}. \tag{10}\] Note that this _true_ relative mean-square distance is not available in an actual experiment where \(T^{*}\) is unknown. In the frequentist approach to thermometry, one could instead quantify the uncertainty in the estimate of the temperature by, e.g., a confidence region [55]. The true relative distance is lower-bounded by the Cramer-Rao inequality [56; 57]. In particular, for any unbiased estimator \(\tilde{T}_{\text{u.b.}}\) that satisfies \[\int d\nu_{\tau}\rho(\nu_{\tau}|T^{*})\tilde{T}_{\text{u.b.}}(\nu_{\tau})=T^{*}, \tag{11}\] the true relative distance is lower bounded by \[D_{\text{R,T^{*}}}^{-1}[\tilde{T}_{\text{u.b.}}]\leqslant(T^{*})^{2}F[\rho(\nu_{\tau}|T^{*})]. \tag{12}\] Here, \(F[\rho(\nu_{\tau}|T)]\) is the Fisher information of the probability distribution \(\rho(\nu_{\tau}|T)\) with respect to temperature and reads \[F[\rho(\nu_{\tau}|T)]=\int d\nu_{\tau}\rho(\nu_{\tau}|T)[\partial_{T}\ln\rho(\nu_{\tau}|T)]^{2}. \tag{13}\] For the trajectories described in Eq. (3), we find the Fisher information (see Ref. [42] as well as App. B for a derivation) \[\begin{split}& F[\rho(\nu_{\tau}|T)]=F[p_{n_{0}}]+\left[p_{0}\frac{(\Gamma_{\text{in}}^{\prime})^{2}}{\Gamma_{\text{in}}}+p_{1}\frac{(\Gamma_{\text{out}}^{\prime})^{2}}{\Gamma_{\text{out}}}\right]\frac{1-e^{-(\Gamma_{\text{in}}+\Gamma_{\text{out}})\tau}}{\Gamma_{\text{in}}+\Gamma_{\text{out}}}\\ &+\frac{\Gamma_{\text{in}}\Gamma_{\text{out}}}{\Gamma_{\text{in}}+\Gamma_{\text{out}}}\left[\left(\frac{\Gamma_{\text{in}}^{\prime}}{\Gamma_{\text{in}}}\right)^{2}+\left(\frac{\Gamma_{\text{out}}^{\prime}}{\Gamma_{\text{out}}}\right)^{2}\right]\left[\tau-\frac{1-e^{-(\Gamma_{\text{in}}+\Gamma_{\text{out}})\tau}}{\Gamma_{\text{in}}+\Gamma_{\text{out}}}\right],\end{split} \tag{14}\] where the prime denotes a derivative with respect to temperature. We stress that while we focus on temperature here, Eq. (14) may be applied to any parameter encoded in the rates of a two-level system. In the long-time limit, we may drop all terms that do not grow with the total time of the trajectory \(\tau\), which results in \[F[\rho(\nu_{\tau}|T)]=\frac{\Gamma_{\text{in}}\Gamma_{\text{out}}}{\Gamma_{\text{in}}+\Gamma_{\text{out}}}\tau\left[\left(\frac{\Gamma_{\text{in}}^{\prime}}{\Gamma_{\text{in}}}\right)^{2}+\left(\frac{\Gamma_{\text{out}}^{\prime}}{\Gamma_{\text{out}}}\right)^{2}\right]. \tag{15}\] For a bosonic bath, c.f. Eq.
(5), this expression reduces to \[F[\rho(\nu_{\tau}|T)]=\kappa(\omega)\tau\frac{\omega^{2}}{8k_{B}^{2}T^{4}}\frac{\cosh(\beta\omega)}{\sinh^{3}(\beta\omega/2)\cosh(\beta\omega/2)}. \tag{16}\] For a fermionic bath, c.f. Eq. (7), we find \[F[\rho(\nu_{\tau}|T)]=\Gamma\tau\frac{\omega^{2}}{8k_{B}^{2}T^{4}}\frac{\cosh(\beta\omega)}{\cosh^{4}(\beta\omega/2)}. \tag{17}\] If the initial state is the steady state, its contribution to the Fisher information reads \[F[p_{n_{0}}]=\frac{(\Gamma_{\text{out}}^{\prime}\Gamma_{\text{in}}-\Gamma_{\text{out}}\Gamma_{\text{in}}^{\prime})^{2}}{\Gamma_{\text{out}}\Gamma_{\text{in}}(\Gamma_{\text{out}}+\Gamma_{\text{in}})^{2}}. \tag{18}\] For rates obeying detailed balance, the steady state corresponds to a thermal state with the Fisher information \[F[p_{n_{0}}]=\frac{\omega^{2}}{2k_{B}^{2}T^{4}}\frac{1}{1+\cosh\left(\beta\omega\right)}. \tag{19}\] A universally applicable and commonly employed frequentist estimator is the _maximum likelihood estimator_ (ML) [58]. It is defined as the temperature that maximises the probability (3) of the observed trajectory; for the bosonic rates (5), it can be given exactly as \[\tilde{T}_{\mathrm{ML}}(\nu_{\tau})\coloneqq\arg\max_{T}\,\left(\rho(\nu_{\tau}|T)\right)=\frac{\omega}{\log[(1+\tilde{n}_{B})/\tilde{n}_{B}]}, \tag{20}\] \[\tilde{n}_{B}\coloneqq\frac{k+l-\kappa(\omega)\tau+\sqrt{[k+l-\kappa(\omega)\tau]^{2}+4k\kappa(\omega)\tau}}{2\kappa(\omega)\tau}. \tag{21}\] In the large-data limit, the ML becomes unbiased and saturates the Cramer-Rao bound (12). While the Fisher information sets an ultimate bound on temperature estimation in the asymptotic limit, it can also serve as a means to improve the precision of a thermometer in the course of the measurement. In Sec. V below, we will design improved Bayesian estimation strategies based on the Fisher information. ## IV Bayesian Thermometry We now focus on the Bayesian approach to thermometry and specify our limited a priori knowledge about the temperature in the form of a prior probability density \(\rho(T)\geq 0\), \(\int dT\,\rho(T)=1\). Bayes' rule prescribes how to update our knowledge according to the observed trajectory [59] \[\rho(T)\mapsto\rho(T|\nu_{\tau})=\frac{\rho(\nu_{\tau}|T)\rho(T)}{\rho(\nu_{\tau})}, \tag{22}\] given the likelihood \(\rho(\nu_{\tau}|T)\) from Eq. (3) and the normalisation factor \(\rho(\nu_{\tau})\coloneqq\int dT\rho(\nu_{\tau}|T)\rho(T)\). The posterior distribution \(\rho(T|\nu_{\tau})\) determines our remaining uncertainty about the actual temperature value after observing \(\nu_{\tau}\). Specifically, we can quantify the uncertainty (error) of a temperature estimate \(\tilde{T}(\nu_{\tau})\) in a similar manner as before by taking the average of the relative square distance over the posterior \[\mathrm{D}_{\mathrm{R}}[\tilde{T},\nu_{\tau}]\coloneqq\int dT\rho(T|\nu_{\tau})\left(\frac{\tilde{T}(\nu_{\tau})-T}{T}\right)^{2}. \tag{23}\] This represents the _presumed_ relative error on temperature an experimentalist would report after recording the trajectory \(\nu_{\tau}\). We average the presumed relative error over all possible trajectories to get a trajectory-independent figure of merit for the quality of a given estimator \(\tilde{T}\) \[\mathrm{D}_{\mathrm{R}}[\tilde{T}]\coloneqq\int d\nu_{\tau}\rho(\nu_{\tau})\mathrm{D}_{\mathrm{R}}[\tilde{T},\nu_{\tau}]. \tag{24}\] Note that, even though the true temperature \(T^{*}\) does not explicitly appear in this expression, we do implicitly assume it is restricted by our specified prior \(\rho(T)\).
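To make the Bayesian update (22) concrete, here is a minimal grid-based sketch (our own illustration; the function name, grid size, and example statistics are arbitrary choices). It evaluates the log of the likelihood (3) for the bosonic rates (5) on a temperature grid, applies a flat prior, and returns the optimal estimator and the maximum-posterior estimate defined in Eqs. (26)-(27) just below:

```python
import numpy as np

def bayesian_estimates(k, l, tau_1, tau, omega, kappa_p=1.0,
                       T_min=0.1, T_max=10.0, n_grid=2000):
    """Grid-based Bayesian update for a flat prior on [T_min, T_max].
    (k, l, tau_1) are the sufficient statistics of the trajectory entering
    the likelihood (3); the initial-state factor p_{n_0} is omitted here."""
    T = np.linspace(T_min, T_max, n_grid)
    n_B = 1.0 / (np.exp(omega / T) - 1.0)             # Eq. (6), with k_B = 1
    G_in, G_out = n_B * kappa_p * omega, (n_B + 1.0) * kappa_p * omega
    log_L = (k * np.log(G_in) + l * np.log(G_out)
             - G_in * (tau - tau_1) - G_out * tau_1)  # log of Eq. (3)
    post = np.exp(log_L - log_L.max())                # flat prior: posterior ~ likelihood
    post /= post.sum()                                # normalise on the uniform grid
    T_R = (post / T).sum() / (post / T**2).sum()      # optimal estimator, Eq. (26)
    T_MP = T[np.argmax(post)]                         # maximum-posterior, Eq. (27)
    return T_R, T_MP

# Example statistics: 40 up-jumps, 60 down-jumps, 40% of tau spent in state 1.
print(bayesian_estimates(k=40, l=60, tau_1=40.0, tau=100.0, omega=2.5))
```

For these example statistics, both estimates land close to the maximum-likelihood value of Eqs. (20)-(21), \(\omega/\log[(1+\tilde{n}_{B})/\tilde{n}_{B}]\approx 1.4\), as expected in the large-data regime.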
In fact, one could arrive at the same expression (24) from a different perspective: Suppose an experimenter observes trajectories \(\nu_{\tau}\) at the fixed, but unknown true temperature \(T^{*}\). Each trajectory occurs with probability \(\rho(\nu_{\tau}|T^{*})\) and yields the estimate \(\tilde{T}(\nu_{\tau})\), the relative deviation of which from the true value is given by (9). Averaged over many repetitions, the relative deviation is (10)--unknown to the experimenter, of course. Then, averaging this also over temperatures drawn from the prior distribution, one finds \[\begin{split}\int dT^{*}\rho(T^{*})\mathrm{D}_{\mathrm{R},T^{*}}[\tilde{T}]&=\int dT^{*}\rho(T^{*})\int d\nu_{\tau}\rho(\nu_{\tau}|T^{*})\left(\frac{\tilde{T}(\nu_{\tau})-T^{*}}{T^{*}}\right)^{2}\\ &=\int d\nu_{\tau}\rho(\nu_{\tau})\int dT^{*}\rho(T^{*}|\nu_{\tau})\left(\frac{\tilde{T}(\nu_{\tau})-T^{*}}{T^{*}}\right)^{2}\\ &=\int d\nu_{\tau}\rho(\nu_{\tau})\,\mathrm{D}_{\mathrm{R}}[\tilde{T},\nu_{\tau}]\equiv\mathrm{D}_{\mathrm{R}}[\tilde{T}].\end{split} \tag{25}\] This quantity is a functional of the chosen estimator function \(\tilde{T}\), and by minimizing \(\mathrm{D}_{\mathrm{R}}[\tilde{T}]\), we obtain the associated optimal estimator as [20] \[\tilde{T}_{\mathrm{R}}(\nu_{\tau})\coloneqq\frac{\int dT\rho(T|\nu_{\tau})/T}{\int dT\rho(T|\nu_{\tau})/T^{2}}. \tag{26}\] Had we chosen a different cost function than (23) to quantify the uncertainty of the temperature estimate, we would have obtained a different optimal estimator [18; 54]. For example, we obtain a Bayesian estimator that is tightly related to the ML (and often easy to calculate, and more precise than the ML in the small-data limit) if we simply maximize the posterior \[\tilde{T}_{\mathrm{MP}}(\nu_{\tau})\coloneqq\arg\max_{T}\!\rho(T|\nu_{\tau}). \tag{27}\] For a prior that is flat over the full range of temperatures, this is equivalent to the ML. In contrast, when the temperature is bounded, \(\tilde{T}_{\mathrm{MP}}(\nu_{\tau})\) adjusts the ML such that it never estimates a temperature outside the prior domain. In our simulations, we work with a flat prior, \[\rho(T)=\begin{cases}[T_{\mathrm{max}}-T_{\mathrm{min}}]^{-1},&T_{\mathrm{min}}\leqslant T\leqslant T_{\mathrm{max}},\\ 0,&\mathrm{otherwise},\end{cases} \tag{28}\] for which the maximum-posterior estimator becomes \[\tilde{T}_{\mathrm{MP}}(\nu_{\tau})=\max\{\min\{\tilde{T}_{\mathrm{ML}}(\nu_{\tau}),T_{\mathrm{max}}\},T_{\mathrm{min}}\}. \tag{29}\] Our analysis and techniques will work equally for other choices of prior and cost function; see Appendix D for a comparison to relevant examples. The Cramer-Rao bound on the true relative square deviation (12) also bounds Bayesian figures of merit. In particular, if we insert an unbiased estimator into Eq. (25), we obtain the inequality \[\mathrm{D}_{\mathrm{R}}[\tilde{T}]=\int dT\rho(T)D_{\mathrm{R},T}[\tilde{T}] \stackrel{{\tilde{T}_{\mathrm{u.b.}}}}{{\geqslant}}\int\frac{dT\rho(T)}{T^{2}F[\rho(\nu_{\tau}|T)]}, \tag{30}\] see also [60; 61]. Although this bound (30)--sometimes referred to as the tight Bayesian Cramer-Rao bound--strictly holds only for unbiased estimators, we expect Bayesian estimators to respect it and to saturate it asymptotically, as their biases generally vanish in the limit of large data.
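As a numerical companion (again our own sketch, not from the paper), the snippet below evaluates the right-hand side of (30) using the long-time Fisher information (16)-(17) with \(k_{B}=\kappa^{\prime}=\Gamma=\tau=1\) and the flat prior on \([0.1\epsilon,10\epsilon]\), optimises it over the gap, and also finds the ratio \(x=\omega/T\) that maximises \(T^{2}F\), anticipating the non-adaptive optimisation (31) and the adaptive constants (33)-(34) of Section V; the printed values should reproduce the entries of Tables 1 and 2 below:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def scaled_t2F(x, bath):
    """T^2 F at fixed T as a function of x = omega/T, stripped of the
    prefactor (kappa'*tau*T for the bosonic bath, Gamma*tau for the
    fermionic one); derived from Eqs. (16)-(17) with kappa = kappa'*omega."""
    if bath == "bosonic":
        return x**3 * np.cosh(x) / (8 * np.sinh(x / 2)**3 * np.cosh(x / 2))
    return x**2 * np.cosh(x) / (8 * np.cosh(x / 2)**4)

def bound_rhs(omega, bath, T_min=0.1, T_max=10.0, n_grid=20000):
    """Right-hand side of Eq. (30) for the flat prior (28)."""
    T = np.linspace(T_min, T_max, n_grid)
    t2F = scaled_t2F(omega / T, bath) * (T if bath == "bosonic" else 1.0)
    return np.mean(1.0 / t2F)  # flat-prior average of [T^2 F]^{-1}

for bath in ("bosonic", "fermionic"):
    na = minimize_scalar(lambda w: bound_rhs(w, bath),
                         bounds=(0.05, 20.0), method="bounded")
    ad = minimize_scalar(lambda x: -scaled_t2F(x, bath),
                         bounds=(0.1, 20.0), method="bounded")
    print(bath, na.x, na.fun, ad.x, -ad.fun)
# expected: bosonic   ~0.4595, ~0.4133, x* ~2.4750 with T^2 F ~1.5430*T
#           fermionic ~1.5401, ~132.79, x* ~2.6672 with T^2 F ~0.3795
```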
For our system, a vanishing bias is ensured by the Bernstein-von Mises theorem for Markov processes [62; 63] (see also Appendix C for a simple derivation), which states that in the long-time limit, the posterior \(\rho(T|\nu_{\tau})\) takes the shape of a Gaussian around the actual temperature, with a variance that is equal to the inverse of the Fisher information. ## V Improving precision: adaptive vs non-adaptive strategies ### A non-adaptive strategy So far we have characterised the trajectories in Eq. (3) given that the thermometer was prepared in a state \(p_{n_{0}}\). For long trajectories, this initial state plays a vanishing role in the estimation error, whereas the value \(\omega\) of the thermometer's energy gap is crucial. In a non-adaptive strategy, we may still be free to tune the gap at the beginning, and it is indeed worth tuning it wisely. While a global optimisation which aims at minimising the error for a given total time \(\tau\) is complex and may require heavy numerical simulations, we can still design a strategy to tune the gap such that it is optimal at least for long enough times. Our proposed strategy is therefore to tune the gap to one that minimises the error in the long-time limit. That is, we aim at minimising the right-hand side of Eq. (30) \[\omega^{*}_{\mathrm{n.ad.}}\coloneqq\arg\min_{\omega}\int\frac{dT\rho(T)}{T^{2}F[\rho(\nu_{\tau}|T)]}. \tag{31}\] This optimal value only depends on the prior and other fixed parameters. In Fig. 2 we depict a Monte-Carlo simulation of the relative error for the optimal estimator [c.f. Eq. (26)] in this non-adaptive scenario. As one can see, the error asymptotically approaches the tight Cramer-Rao bound (30), i.e., this strategy is optimal in the long-time limit. ### A greedy adaptive strategy Here, we take a step further and assume that we have the freedom to change the probe's energy gap \(\omega\) during the measurement; that is, we can adaptively tune it. Once again, a global optimisation strategy may be costly, so we seek a simple alternative that is optimal in the long-time limit, which works as follows: Suppose that at time \(t\) we have observed a trajectory \(\nu_{t}\) and estimate the temperature to be \(\tilde{T}(\nu_{t})\). We tune the gap to the one that maximises \(\tilde{T}^{2}(\nu_{t})F[\rho(\nu_{\tau}|\tilde{T}(\nu_{t}))]\). Note that \(\nu_{\tau}\) is a variable that is integrated over when determining the Fisher information, see Eq. (13), not to be confused with the observed trajectory \(\nu_{t}\) which is used to build the estimator, see Eq. (15). That is, at any time \(t\) and after observing the trajectory \(\nu_{t}\) we tune the gap to \[\omega^{*}_{\mathrm{ad.}}(\nu_{t})\coloneqq\arg\max_{\omega}\ \tilde{T}^{2}(\nu_{t})F[\rho(\nu_{\tau}|\tilde{T}(\nu_{t}))]. \tag{32}\] Using the expressions for the bosonic and fermionic Fisher information, we have \[\omega^{*}_{\mathrm{ad.}}(\nu_{t})\approx 2.4750\ \tilde{T}(\nu_{t}),\ \ \text{bosonic}, \tag{33}\] \[\omega^{*}_{\mathrm{ad.}}(\nu_{t})\approx 2.6672\ \tilde{T}(\nu_{t}),\ \ \text{fermionic}. \tag{34}\] This strategy guarantees that the gap asymptotically converges to the fixed optimal value, since we expect the estimator to approach the true temperature at long times. The convergence to a fixed gap is important; even if our feedback is delayed, it eventually tunes the gap to the optimal one for the true temperature. In Fig.
2, we demonstrate how the adaptive strategy can outperform the non-adaptive scenario and saturate the asymptotic CRB at the optimal gap. Let us remark that the aforementioned simple strategies are motivated by asymptotic figures of merit. In the transient regime, however, alternative strategies might be more appropriate. ## VI Noisy measurements Many experimental platforms, e.g., semiconductor quantum dots [64], include noise in the measured trajectory, rather than yielding the ideal telegraph-like trajectory defined above Eq. (3). Additionally, it is common that the detector is limited by a finite bandwidth, introducing a delay in the readout. In this section, we develop a model for Bayesian parameter estimation including these effects, focusing on Gaussian measurement noise. The model is developed for temperature estimation using the two-level probe system defined in Eq. (1), but can be adapted to any other parameter and to more complicated architectures [65]. We use the model to simulate non-adaptive as well as adaptive temperature estimation. The noisy trajectory is defined as \(\nu_{\tau}=\{D_{t}|t\in[0,\tau]\}\), where \(D_{t}\) is the outcome of the detector at time \(t\). When the system resides in the ground (excited) state, the detector signal randomly fluctuates around \(0\) (1). The evolution of the system under such a measurement is \[\mathbf{P}_{t}(\nu_{t})=\mathsf{M}(D_{t}|D_{t-dt})e^{\mathsf{W}dt}\mathbf{P}_{t-dt}(\nu_{t-dt}), \tag{35}\] where \(\mathbf{P}_{\tau}(\nu_{\tau})=\left(p_{0}(\nu_{\tau}),p_{1}(\nu_{\tau})\right)^{\rm T}\) is a column vector with \(p_{j}(\nu_{\tau})\) the joint probability of occupying state \(j\in\{0,1\}\) at time \(\tau\) and observing \(\nu_{\tau}\). Here \(\mathsf{W}\) is the rate matrix in Eq. (1), determining the time-evolution of the probe system. The matrix \(\mathsf{M}(D|D^{\prime})\) describes the measurement of outcome \(D\), given that \(D^{\prime}\) was observed in the previous timestep. Following Ref. [65], we find \[\mathsf{M}(D|D^{\prime})=\sqrt{\frac{2\lambda}{\pi\gamma^{2}dt}} \tag{36}\] \[\begin{pmatrix}e^{-\frac{2\lambda}{\gamma^{2}dt}(D-D^{\prime}e^{-\gamma dt})^{2}}&0\\ 0&e^{-\frac{2\lambda}{\gamma^{2}dt}[D-(D^{\prime}e^{-\gamma dt}+\gamma dt)]^{2}}\end{pmatrix},\] where \(\lambda\) is the measurement strength and \(\gamma\) is the bandwidth of the detector. A strong measurement (\(\lambda\gg\gamma\)) reduces the noise of the detector, while a weak measurement (\(\lambda\ll\gamma\)) increases the noise. The bandwidth introduces a delay \(1/\gamma\) in the detector and substantially dampens all frequency components larger than \(\gamma\). For \(\lambda\gg\gamma\to\infty\), the noise and lag vanish, and we recover the telegraph signal with the trajectory given by Eq. (3). The likelihood of observing the trajectory \(\nu_{\tau}\), given the temperature \(T\), is calculated via the inner product \[\rho(\nu_{\tau}|T)=(1,1)\cdot\mathbf{P}_{\tau}(\nu_{\tau}). \tag{37}\] The vector on the right-hand side can be calculated iteratively according to Eq. (35) using an initial distribution \(\mathbf{P}_{0}=(p_{0},p_{1})^{\rm T}\). It is then used to update the current state of knowledge using Bayes' rule [see Eq. (22)]. The evolution of the state of knowledge with each increment in the measurement register can be equivalently formulated in the language of continuous signal filtering, resulting in the Kushner-Stratonovich equation; this is derived in Appendix E. In Fig.
3, we show that noisy measurements do not saturate the Cramer-Rao bound of Eq. (30), contrary to what we saw for ideal measurements. Nonetheless, our adaptive strategy--which is similar to the one we use for the ideal measurement--can reach the Cramer-Rao bound for the non-adaptive ideal measurement at long enough times. Unfortunately, we cannot simulate for arbitrarily long times, but the possibility for the adaptive noisy measurement to outperform the non-adaptive ideal measurement is not ruled out. Lastly, at short times the performance of the adaptive strategy is actually worse than it is for non-adaptive strategies. Again, note that \begin{table} \begin{tabular}{|c|c||c|c|c|} \hline & Strategy & The optimal gap & Optimal value for & \(\int_{T_{\rm min}}^{T_{\rm max}}dT\rho(T)\left[T^{2}F[\rho(\nu_{\tau}|T)]\right]^{-1}\) \\ & & Example: \(T_{\rm min}=0.1\)\(\epsilon\), \(T_{\rm max}=10\)\(\epsilon\) & \(T^{2}F[\rho(\nu_{\tau}|T)]\) & Example: \(T_{\rm min}=0.1\)\(\epsilon\), \(T_{\rm max}=10\)\(\epsilon\) \\ \hline \hline \multirow{2}{*}{1} & Non-adaptive with & \multirow{2}{*}{\(\approx 1.5401\)\(\epsilon\)} & \multirow{2}{*}{—} & \multirow{2}{*}{132.79 (\(\Gamma\)\(\tau\))\({}^{-1}\)} \\ & initially optimised gap & & & \\ \hline \hline \multirow{2}{*}{2} & Adaptive & \multirow{2}{*}{\(\approx 2.6672\)\(T\)} & \multirow{2}{*}{0.3795 \(\Gamma\)\(\tau\)} & \multirow{2}{*}{2.6350 (\(\Gamma\)\(\tau\))\({}^{-1}\)} \\ & asymptotically thermal & & & \\ \hline \end{tabular} \end{table} Table 2: Same as Table 1, but for a fermionic bath with flat spectral density (i.e., \(s=0\)). Evidently, the choice of strategy is more impactful than for the bosonic bath. Specifically, one can see that adaptive strategies can improve the asymptotic precision by more than an order of magnitude compared to the non-adaptive one. Note that this mainly stems from the choice of a flat spectral density, which is more appropriate for fermionic baths. Again, we express all the parameters in terms of \(\epsilon\), such that temperature \(T\), frequency \(\omega\), and the coupling \(\Gamma\) have the \(\epsilon\) dimension, while time \(\tau\) has the dimension \(\epsilon^{-1}\). For this table we have set \(T_{\rm min}=0.1\epsilon\), \(T_{\rm max}=10\epsilon\), and \(\Gamma=\epsilon=1\). \begin{table} \begin{tabular}{|c|c||c|c|c|} \hline & Strategy & The optimal gap & Optimal value for & \(\int_{T_{\rm min}}^{T_{\rm max}}dT\rho(T)\left[T^{2}F[\rho(\nu_{\tau}|T)]\right]^{-1}\) \\ & & Example: \(T_{\rm min}=0.1\)\(\epsilon\), \(T_{\rm max}=10\)\(\epsilon\) & \(T^{2}F[\rho(\nu_{\tau}|T)]\) & Example: \(T_{\rm min}=0.1\)\(\epsilon\), \(T_{\rm max}=10\)\(\epsilon\) \\ \hline \hline \multirow{2}{*}{1} & Non-adaptive with & \multirow{2}{*}{\(\approx 0.4595\)\(\epsilon\)} & \multirow{2}{*}{—} & \multirow{2}{*}{0.4133 (\(\kappa^{\prime}\epsilon\)\(\tau\))\({}^{-1}\)} \\ & initially optimised gap & & & \\ \hline \multirow{2}{*}{2} & Adaptive & \multirow{2}{*}{\(\approx 2.4750\)\(T\)} & \multirow{2}{*}{1.5430 \(\kappa^{\prime}\tau\)\(T\)} & \multirow{2}{*}{0.3015 (\(\kappa^{\prime}\epsilon\)\(\tau\))\({}^{-1}\)} \\ & asymptotically thermal & & & \\ \hline \end{tabular} \end{table} Table 1: The different strategies proposed in this work and their asymptotic precision for a bosonic bath (see Table 2 for a fermionic bath). (1) The non-adaptive strategy, in which the gap is chosen once and then left unchanged.
That is, we choose the gap that minimises \(\int\rho(T)dT\left[T^{2}F[\rho(\nu_{\tau}|T)]\right]^{-1}\) with \(F[\rho(\nu_{\tau}|T)]\) from Eq. (15). Once the gap is chosen at the beginning of the process, it will not be adaptively changed. One can see that for the parameter range considered here, this strategy performs reasonably well compared to the adaptive one. (2) A practical adaptive strategy that only controls the gap and leaves the state untouched. The gap is chosen such that in the asymptotic limit it is optimal for a thermal state (i.e., the gap that maximises \(F[\rho(\nu_{\tau}|T)]\) in Eq. (15)). For the parameters that we consider, the adaptive strategy (2) can outperform the non-adaptive strategy (1) by \(\left[\mathrm{D_{R}}[\tilde{T}_{\rm R}](\mathrm{n.~{}ad.})-\mathrm{D_{R}}[\tilde{T}_{\rm R}](\mathrm{ad.})\right]/\mathrm{D_{R}}[\tilde{T}_{\rm R}](\mathrm{ad.})\approx 40\%\) in the asymptotic limit. Here, temperature and energies are expressed in units of \(\epsilon\) while \(\tau\) has units of \(\epsilon^{-1}\). The coupling \(\kappa^{\prime}\) is dimensionless. In the examples we have set \(T_{\rm min}=0.1\epsilon\), \(T_{\rm max}=10\epsilon\), \(\epsilon=1\), and \(\kappa^{\prime}=1\). our adaptive strategy is not necessarily optimal for the noisy scenario; it is rather designed to be optimal for the ideal measurement and at long times. In Appendix F we further show that a significant estimation bias, which persists at low and high temperatures, is the reason behind the noisy measurements failing to saturate the Cramer-Rao bound. At low temperatures, this bias arises because there is not sufficient time for transitions to be observed, which results in the estimated temperature lying below the actual temperature. This low-temperature bias is not affected by the measurement strength. In contrast, at higher temperatures the temperature is overestimated. This is reduced when the measurement strength is increased. Additionally, the analysis shows that one of the ways the adaptive strategy improves the estimation is by reducing the bias. In particular, at low temperatures, being able to adjust the gap of the system allows more transitions to occur in the trajectory, leading to a better estimate. ## VII Conclusions & Outlook Our thermometry protocol based on a continuously monitored two-level system offers theoretical tools for temperature estimation (and the estimation of other environmental parameters) in quantum systems across various experimental settings dealing with both bosonic and fermionic environments. In particular, our results can be readily exploited in experimental scenarios, e.g., in temperature and chemical potential estimation using continuously monitored quantum dots. Our results can be related to and contrasted against several other studies in recent years. Indeed, continuous monitoring as a non-destructive method for interrogating quantum systems has found use in parameter estimation tasks, particularly in magnetometry [49; 50; 51; 52]. Theoretical works have also put forward proposals to surpass the standard quantum limit, and furthermore address the shortcomings in the presence of noise [66; 67; 68; 69]. On a more fundamental level, the ultimate Bayesian and frequentist bounds have been addressed [70; 71; 72; 73; 74]. These bounds cannot be straightforwardly adapted to our problem since they are either problem-specific or fundamental and thus too generic.
Furthermore, in our case, the parameter to be estimated is a property of the environment, which distinguishes our setting from magnetometry and other Hamiltonian estimation tasks. We were able to analytically characterise the trajectories--see Eq. (3)--which is crucial in finding the frequentist and Bayesian limits in estimation and designing improved non-adaptive and adaptive protocols. Figure 2: The relative error \(D_{\rm R}[\tilde{T}_{\rm R}]\) in thermometry of a bosonic bath as a function of time. Here we evaluate the relative error by a Monte Carlo simulation that approximates Eq. (24): we randomly sample a temperature from the prior and then randomly sample a trajectory at that temperature. We repeat this process 1000 times and take the average. The error is plotted for both the adaptive (solid blue) and the non-adaptive (solid black) scenarios. The CRB lines correspond to the r.h.s. of Eq. (30). At long times, both strategies approach this asymptotic bound, shown for the adaptive (dashed blue) and non-adaptive (dashed black) scenarios. Our simulations show that the adaptive strategies outperform the non-adaptive ones even at non-asymptotic times. In the simulations we choose the parameters according to Table 1, that is, \(T_{\rm min}=0.1\epsilon\), \(T_{\rm max}=10\epsilon\), \(\epsilon=1\), and \(\kappa^{\prime}=1\). In the adaptive strategy, the frequency changes in each time step. These graphs are obtained by simulating 1000 trajectories with temperatures randomly sampled from the prior distribution. Figure 3: Performance of the temperature estimation protocol when it is limited by a noisy trajectory with a finite bandwidth \(\gamma=10.0\). Here, the relative error is averaged over 1000 trajectories generated at randomly selected temperatures in the range \([0.1,10.0]\) with an initial gap \(\omega_{\rm n.~{}ad}^{*}\). It is plotted as a function of time and is compared to the adaptive and non-adaptive Cramer-Rao bounds. Increasing the measurement strength \(\lambda\) leads to better accuracy; however, the noise prevents the CRB from being reached. For a finite measurement strength, the adaptive strategy performs worse than the non-adaptive strategy at short times. The rest of the parameters are chosen as in Fig. 2, that is, \(T_{\rm min}=0.1\epsilon\), \(T_{\rm max}=10\epsilon\), \(\epsilon=1\), and \(\kappa^{\prime}=1\). Our investigation leaves several future directions. We considered purely classical dynamics, without any quantum coherence. In the presence of coherence, our findings should be revisited to find the optimal measurement and whether quantum correlations are beneficial. Furthermore, we considered a single-probe scenario. In the presence of multiple interacting (or many-body) probes, quantum correlations may be harnessed for thermometry--similar to the unitarily encoded magnetometry considered in Ref. [75]. Finally, our optimal protocols are based on optimising the Cramer-Rao bound. They are optimal in the limit of large data (long monitoring time). However, in the finite-data regime, one may be able to design better strategies. This could be done, at a massive computational cost, by minimising the relative distance error numerically. A smarter algorithm that requires less computational resources--at the cost of being sub-optimal--is desirable. ###### Acknowledgements. P.P.P., M.P.L., and G.H. acknowledge funding from the Swiss National Science Foundation (Eccellenza Professorial Fellowship PCEFP2_194268, Ambizione Grant No.
PZ00P2-186067, PRIMA Grant No. PR00P2_179748). G.H. also acknowledges support from the NCCR SwissMAP. M.M. acknowledges funding from the DFG/FWF Research Unit FOR 2724 'Thermal machines in the quantum world'. B.A.A. was supported by the Swedish Research Council, Grant No. 2018-03921. P.B. is supported by the European Research Council (Consolidator grant 'Cocoquest' 101043705) and by Grant No. FQXi-IAF19-07 from the Foundational Questions Institute Fund, a donor-advised fund of Silicon Valley Community Foundation.
2301.09909
Radar Sensing via OTFS Signaling: A Delay Doppler Signal Processing Perspective
The recently proposed orthogonal time frequency space (OTFS) modulation multiplexes data symbols in the delay-Doppler (DD) domain. Since the range and velocity, which can be derived from the delay and Doppler shifts, are the parameters of interest for radar sensing, it is natural to consider implementing DD signal processing for radar sensing. In this paper, we investigate the potential connections between the OTFS and DD domain radar signal processing. Our analysis shows that the range-Doppler matrix computing process in radar sensing is exactly the demodulation of OTFS with a rectangular pulse shaping filter. Furthermore, we propose a two-dimensional (2D) correlation-based algorithm to estimate the fractional delay and Doppler parameters for radar sensing. Simulation results show that the proposed algorithm can efficiently obtain the delay and Doppler shifts associated with multiple targets.
Kecheng Zhang, Weijie Yuan, Shuangyang Li, Fan Liu, Feifei Gao, Pingzhi Fan, Yunlong Cai
2023-01-24T10:40:09Z
http://arxiv.org/abs/2301.09909v2
# Radar Sensing via OTFS Signaling: A Delay Doppler Signal Processing Perspective ###### Abstract The recently proposed orthogonal time frequency space (OTFS) modulation multiplexes data symbols in the delay-Doppler (DD) domain. Since the range and velocity, which can be derived from the delay and Doppler shifts, are the parameters of interest for radar sensing, it is natural to consider implementing DD signal processing for radar sensing. In this paper, we investigate the potential connections between the OTFS and DD domain radar signal processing. Our analysis shows that the range-Doppler matrix computing process in radar sensing is exactly the demodulation of OTFS with a rectangular pulse shaping filter. Furthermore, we propose a two-dimensional (2D) correlation-based algorithm to estimate the fractional delay and Doppler parameters for radar sensing. Simulation results show that the proposed algorithm can efficiently obtain the delay and Doppler shifts associated with multiple targets. OTFS; delay-Doppler (DD) domain; Radar sensing; Fractional delay and Doppler ## I Introduction Future wireless systems, such as beyond 5G or 6G networks, are expected to support not only high-quality wireless communications but also highly accurate sensing services. It is widely acknowledged that sensing in the next-generation wireless network will become much more important than in currently deployed networks [1]. Most research focuses on orthogonal frequency division multiplexing (OFDM) for realizing integrated sensing and communication (ISAC) [2]. OFDM has various advantages [3], such as low detection complexity and high robustness for radar target detection. However, the implementation of OFDM could be challenging in practice [4]. For example, the high peak-to-average power ratio (PAPR) problem of OFDM reduces power efficiency, especially at high carrier frequencies. Moreover, in high-mobility environments, the orthogonality between subcarriers is destroyed due to the severe Doppler effect, which could degrade communication performance significantly. Besides, the channel response in a high-mobility communication scenario varies significantly across different coherence regions, so OFDM needs to estimate the channel more frequently to obtain accurate channel state information, leading to high signaling overhead. These problems motivate researchers to develop a new modulation technique that has robust communication performance in high-mobility environments. The recently proposed orthogonal time frequency space (OTFS) modulation has become a promising candidate for providing reliable communication under high-mobility multi-path scenarios [4, 5, 6]. Different from conventional time-frequency (TF) domain modulation schemes, the OTFS modulation scheme multiplexes information symbols over the two-dimensional (2D) delay-Doppler (DD) domain [4, 5], where the resolvable paths of the wireless channel are characterized by different delay and Doppler shifts. There are many benefits to describing the channel in the DD domain. For example, in high-mobility communication scenarios, the DD domain channel is generally sparse and quasi-static [6] compared to the fast time-varying channel in the TF domain. Meanwhile, OTFS enjoys a lower PAPR than OFDM [7], making it easier to implement with a more efficient power amplifier.
More importantly, the range and velocity parameters of the targets, which can be inferred from delay and Doppler shifts, are the primary parameters to be estimated in radar signal processing, making OTFS a natural choice for realizing ISAC. Some recent works have considered radar sensing via OTFS [8, 9]. In [8], the authors proposed a maximum likelihood (ML) algorithm to estimate range and velocity via the OTFS transmission scheme. It is shown that the OTFS signal can achieve the radar estimation performance bound while maintaining superior communication performance over OFDM. A matched-filter algorithm was proposed in [9] to estimate the range and velocity of targets, in which the structure of the effective OTFS channel is utilized to simplify the computation. It is shown that the estimation performance of target speed based on OTFS is better than that based on OFDM. However, the potential of radar sensing using OTFS has not been fully explored. For example, the complexity of the algorithm in [8] is nearly the cube of the OTFS frame size, which is relatively high. The work of [9] only considers integer delay and Doppler shifts. However, this could be highly impractical in real wireless networks, where the time resource is limited, resulting in insufficient Doppler resolution. Consequently, the presence of fractional Doppler is inevitable. Moreover, for accurate sensing performance, it is important to study radar sensing with insufficient frequency resources, despite the fact that the integer delay is generally sufficient for communication design. Thus, it is necessary to consider both the fractional delay and Doppler shifts for radar sensing. The aforementioned issues motivate us to develop a method to perform radar sensing via OTFS signaling in the presence of fractional delay and Doppler shifts. In this paper, we first show that the range-Doppler matrix computation in radar sensing and the OTFS demodulation process are intrinsically connected: they are exactly the same under a rectangular pulse shaping filter. Then, inspired by pulse compression (fast-time matched filtering) in radar sensing [10], we propose a two-step method to estimate the fractional delay and Doppler indices. In particular, the sensing receiver acquires the DD domain echo waves reflected by the targets and performs a 2D correlation between the received DD domain symbols and the transmitted information symbols. After obtaining the correlation matrix, a difference-based method, in which we take the difference between the indices of the second-largest and largest magnitudes of the correlation matrix, is implemented to calculate the fractional delay and Doppler indices. Simulation results show that the proposed algorithm can obtain the delay and Doppler shifts associated with multiple targets efficiently. _Notations_: \((\cdot)^{*}\) denotes the conjugate operation; \([\cdot]_{N}\) denotes the modulo operation with respect to (w.r.t.) \(N\); \(|\cdot|\) denotes the magnitude of a complex number; \(\delta(\cdot)\) is the Dirac delta function; the bold lowercase \(\mathbf{a}^{N}\) represents an \(N\)-dimensional vector, and the bold uppercase \(\mathbf{A}^{M\times N}\) represents an \(M\times N\) matrix; \(\mathbb{R}^{M\times N}\), \(\mathbb{Z}^{M\times N}\), and \(\mathbb{C}^{M\times N}\) are the spaces of \(M\times N\) matrices with real, integer, and complex entries, respectively.
## II System Model For each OTFS frame, the number of time slots and the number of sub-carriers are denoted by \(N\) and \(M\), respectively. The occupied bandwidth of one OTFS frame is \(M\Delta f\) with a duration \(NT\), where \(\Delta f\) represents the subcarrier spacing, and \(T\) is the symbol duration. The sequence of information bits is mapped to a symbol set \(\{X_{\text{DD}}[k,l],k=0,\ldots,N-1,l=0,\ldots,M-1\}\) in the DD domain, where \(l\) and \(k\) represent the indices of delay and Doppler shifts, respectively, and \(X_{\text{DD}}\in\mathbb{A}\), in which \(\mathbb{A}=\{\alpha_{1},\ldots,\alpha_{|\mathbb{A}|}\}\) is the modulation alphabet (e.g., QAM). The DD domain symbol \(X_{\text{DD}}[k,l]\) is transformed into the TF domain signal \(X_{\text{TF}}[n,m]\) through the inverse symplectic finite Fourier transform (ISFFT) [4], \[X_{\text{TF}}[n,m]=\frac{1}{\sqrt{NM}}\sum_{k=0}^{N-1}\sum_{l=0}^{M-1}X_{\text{DD}}[k,l]e^{j2\pi(\frac{nk}{N}-\frac{ml}{M})}. \tag{1}\] The TF domain modulator maps \(X_{\text{TF}}[n,m]\) to the time domain transmit signal \(s(t)\) via the Heisenberg transform [4], \[s(t)=\sum_{n=0}^{N-1}\sum_{m=0}^{M-1}X_{\text{TF}}[n,m]g_{tx}(t-nT)e^{j2\pi m\Delta f(t-nT)}, \tag{2}\] where \(g_{tx}(t)\) is the pulse shaping filter at the transmitter side. The signal \(s(t)\) propagates over a linear time-varying wireless channel and is reflected by the sensing targets, yielding the sensing echo \[r(t)=\int\int h(\tau,\nu)e^{j2\pi\nu(t-\tau)}s(t-\tau)\,d\tau\,d\nu+z(t), \tag{3}\] where \(z(t)\) denotes the additive white Gaussian noise (AWGN) process with one-sided power spectral density (PSD) \(N_{0}\), and \(h(\tau,\nu)\in\mathbb{C}\) is the complex base-band channel impulse response in the DD domain, which can be expressed as \[h(\tau,\nu)=\sum_{i=1}^{P}h_{i}\delta(\tau-\tau_{i})\delta(\nu-\nu_{i}), \tag{4}\] where \(P\in\mathbb{Z}\) is the number of targets in the sensing scenario, and \(h_{i}\), \(\tau_{i}\) and \(\nu_{i}\) denote the reflection coefficient, delay, and Doppler shift associated with the \(i\)th target, respectively. Assuming that the range and relative velocity associated with the \(i\)th target are \(R_{i}\) and \(V_{i}\), respectively, the round-trip delay \(\tau_{i}\in\mathbb{R}\) and the Doppler frequency \(\nu_{i}\in\mathbb{R}\) are expressed as \[\tau_{i}=\frac{2R_{i}}{c}=\frac{l_{\tau_{i}}}{M\Delta f},\,\nu_{i}=\frac{2f_{c}V_{i}}{c}=\frac{k_{\nu_{i}}}{NT}, \tag{5}\] where \(c\) is the speed of light and \(f_{c}\) is the carrier frequency, \(l_{\tau_{i}}=l_{i}+\iota_{i}\) and \(k_{\nu_{i}}=k_{i}+\kappa_{i}\) are the delay and Doppler indices of the \(i\)th target, \(l_{i}\in\mathbb{Z}\) and \(k_{i}\in\mathbb{Z}\) denote the integer parts of the indices of the \(i\)th target, while \(\iota_{i}\in[-0.5,0.5]\) and \(\kappa_{i}\in[-0.5,0.5]\) denote the fractional parts. At the OTFS receiver, the received time domain signal \(r(t)\) is first transformed into the TF domain via the Wigner transform [4], which is given by \[Y_{\text{TF}}[n,m]=\int_{-\infty}^{\infty}r(t)g_{rx}^{*}(t-nT)e^{-j2\pi m\Delta f(t-nT)}\,dt, \tag{6}\] where \(g_{rx}(t)\) is the receiver pulse shaping filter. Then the TF domain received signal \(Y_{\text{TF}}[n,m]\) is transformed into the DD domain through the SFFT, which can be expressed as \[Y_{\text{DD}}[k,l]=\frac{1}{\sqrt{NM}}\sum_{n=0}^{N-1}\sum_{m=0}^{M-1}Y_{\text{TF}}[n,m]e^{-j2\pi(\frac{nk}{N}-\frac{ml}{M})}.
\tag{7}\] According to [11], the DD domain input-output relationship in the delay-Doppler domain can be written as \[\begin{split} Y_{\text{DD}}[k,l]&=\sum_{i=1}^{P}h_{i}e^{j2\pi\frac{(l-l_{\tau_{i}})k_{\nu_{i}}}{MN}}\alpha(k-k_{\nu_{i}},l-l_{\tau_{i}})\\ &\times X_{\text{DD}}[[k-k_{\nu_{i}}]_{N},[l-l_{\tau_{i}}]_{M}]+Z_{\text{DD}}[k,l],\end{split} \tag{8}\] where \(Z_{\text{DD}}[k,l]\) denotes the effective noise in the DD domain that follows the Gaussian distribution \(\mathcal{CN}(0,\sigma^{2})\), and \(\alpha[k,l]\) is the phase offset given by \[\alpha[k,l]=\begin{cases}1,&l\geqslant 0,\\ e^{-j2\pi\frac{k}{N}},&l<0.\end{cases} \tag{9}\] ## III OTFS Based Radar Sensing In this section, we first show the intrinsic connection between the range-Doppler matrix computation in radar sensing and the OTFS demodulation. Then we propose a 2D correlation-based method to estimate the delay and Doppler indices. ### _Range-Doppler Matrix Computation via OTFS Demodulation_ In radar sensing, several pulses are needed to obtain the range/Doppler information. The samples within each transmitted pulse are indexed by the fast time, and the interval between successive pulses by the slow time [10]. Fig. 1 shows an example of how to calculate the range-Doppler matrix, where four pulses are used for target estimation. By rearranging the samples of the pulses, we get the fast-time slow-time matrix. The range-Doppler matrix can then be obtained by applying the discrete Fourier transform (DFT) along the slow-time axis. By denoting \(y_{\text{TD}}[n]=r(\frac{nT}{M})\), we can see that the fast-time slow-time matrix \(\mathbf{R}^{M\times N}\) is formed by assigning \(y_{\text{TD}}[m+nM]\) to the \((m,n)\)-th entry of \(\mathbf{R}\). Now, combining (6) and (7), the demodulation procedure can be expressed as \[Y_{\text{DD}}[k,l]=\frac{1}{\sqrt{N}}\sum_{n=0}^{N-1}y_{\text{TD}}[l+nM]e^{-j2\pi\frac{nk}{N}}, \tag{10}\] which is the discrete Zak transform (DZT) [4] under the energy-normalized rectangular matched filter. It can be observed that the DFT operation on each row of the 2D fast-time slow-time matrix is exactly the DZT in (10), which means the matrix \(\mathbf{Y}_{\text{DD}}\) is the same as the range-Doppler matrix in radar sensing if we choose the rectangular pulse shaping filter. ### _2D Correlation-based Parameter Estimator_ In this subsection, we describe the 2D correlation-based parameter estimator and the method for calculating the fractional delay and Doppler indices. According to the description in the previous subsection, the received DD domain symbol matrix \(\mathbf{Y}_{\text{DD}}\) is the same as the range-Doppler matrix. However, we cannot localize the targets of interest from \(\mathbf{Y}_{\text{DD}}\) directly, due to the presence of information symbols. After passing through the channel, the delay and Doppler bins of these DD domain symbols overlap with each other, which makes the received DD domain signals contain both the channel responses and the overlapped responses from the DD domain information symbols. Inspired by pulse compression in radar sensing, a 2D correlation-based estimator, which can be considered as pulse compression along both the delay and Doppler axes, is implemented to improve the acquisition of delay and Doppler parameters.
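To make this construction concrete, the following sketch is our own illustration, not part of the paper; the frame size and the single integer-index target are chosen for brevity. It implements the DZT of Eq. (10), the noiseless channel of Eq. (8), and the 2D correlation defined in Eq. (11) just below. For a target with integer indices, the phase terms in (11) exactly cancel the channel phases of (8) at the true lag, so \(|V|\) peaks at \((k_{i},l_{i})\):

```python
import numpy as np

def range_doppler_matrix(y_td, M, N):
    """Eq. (10): DZT of the time-domain samples under a rectangular receive
    filter. Reshape y_td into the fast-time slow-time matrix R (R[m, n] =
    y_td[m + n*M]) and apply a DFT along slow time n for every delay bin."""
    R = y_td.reshape(N, M).T
    return (np.fft.fft(R, axis=1) / np.sqrt(N)).T   # Y_DD[k, l], shape N x M

def apply_dd_channel(X_dd, h, k_v, l_t):
    """Noiseless Eq. (8) for a single target with integer indices (k_v, l_t)."""
    N, M = X_dd.shape
    n, m = np.meshgrid(np.arange(N), np.arange(M), indexing="ij")
    alpha = np.where(m - l_t >= 0, 1.0, np.exp(-2j * np.pi * (n - k_v) / N))
    phase = np.exp(2j * np.pi * (m - l_t) * k_v / (M * N))
    return h * phase * alpha * X_dd[(n - k_v) % N, (m - l_t) % M]

def correlate_2d(Y_dd, X_dd):
    """Eq. (11): DD-domain 'pulse compression', including the alpha terms of
    Eq. (9). Direct O((MN)^2) evaluation; fine for small frames."""
    N, M = X_dd.shape
    n, m = np.meshgrid(np.arange(N), np.arange(M), indexing="ij")
    V = np.zeros((N, M), dtype=complex)
    for k in range(N):
        for l in range(M):
            alpha = np.where(m - l >= 0, 1.0, np.exp(-2j * np.pi * (n - k) / N))
            V[k, l] = np.sum(np.conj(Y_dd) * X_dd[(n - k) % N, (m - l) % M]
                             * alpha * np.exp(2j * np.pi * (m - l) * k / (N * M)))
    return V

rng = np.random.default_rng(1)
N, M, k_true, l_true = 16, 16, 3, 5
X = (rng.choice([-1, 1], (N, M)) + 1j * rng.choice([-1, 1], (N, M))) / np.sqrt(2)
V = correlate_2d(apply_dd_channel(X, 1.0, k_true, l_true), X)
print(np.unravel_index(np.argmax(np.abs(V)), V.shape))   # -> (3, 5)
```

For fractional indices the cancellation is only approximate, producing the leakage to neighbouring bins that the fractional estimator below exploits.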
Denote the matrix after 2D pulse compression as \(\mathbf{V}\), then the accumulated correlation coefficient under different delay and Doppler indices can be expressed as \[V[k,l]=\sum_{n=0}^{N-1}\sum_{m=0}^{M-1}Y_{\text{DD}}^{*}[n,m]X_{\text{DD}}[[n-k]_{N},[m-l]_{M}]\\ \times\alpha[n-k,m-l]e^{j2\pi\frac{(m-l)k}{NM}}, \tag{11}\] where \(k\in[0,N-1]\) and \(l\in[0,M-1]\), and \(\alpha[k,l]\) is the phase offset given in (9). It should be noted that the 2D correlation cannot be considered as a simple combination of two one-dimensional correlation operations, since there is a phase term \(\alpha[n-k,m-l]\) in (11). To better illustrate the proposed method, we present an example of OTFS radar sensing in the noiseless scenario. In this example, we assume there are \(P=4\) targets, and the normalized quadrature phase shift keying (QPSK) information symbols are generated randomly. We set \(M=32\) and \(N=32\), and the delay and Doppler indices associated with the different targets are set as \([24.25,18.07,11.72,21.30]\) and \([2.58,3.93,2.04,-3.24]\), respectively. Fig. 1: An illustration for obtaining the fast-time slow-time matrix and the range-Doppler matrix. Fig. 2: The delay-Doppler matrices for radar sensing via the OTFS waveform before and after performing the 2D correlation, where \(P=4\) targets are considered and the delay and Doppler indices are of fractional values. The DD domain received symbol matrix under this specific scenario is represented in Fig. 2(a), and the matrix after pulse compression is shown in Fig. 2(b). We can see that the received symbol matrix \(\mathbf{Y}_{\text{DD}}\), i.e., the range-Doppler matrix in radar sensing, is dense due to the overlapped responses from each DD domain information symbol, as described in (8). After performing the 2D correlation operation, however, the target responses are more localized. This procedure can be viewed as a special pulse compression in the DD domain, which has a similar function to pulse compression in radar sensing, enhancing the acquisition of delay and Doppler responses. After obtaining the 2D correlation matrix, we can estimate the delay and Doppler indices by taking the matrix peaks. By finding the peaks of the range-Doppler matrix, we can only get the integer parts of the indices, while the fractional parts of the delay and Doppler indices remain unknown. This will lead to an inaccurate sensing result [12]. By observing the peaks in the 2D correlation matrix, we can see that there is power leakage from the corresponding delay-Doppler bins to their neighbors, as shown in Fig. 2(b). The power leakage is caused by the presence of fractional delay and Doppler indices. Inspired by this observation, we propose a simple difference-based method to estimate the fractional delay and Doppler indices, which is explained as follows. We first consider how to calculate the fractional part of the Doppler index under the noiseless scenario. Suppose that the delay and Doppler indices of one target are \(k_{\nu_{i}}=k_{i}+\kappa_{\nu_{i}}\) and \(l_{\tau_{i}}=l_{i}+\iota_{\tau_{i}}\), respectively. Meanwhile, assume that no two paths have the same delay.
Then we can obtain the row index \(k^{\prime}_{\nu_{1}}\) of the maximum magnitude and the row index \(k^{\prime}_{\nu_{2}}\) of the second-largest magnitude in the \(l_{i}\)th column of the delay-Doppler matrix after pulse compression, \[\begin{split} k^{\prime}_{\nu_{1}}&=\mathop{\arg\max}_{k\in\{\lceil-N/2\rceil,\ldots,\lceil N/2\rceil-1\}}|V[k,l_{i}]|,\\ k^{\prime}_{\nu_{2}}&=\mathop{\arg\max}_{k\in\{\lceil-N/2\rceil,\ldots,\lceil N/2\rceil-1\}\setminus\{k^{\prime}_{\nu_{1}}\}}|V[k,l_{i}]|.\end{split} \tag{12}\] Having \(k^{\prime}_{\nu_{1}}\) and \(k^{\prime}_{\nu_{2}}\) in hand, we have the following proposition. **Proposition 1**: _Under noiseless conditions, the actual Doppler index \(k_{i}+\kappa_{\nu_{i}}\) must fall into the interval bounded by \(k^{\prime}_{\nu_{1}}\) and \(k^{\prime}_{\nu_{2}}\), where \(|k^{\prime}_{\nu_{2}}-k^{\prime}_{\nu_{1}}|=1\). Then, the ratio between the magnitudes of the correlation coefficients with the same delay index and Doppler indices \(k^{\prime}_{\nu_{1}}\) and \(k^{\prime}_{\nu_{2}}\) can be approximated by_ \[\frac{|V[k^{\prime}_{\nu_{1}},l_{i}]|}{|V[k^{\prime}_{\nu_{2}},l_{i}]|}\approx\frac{|k^{\prime}_{\nu_{2}}-k^{\prime}_{\nu_{1}}-\kappa_{\nu_{i}}|}{|-\kappa_{\nu_{i}}|}, \tag{13}\] _where the approximation error is on the order of \(\mathcal{O}\left(\frac{1}{MN}\right)\)._ Proof: See Appendix A. Therefore, the fractional Doppler in (5) can be derived as \[\kappa_{i}=\frac{(k^{\prime}_{\nu_{2}}-k^{\prime}_{\nu_{1}})|V[k^{\prime}_{\nu_{2}},l_{i}]|}{|V[k^{\prime}_{\nu_{1}},l_{i}]|+|V[k^{\prime}_{\nu_{2}},l_{i}]|}. \tag{14}\] Similarly, by applying the above derivation to the fractional delay taps, we get \[\iota_{i}=\frac{(l^{\prime}_{\tau_{2}}-l^{\prime}_{\tau_{1}})|V[k_{i},l^{\prime}_{\tau_{2}}]|}{|V[k_{i},l^{\prime}_{\tau_{1}}]|+|V[k_{i},l^{\prime}_{\tau_{2}}]|}, \tag{15}\] where \(l^{\prime}_{\tau_{1}}\) and \(l^{\prime}_{\tau_{2}}\) are the column indices of the maximum and the second-largest magnitudes in the \(k_{i}\)th row of the 2D correlation matrix. The algorithm to estimate the fractional parts of the delay and Doppler indices is summarized in Algorithm 1, where \(\boldsymbol{k}=[k_{1},\ldots,k_{P}]\) and \(\boldsymbol{l}=[l_{1},\ldots,l_{P}]\) denote the integer parts of the Doppler and delay indices, respectively. This algorithm assumes that the number of targets \(P\) is known. A more realistic approach is to determine whether a target exists by setting a detection threshold for the 2D correlation matrix, which we leave for future work. Now we briefly discuss a possible solution for peak selection. As shown in Fig. 3, we first select one entry and compare its magnitude with those of its eight adjacent entries. If its magnitude is larger than all eight neighbors, this entry is considered a peak. Applying this method over the whole matrix, we obtain all the peaks in \(\mathbf{V}\). Then we pick the largest \(P\) peaks and take the corresponding row and column indices as the integer parts of the Doppler and delay indices, denoted as \(\boldsymbol{k}\) and \(\boldsymbol{l}\). By implementing Algorithm 1, we can estimate the delay and Doppler shifts associated with the off-grid targets; a compact sketch of this refinement step is given below. ## IV Numerical Results In this section, we investigate the estimation performance under various conditions through Monte Carlo simulations. All simulation results are averaged over \(10^{4}\) OTFS frames. We set \(M=128\) and \(N=64\) for each OTFS frame, which means there are 64 time slots and 128 subcarriers in the TF domain.
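Before turning to the results, here is a minimal sketch of the fractional refinement of Eqs. (14)-(15) as it would be applied to an integer peak of the 2D correlation matrix (our own illustration; it assumes, as guaranteed by Proposition 1 in the noiseless case, that the runner-up magnitude lies in a bin adjacent to the peak):

```python
import numpy as np

def refine_fractional(V, k_i, l_i):
    """Given an integer peak (k_i, l_i) of |V|, estimate the fractional
    Doppler and delay offsets via Eqs. (14)-(15). The runner-up bin is
    assumed adjacent to the peak, so k'_2 - k'_1 = +/-1 (Proposition 1)."""
    N, M = V.shape
    col, row = np.abs(V[:, l_i]), np.abs(V[k_i, :])
    # Doppler axis, Eqs. (12) and (14): pick the larger adjacent neighbour.
    s = 1 if col[(k_i + 1) % N] >= col[(k_i - 1) % N] else -1
    kappa = s * col[(k_i + s) % N] / (col[k_i] + col[(k_i + s) % N])
    # Delay axis, Eq. (15): same construction along the k_i-th row.
    s = 1 if row[(l_i + 1) % M] >= row[(l_i - 1) % M] else -1
    iota = s * row[(l_i + s) % M] / (row[l_i] + row[(l_i + s) % M])
    return k_i + kappa, l_i + iota
```

The refined indices are then mapped to range and velocity through Eq. (5).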
The information bits are generated randomly and mapped to QPSK symbols. The carrier frequency is set as 4 GHz with 15 kHz subcarrier spacing. The maximum speed of the mobile user is set to 500 km/h, for which the maximum Doppler frequency shift index is \(k_{\nu_{\max}}=8\), and the maximum delay index is \(l_{\tau_{\max}}=10\). Both the delay and Doppler indices of the \(i\)th path are drawn uniformly in the ranges \([-l_{\tau_{\max}},l_{\tau_{\max}}]\) and \([-k_{\nu_{\max}},k_{\nu_{\max}}]\), respectively. Fig. 3: An illustration of how to pick the peaks in \(\mathbf{V}\). In the simulations, we consider two scenarios with \(P=4\) and \(P=6\) targets and calculate the root-mean-square error (RMSE) of the estimated range and velocity versus the signal-to-noise ratio (SNR). The RMSE of the proposed parameter estimation method is compared with that of the method that only takes the integer parts of the delay and Doppler taps from the 2D correlation matrix. The simulation results are shown in Fig. 4(a) and Fig. 4(b). It is observed in Fig. 4(a) that when there are 4 targets, the RMSE of the estimated velocity is much lower if the fractional Doppler index is estimated through (14), which shows the effectiveness of the proposed algorithm. When there are 6 targets, the RMSE performance becomes worse compared to the scenario with 4 targets at the same SNR level. This is because the interference caused by the response overlaps from each symbol is more severe with 6 targets, which makes the approximation in (13) less accurate and thus degrades the velocity estimation. For the same reason, the RMSE of the estimated range, shown in Fig. 4(b), is smaller if there are 4 targets in the simulation scenario, and the estimation results are much worse if only the integer parts of the indices are considered. We can see that the RMSE of the range estimation hardly changes versus SNR. This is because the delay resolution is \(\frac{1}{M\Delta f}\approx 0.52\) microseconds, i.e., the corresponding range resolution is \(156\) meters. Under such an insufficient resolution, the range estimation RMSE will not change with respect to SNR. By increasing the number of subcarriers and time slots of the OTFS symbols, the maximum number of resolvable targets can be improved. Meanwhile, if we choose a higher carrier frequency and larger subcarrier spacing to sense the target, we can obtain better estimation performance, which is intuitive. ## V Conclusion This paper unveiled the intrinsic connection between the demodulation procedure of OTFS signaling and the range-Doppler matrix computation in radar sensing and applied a 2D correlation-based method to estimate the delay and Doppler indices. Since the delay and Doppler indices of the channel are usually fractional in the DD domain for off-grid targets, we proposed a difference-based method to estimate the fractional part of the parameters. Simulation results showed that the proposed algorithm can obtain the range and velocity estimates corresponding to the off-grid targets accurately. ## Appendix A Proof of Proposition 1 For ease of exposition, we only consider the ideal pulse shaping filter in the proof. As shown in [13], the non-zero entries in the DD domain effective channel are localized identically under both ideal and rectangular pulse shaping filters. The entries at the same location differ only by a particular phase offset. Therefore, our derived results can be extended straightforwardly to the case of rectangular pulses.
Let us rewrite the input-output relationship in (8) in a 2D convolution form [11], given by1 Footnote 1: For brevity, we will omit the lower and upper limits of the summation operator in what follows. In particular, the indices \(n\), \(n^{\prime}\), \(n^{\prime\prime}\), \(k\), \(k^{\prime}\), and \(k^{\prime\prime}\) are from \(0\) to \(N-1\) while \(m\), \(m^{\prime}\), \(m^{\prime\prime}\), \(l\), \(l^{\prime}\), and \(l^{\prime\prime}\) are from \(0\) to \(M-1\). \[Y_{\text{DD}}[k,l]=\sum_{k^{\prime},l^{\prime}}X_{\text{DD}}[k^{\prime},l^{\prime}]h_{\omega}[k-k^{\prime},l-l^{\prime}]+Z_{\text{DD}}[k,l], \tag{16}\] where the term \(h_{\omega}\) is the effective channel in the DD domain, whose expression can be found in (21) of [11]. By performing the 2D correlation via (16), we have \[V[k,l]=\sum_{n,m,k^{\prime},l^{\prime}}X_{\text{DD}}[n-k,m-l]X_{\text{DD}}^{*}[k^{\prime},l^{\prime}] \tag{17}\] \[\times h_{\omega}^{*}[n-k^{\prime},m-l^{\prime}]+\sum_{n,m}X_{\text{DD}}[n-k,m-l]Z_{\text{DD}}^{*}[n,m].\] Taking the expectation of \(V[k,l]\) yields \[\mathbb{E}[V[k,l]]=\sum_{n,m,k^{\prime},l^{\prime}}\mathbb{E}[X_{\text{DD}}^{*}[k^{\prime},l^{\prime}]X_{\text{DD}}[n-k,m-l]] \tag{18}\] \[h_{\omega}^{*}[n-k^{\prime},m-l^{\prime}]+\sum_{n,m}\mathbb{E}[X_{\text{DD}}[n-k,m-l]Z_{\text{DD}}^{*}[n,m]].\] Fig. 4: The estimation performance versus SNR. Since the entries of \(\mathbf{X}_{\text{DD}}\) are independent QPSK symbols with unit power, the expectation \(\mathbb{E}[X_{\text{DD}}^{*}[k^{\prime},l^{\prime}]X_{\text{DD}}[n-k,m-l]]\) equals \(0\) unless \(k^{\prime}=n-k\) and \(l^{\prime}=m-l\). Meanwhile, the term \(\mathbb{E}[X_{\text{DD}}[n-k,m-l]Z_{\text{DD}}^{*}[n,m]]=0\) since the information symbols are independent from the noise samples. Thus, (18) can be simplified as \[\mathbb{E}[V[k,l]]=MN\cdot h_{\omega}^{*}[k,l]. \tag{19}\] The variance of the entry in matrix \(\mathbf{V}\) is \[\text{var}[V[k,l]]=\mathbb{E}[V[k,l]^{2}]-\mathbb{E}[V[k,l]]^{2}, \tag{20}\] where \(\mathbb{E}[V[k,l]^{2}]\) is given in (21). As before, only the terms with \(k^{\prime}=n^{\prime}-k\), \(l^{\prime}=m^{\prime}-l\), \(k^{\prime\prime}=n^{\prime\prime}-k\), and \(l^{\prime\prime}=m^{\prime\prime}-l\) are non-zero. Thus, the second and the third terms on the right-hand side of (21) can be discarded while the first and last terms are \((MN\cdot h_{\omega}^{*}[k,l])^{2}\) and \(MN\cdot\sigma^{2}\), respectively. Consequently, we have \[\mathbb{E}[V[k,l]^{2}]=(MN\cdot h_{\omega}^{*}[k,l])^{2}+MN\cdot\sigma^{2}, \tag{22}\] which gives the variance of \(V[k,l]\), i.e., \(\text{var}[V[k,l]]=MN\cdot\sigma^{2}\). Now, let us consider the variance of \(\frac{V[k,l]}{MN}\). Taking the limit of \(\text{var}[\frac{1}{MN}V[k,l]]\) gives \[\lim_{M,N\rightarrow\infty}\text{var}\left[\frac{V[k,l]}{MN}\right]=\lim_{M,N\rightarrow\infty}\left(\frac{\sigma^{2}}{MN}\right)=0, \tag{23}\] indicating that when \(M\) and \(N\) are sufficiently large, the variance of \(\frac{1}{MN}V[k,l]\) vanishes. This motivates us to use \(\frac{1}{MN}V[k,l]\) to approximate its expectation, i.e., \(h_{\omega}^{*}[k,l]\), with an approximation error of \(\mathcal{O}\left[\frac{1}{MN}\right]\).
Therefore, the ratio between the magnitudes of the correlation coefficients with the same delay index and Doppler indices \(k^{\prime}_{\nu_{1}}\) and \(k^{\prime}_{\nu_{2}}\) can be expressed as \[\frac{|V[k^{\prime}_{\nu_{1}},l_{i}]|}{|V[k^{\prime}_{\nu_{2}},l_{i}]|}=\frac{|h_{\omega}[k^{\prime}_{\nu_{1}},l_{i}]|}{|h_{\omega}[k^{\prime}_{\nu_{2}},l_{i}]|}=\left|\frac{\sin(-\kappa_{\nu_{i}}\pi)}{\sin\left(\frac{-\kappa_{\nu_{i}}\pi}{N}\right)}\right|\cdot\left|\frac{\sin((k^{\prime}_{\nu_{2}}-k^{\prime}_{\nu_{1}}-\kappa_{\nu_{i}})\pi)}{\sin\left(\frac{(k^{\prime}_{\nu_{2}}-k^{\prime}_{\nu_{1}}-\kappa_{\nu_{i}})\pi}{N}\right)}\right|^{-1}=\left|\frac{\sin\left(\frac{(k^{\prime}_{\nu_{2}}-k^{\prime}_{\nu_{1}}-\kappa_{\nu_{i}})\pi}{N}\right)}{\sin\left(\frac{-\kappa_{\nu_{i}}\pi}{N}\right)}\right|\approx\frac{|k^{\prime}_{\nu_{2}}-k^{\prime}_{\nu_{1}}-\kappa_{\nu_{i}}|}{|-\kappa_{\nu_{i}}|}. \tag{24}\] Here, the third equality uses \(|\sin((k^{\prime}_{\nu_{2}}-k^{\prime}_{\nu_{1}}-\kappa_{\nu_{i}})\pi)|=|\sin(\kappa_{\nu_{i}}\pi)|\) for the integer difference \(k^{\prime}_{\nu_{2}}-k^{\prime}_{\nu_{1}}\), and the final step uses the small-angle approximation \(\sin x\approx x\), which holds since the remaining arguments are of order \(1/N\) for sufficiently large \(N\). Finally, we arrive at (13).
2305.04702
Inverse mean curvature flow and Ricci-pinched three-manifolds
Let $(M,g)$ be a complete, connected, non-compact Riemannian three-manifold with non-negative Ricci curvature satisfying $Ric\geq\varepsilon\,\operatorname{tr}(Ric)\,g$ for some $\varepsilon>0$. In this note, we give a new proof based on inverse mean curvature flow that $(M,g)$ is either flat or has non-Euclidean volume growth. In conjunction with results of J. Lott and of M.-C. Lee and P. Topping, this gives an alternative proof of a conjecture of R. Hamilton recently proven by A. Deruelle, F. Schulze, and M. Simon using Ricci flow.
Gerhard Huisken, Thomas Koerber
2023-05-08T13:35:28Z
http://arxiv.org/abs/2305.04702v2
# Inverse Mean Curvature Flow and Ricci-Pinched Three-Manifolds ###### Abstract. Let \((M,g)\) be a complete, connected, non-compact Riemannian three-manifold with non-negative Ricci curvature satisfying \(Ric\geq\varepsilon\,\operatorname{tr}(Ric)\,g\) for some \(\varepsilon>0\). In this note, we give a new proof based on inverse mean curvature flow that \((M,g)\) is either flat or has non-Euclidean volume growth. In conjunction with the work of J. Lott [10] and of M.-C. Lee and P. Topping [9], this gives an alternative proof of a conjecture of R. Hamilton recently proven by A. Deruelle, F. Schulze, and M. Simon [5] using Ricci flow. ## 1. Introduction Let \((M,g)\) be a complete, connected, non-compact Riemannian three-manifold. Recall that \((M,g)\) is called Ricci-pinched if there is \(\varepsilon>0\) such that \[Ric\geq\varepsilon R\,g. \tag{1}\] Here, \(Ric\) and \(R\) denote the Ricci curvature and the scalar curvature of \((M,g)\), respectively. The following theorem has been conjectured by R. Hamilton [3, Conjecture 3.39] and by J. Lott [10, Conjecture 1.1]. It has been proven by A. Deruelle, F. Schulze, and M. Simon [5, Theorem 1.3] under the additional assumption that \((M,g)\) has bounded curvature. M.-C. Lee and P. Topping [9, Theorem 1.2] have subsequently shown that this additional assumption can be dispensed with. **Theorem 1** ([9, Theorem 1.2]).: _Let \((M,g)\) be a complete, connected, non-compact Riemannian three-manifold that is Ricci-pinched. Then \((M,g)\) is flat._ **Remark 2**.: _Previous results in the direction of Theorem 1 have been obtained by B.-L. Chen and X.-P. Zhu [2, Main Theorem II] and by J. Lott [10, Theorem 1.4]._ **Remark 3**.: _R. Hamilton has shown that every complete, connected, compact Riemannian three-manifold that is Ricci-pinched is either flat or smoothly isotopic to a spherical space form; see [6, Main Theorem] and the discussion on [5, p. 4]._ To describe the contributions in [5, 9, 10], let \(p\in M\) and \[\operatorname{AVR}=\frac{3}{4\,\pi}\,\lim_{r\to\infty}\frac{|B_{r}(p)|}{r^{3}}\] be the asymptotic volume ratio of \((M,g)\). Note that, by the Bishop-Gromov theorem, \(\operatorname{AVR}\) is well-defined, independent of \(p\), and satisfies \(\operatorname{AVR}\in[0,1]\). J. Lott [10] has shown that, if \((M,g)\) is Ricci-pinched and has bounded curvature, then there exists a smooth, Ricci-pinched solution of Ricci flow coming out of \((M,g)\). By performing a detailed asymptotic analysis of this flow, J. Lott has proven that, if \((M,g)\) is not flat, then \((M,g)\) has positive asymptotic volume ratio; see [10, Corollary 1.7]. Subsequently, M.-C. Lee and P. Topping [9] have shown that the assumption that \((M,g)\) has bounded curvature can be dispensed with. By contrast, if \((M,g)\) has positive asymptotic volume ratio, A. Deruelle, F. Schulze, and M. Simon [5] have observed that the asymptotic cone of \((M,g)\) is a three-dimensional Alexandrov space with non-negative curvature. Moreover, the previous work of A. Deruelle [4] and of F. Schulze and M. Simon [15], respectively, implies the existence of an expanding soliton solution with non-negative curvature coming out of the asymptotic cone of \((M,g)\). Using their stability result [5, Theorem 1.2] to compare this solution with the Ricci-pinched solution constructed by J. Lott [10], they have concluded that \((M,g)\) is in fact flat.
The goal of this paper is to provide a new proof based on inverse mean curvature flow of Theorem 1 under the additional assumption that \((M,g)\) has positive asymptotic volume ratio. **Theorem 4**.: _Let \((M,g)\) be a complete, connected, non-compact Riemannian three-manifold that is Ricci-pinched. If \((M,g)\) has positive asymptotic volume ratio, then \((M,g)\) is isometric to flat \(\mathbb{R}^{3}\)._ **Remark 5**.: _In conjunction with the results of J. Lott [10] and of M.-C. Lee and P. Topping [9], Theorem 4 provides a new proof of Theorem 1._ **Remark 6**.: _Our technique extends to the case where \((M,g)\) has superquadratic volume growth; see Theorem 11._ We now outline the proof of Theorem 4. Suppose, for a contradiction, that \((M,g)\) is not flat. Since \((M,g)\) has non-negative Ricci curvature and is Ricci-pinched, the scalar curvature of \((M,g)\) must be strictly positive at one point. It follows that there is a closed outward-minimizing hypersurface \(\Sigma\subset M\) such that \[\int_{\Sigma}H^{2}\,\mathrm{d}\mu<16\,\pi. \tag{2}\] Here, \(\mathrm{d}\mu\) and \(H\) denote the area element and the mean curvature of \(\Sigma\), respectively. Recall that a nested family \(\{E_{t}\}_{t=0}^{\infty}\) with strictly mean-convex boundary \(\partial E_{t}\subset M\) flows by inverse mean curvature flow if \[\frac{dx}{dt}=\frac{1}{H}\,\nu.\] Here, \(x\) and \(\nu\) are the position and the outward normal of \(\partial E_{t}\), respectively. Using that \((M,g)\) has positive asymptotic volume ratio, the results of L. Mari, M. Rigoli, and A. Setti [11], which, in turn, build on previous work of R. Moser [13], show that there exists a weak solution \(\{E_{t}\}_{t=0}^{\infty}\) of inverse mean curvature flow with \(\partial^{*}E_{0}=\Sigma\) in the sense of the work of T. Ilmanen and the first-named author [8]. Here, \(\partial^{*}\) denotes the reduced boundary. Using (2) and that \((M,g)\) is Ricci-pinched, we show that \[\lim_{t\to\infty}\int_{\partial^{*}E_{t}}H^{2}\,\mathrm{d}\mu=0.\] By contrast, the work of V. Agostiniani, M. Fogagnolo, and L. Mazzieri [1] implies that, for every \(t\geq 0\), \[\int_{\partial^{*}E_{t}}H^{2}\,\mathrm{d}\mu\geq 16\,\pi\,\operatorname{AVR},\] a contradiction. ### Acknowledgments The second-named author acknowledges the support of the Lise-Meitner-Project M3184 of the Austrian Science Fund. This work originated during the authors' visit to the Hebrew University during the first-named author's Mark Gordon Distinguished Visiting Professorship. The authors thank the Hebrew University for its hospitality. The authors thank Or Hershkovits and Miles Simon for helpful discussions. ## 2. Proof of Theorem 4 In this section, we assume that \((M,g)\) is a complete, connected, non-compact Riemannian three-manifold with non-negative Ricci curvature satisfying (1) for some \(\varepsilon>0\). The goal of this section is to prove Theorem 4. We recall the following result of S.-H. Zhu [16], which extends previous work of R. Schoen and S.-T. Yau [14, Theorem 3]. **Proposition 7**.: _If \((M,g)\) is not flat, then \((M,g)\) is diffeomorphic to \(\mathbb{R}^{3}\)._ Proof.: Using (1), we see that there is \(p\in M\) with \(Ric(p)>0\). The assertion follows from [16]. Let \(\Sigma\subset M\) be a closed surface with area measure \(\mathrm{d}\mu\), designated normal \(\nu\), and mean curvature \(H\) with respect to \(\nu\). In Lemma 8 below, \(\mathring{h}\) denotes the traceless second fundamental form of \(\Sigma\).
**Lemma 8**.: _If \(\mathrm{genus}(\Sigma)\geq 1\), there holds_ \[2\,\int_{\Sigma}Ric(\nu,\nu)+|\mathring{h}|^{2}\,\mathrm{d}\mu\geq\int_{\Sigma}H^{2}\,\mathrm{d}\mu \tag{3}\] _and, if \(\mathrm{genus}(\Sigma)=0\), there holds_ \[2\,\int_{\Sigma}Ric(\nu,\nu)\,\mathrm{d}\mu\geq\varepsilon\,\bigg{(}16\,\pi-\int_{\Sigma}H^{2}\,\mathrm{d}\mu\bigg{)}. \tag{4}\] Proof.: Integrating the contracted Gauss equation and using the Gauss-Bonnet theorem, we have \[\int_{\Sigma}H^{2}\,\mathrm{d}\mu=16\,\pi\,(1-\mathrm{genus}(\Sigma))+2\,\int_{\Sigma}|\mathring{h}|^{2}\,\mathrm{d}\mu+\int_{\Sigma}4\,Ric(\nu,\nu)-2\,R\,\mathrm{d}\mu.\] Using that \(Ric\geq 0\), we have \(R\geq Ric(\nu,\nu)\). This implies (3). In the case where \(\mathrm{genus}(\Sigma)=0\), we have, using that \(Ric\geq 0\), \[2\,\int_{\Sigma}R\,\mathrm{d}\mu\geq 16\,\pi-\int_{\Sigma}H^{2}\,\mathrm{d}\mu.\] In conjunction with (1), we obtain (4). **Lemma 9**.: _Suppose that \((M,g)\) is not flat. There exists a sequence \(\{\Sigma_{i}\}_{i=1}^{\infty}\) of closed surfaces \(\Sigma_{i}\subset M\) with_ \[\lim_{i\to\infty}\int_{\Sigma_{i}}H^{2}\,\mathrm{d}\mu=0.\] Proof.: In the case where \(\mathrm{AVR}=0\), the assertion follows from [1, Theorem 1.1]. In the case where \(\mathrm{AVR}>0\), using that \((M,g)\) is not flat and that \(Ric\geq 0\), we see that there is \(p\in M\) with \(R(p)>0\). Consequently, \[\int_{\partial B_{r}(p)}H^{2}\,\mathrm{d}\mu<16\,\pi \tag{5}\] provided that \(r>0\) is sufficiently small; see, e.g., [12, Proposition 3.1]. Let \(\Sigma^{\prime}\subset M\) be the minimizing hull of \(B_{r}(p)\); see [8, p. 371]. Using [8, (1.15)], we see that \[\int_{\Sigma^{\prime}}H^{2}\,\mathrm{d}\mu\leq\int_{\partial B_{r}(p)}H^{2}\,\mathrm{d}\mu.\] By [11, Remark 1.6 and Theorem 1.7], there exists a proper weak solution \(\{E_{t}\}_{t=0}^{\infty}\) of inverse mean curvature flow in the sense of [8, p. 368] such that \(\Sigma^{\prime}=\partial^{*}E_{0}\). Let \(\Sigma_{t}=\partial^{*}E_{t}\). By Proposition 7 and [8, Lemma 4.2], \(\Sigma_{t}\) is connected for every \(t\geq 0\). According to the results in [8, SS5] and [7, Korollar 5.6], \(\Sigma_{t}\) is of class \(W^{2,2}\cap C^{1,1}\) and there holds \[\int_{\Sigma_{0}}H^{2}\,\mathrm{d}\mu\geq\int_{\Sigma_{t}}H^{2}\,\mathrm{d}\mu+2\,\int_{0}^{t}\int_{\Sigma_{s}}Ric(\nu,\nu)+|\mathring{h}|^{2}\,\mathrm{d}\mu\,\mathrm{d}s \tag{6}\] for every \(t\geq 0\). Clearly, the function \[[0,\infty)\to\mathbb{R},\qquad t\mapsto\int_{\Sigma_{t}}H^{2}\,\mathrm{d}\mu \tag{7}\] is non-increasing. We claim that \[\lim_{t\to\infty}\int_{\Sigma_{t}}H^{2}\,\mathrm{d}\mu=0.\] Indeed, suppose, for a contradiction, that there is \(\delta>0\) such that \[\int_{\Sigma_{t}}H^{2}\,\mathrm{d}\mu\geq\delta\] for every \(t\geq 0\). Shrinking \(\delta>0\), if necessary, and using (5), we may assume that \[\int_{\Sigma_{t}}H^{2}\,\mathrm{d}\mu\leq 16\,\pi-\delta\] for every \(t\geq 0\). Using Lemma 8, we have \[2\,\int_{\Sigma_{s}}Ric(\nu,\nu)+|\mathring{h}|^{2}\,\mathrm{d}\mu\geq\min\{1,\varepsilon\}\,\delta\] for every \(s\geq 0\). In conjunction with (6), we see that \[\int_{\Sigma_{t}}H^{2}\,\mathrm{d}\mu<0\] for every \(t\geq 16\,\pi\,\min\{1,\varepsilon\}^{-1}\,\delta^{-1}\). The assertion follows from this contradiction. Proof of Theorem 4.: Let \((M,g)\) be a connected, complete, non-compact Riemannian three-manifold that is Ricci-pinched and has positive asymptotic volume ratio \(\mathrm{AVR}\). Suppose, for a contradiction, that \((M,g)\) is not isometric to flat \(\mathbb{R}^{3}\).
By Lemma 9, there exists a closed surface \(\Sigma\subset M\) with \[\int_{\Sigma}H^{2}\,\mathrm{d}\mu<16\,\pi\,\operatorname{AVR}.\] As this is incompatible with [1, Theorem 1.1], the assertion follows. ## Appendix A Superquadratic volume growth In this section, we assume that \((M,g)\) is a complete, connected, non-compact Riemannian three-manifold with non-negative Ricci curvature satisfying (1) for some \(\varepsilon>0\). Moreover, we assume that there is \(p\in M\) and \(\alpha>0\) with \[0<\lim_{r\to\infty}\frac{|B_{r}(p)|}{r^{1+\alpha}}<\infty. \tag{8}\] Note that, by the Bishop-Gromov theorem, \(\alpha\leq 2\). The goal of this section is to give an alternative proof, based on inverse mean curvature flow, of the fact that \((M,g)\) is either flat or has subquadratic volume growth, that is, \(\alpha\leq 1\). **Lemma 10**.: _Suppose that \((M,g)\) is not flat. Then there exists a proper weak solution \(\{E_{t}\}_{t=0}^{\infty}\) of inverse mean curvature flow such that, for every \(t\geq 0\),_ \[\int_{\partial^{*}E_{t}}H^{2}\,\mathrm{d}\mu<16\,\pi\,e^{-t}.\] Proof.: By Lemma 9, there is a closed, outward-minimizing surface \(\Sigma\subset M\) satisfying \[\varepsilon\left(16\,\pi-\int_{\Sigma}H^{2}\,\mathrm{d}\mu\right)\geq\int_{\Sigma}H^{2}\,\mathrm{d}\mu.\] By [11, Remark 1.6 and Theorem 1.7], there is a proper weak solution \(\{E_{t}\}_{t=0}^{\infty}\) of inverse mean curvature flow with \(\partial^{*}E_{0}=\Sigma\). Let \(\Sigma_{t}=\partial^{*}E_{t}\). As in the proof of Lemma 9, we see that the function (7) is non-increasing so that, for every \(t\geq 0\), \[\varepsilon\left(16\,\pi-\int_{\Sigma_{t}}H^{2}\,\mathrm{d}\mu\right)\geq\int_{\Sigma_{t}}H^{2}\,\mathrm{d}\mu. \tag{9}\] Again, as in the proof of Lemma 9, using (9) and Lemma 8, we see that \[\int_{\Sigma}H^{2}\,\mathrm{d}\mu\geq\int_{\Sigma_{t}}H^{2}\,\mathrm{d}\mu+\int_{0}^{t}\int_{\Sigma_{s}}H^{2}\,\mathrm{d}\mu\,\mathrm{d}s.\] Using that \[\int_{\Sigma}H^{2}\,\mathrm{d}\mu<16\,\pi,\] the assertion follows. **Theorem 11**.: _Let \((M,g)\) be a complete, connected, non-compact Riemannian three-manifold that is Ricci-pinched and satisfies (8) for some \(\alpha>0\). If \((M,g)\) is not flat, then \(\alpha\leq 1\)._ Proof.: Let \(\{E_{t}\}_{t=0}^{\infty}\) be a proper weak solution of inverse mean curvature flow as in Lemma 10. Let \(\Sigma_{t}=\partial^{*}E_{t}\). By [8, Lemma 5.1], there holds \(H>0\) \(\mathrm{d}\mu\)-almost everywhere on \(\Sigma_{t}\) for almost every \(t\geq 0\). Using Hölder's inequality, we have, for almost every \(t\geq 0\), \[|\Sigma_{t}|^{2}\leq\int_{\Sigma_{t}}H^{-1}\,\mathrm{d}\mu\,\left(\int_{\Sigma_{t}}H^{2}\,\mathrm{d}\mu\right)^{1/2}\,|\Sigma_{t}|^{1/2}. \tag{10}\] Recall from [8, Exponential Growth Lemma 1.6] that \[|\Sigma_{t}|=|\Sigma_{0}|\,e^{t}. \tag{11}\] In conjunction with Lemma 10 and (10), we obtain that, for almost every \(t\geq 0\), \[\int_{\Sigma_{t}}H^{-1}\,\mathrm{d}\mu\geq\sqrt{\frac{|\Sigma_{0}|^{3}}{16\,\pi}}\,e^{2\,t}. \tag{12}\] Arguing as in [8, SS5], we see that \[|E_{t}|\geq|E_{0}|+\int_{0}^{t}\,\int_{\Sigma_{s}}H^{-1}\,\mathrm{d}\mu\,\mathrm{d}s\] for every \(t\geq 0\). In conjunction with (12) and (11), it follows that \[\liminf_{t\to\infty}|\Sigma_{t}|^{-2}\,|E_{t}|>0. \tag{13}\]
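For the reader's convenience, the chain yielding (12) can be spelled out explicitly; the following worked computation merely combines (10), (11), and Lemma 10, and assumes nothing new.

```latex
% The chain behind (12): rearrange (10), then insert Lemma 10 and (11).
\begin{align*}
\int_{\Sigma_t} H^{-1}\,\mathrm{d}\mu
  \;\overset{(10)}{\geq}\;
  \frac{|\Sigma_t|^{3/2}}{\big(\int_{\Sigma_t} H^{2}\,\mathrm{d}\mu\big)^{1/2}}
  \;\overset{\text{Lem.~10}}{>}\;
  \frac{|\Sigma_t|^{3/2}}{(16\,\pi\,e^{-t})^{1/2}}
  \;\overset{(11)}{=}\;
  \frac{(|\Sigma_0|\,e^{t})^{3/2}}{(16\,\pi)^{1/2}\,e^{-t/2}}
  \;=\;
  \sqrt{\frac{|\Sigma_0|^{3}}{16\,\pi}}\;e^{2t}.
\end{align*}
```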
Using the results on fake distances in the context of \(p\)-harmonic functions in [11, Theorem 1.4 and Theorem 5.4] and on inverse mean curvature flow in [8, Minimizing Hull Property 1.4 and Theorem 2.2 ii)], we see that there is a function \(\rho\in C^{0,1}(M)\), called fake distance in [11], and a number \(\delta>0\) with the following properties: * There holds, for every \(t\geq 0\), \[\{x\in M:\rho(x)<\delta\,e^{t/2}\}\subset E_{t}\subset\{x\in M:\rho(x)\leq\delta^{-1}\,e^{t/2}\}.\] * There holds, for every \(r\geq 1\), \[B_{r}(p)\subset\{x\in M:\rho(x)\leq r\}\subset B_{\delta^{-1}\,r^{2/\alpha}}(p).\] * \(\partial\{x\in M:\rho(x)\leq r\}\) is outward-minimizing for every \(r>0\). * There holds, for every \(r\geq 0\), \[|\partial\{x\in M:\rho(x)\leq r\}|=4\,\pi\,r^{2}.\] Clearly, \[|E_{t}|\leq|\{x\in M:\rho(x)\leq\delta^{-1}\,e^{t/2}\}|\leq|B_{\delta^{-1-2/\alpha}\,e^{t/\alpha}}(p)|.\] Moreover, \[|\Sigma_{t}|\geq|\partial\{x\in M:\rho(x)\leq\delta\,e^{t/2}\}|=4\,\pi\,\delta^{2}\,e^{t}.\] In conjunction with (13), we conclude that \[\liminf_{t\to\infty}\frac{|B_{\delta^{-1-2/\alpha}\,e^{t/\alpha}}(p)|}{16\,\pi^{2}\,\delta^{4}\,e^{2\,t}}>0.\] Consequently, \[\liminf_{r\to\infty}\frac{|B_{r}(p)|}{r^{2\,\alpha}}>0. \tag{14}\] This is incompatible with (8) unless \(\alpha\leq 1\). The assertion follows. **Remark 12**.: _Let \(M=\mathbb{R}\times S^{1}\times S^{1}\) and \(g\) be the flat metric. Note that \((M,g)\) is Ricci-pinched and satisfies (8) with \(\alpha=0\). Moreover, \((M,g)\) is foliated by tori with area equal to \(4\,\pi^{2}\). In light of (11), there can be no proper weak solution of inverse mean curvature flow in \((M,g)\), regardless of the initial data. This suggests that inverse mean curvature flow is not an adequate technique to study the case where \(\alpha\leq 1\)._
2307.05265
Computing minimal distinguishing Hennessy-Milner formulas is NP-hard, but variants are tractable
We study the problem of computing minimal distinguishing formulas for non-bisimilar states in finite LTSs. We show that this is NP-hard if the size of the formula must be minimal. Similarly, the existence of a short distinguishing trace is NP-complete. However, we can provide polynomial algorithms, if minimality is formulated as the minimal number of nested modalities, and it can even be extended by recursively requiring a minimal number of nested negations. A prototype implementation shows that the generated formulas are much smaller than those generated by the method introduced by Cleaveland.
Jan Martens, Jan Friso Groote
2023-07-11T13:58:20Z
http://arxiv.org/abs/2307.05265v1
# Computing minimal distinguishing Hennessy-Milner formulas is NP-hard, but variants are tractable ###### Abstract We study the problem of computing minimal distinguishing formulas for non-bisimilar states in finite LTSs. We show that this is NP-hard if the size of the formula must be minimal. Similarly, the existence of a short distinguishing trace is NP-complete. However, we can provide polynomial algorithms, if minimality is formulated as the minimal number of nested modalities, and it can even be extended by recursively requiring a minimal number of nested negations. A prototype implementation shows that the generated formulas are much smaller than those generated by the method introduced by Cleaveland. Distinguishing behaviour, Hennessy-Milner logic, NP-hardness
As distinguishing formulas are very useful, we are wondering whether a variant of minimality of distinguishing formulas exists that leads to concise formulas and that can effectively be calculated. We answer this positively by providing efficient algorithms to construct distinguishing formulas that are minimal with respect to the _observation-depth_, i.e., the number of nested modalities. Within this bound we can even guarantee in polynomial time that the _negation-depth_, i.e., the number of nested negations, or equivalently the number of nested alternations of box and diamond modalities, is minimal. These algorithms strictly improve upon the method by Cleaveland [6]. A prototype implementation of our algorithm shows that our formulas are indeed much smaller and more pleasant to use. In order to obtain these results we employ the notions of \(k\)-bisimilarity [19] and \(m\)-nested similarity [10]. Distinguishing formulas have been the topic of studies in many papers, more than we can mention. A recent impressive work introduces a method to find minimal distinguishing formulas for various classes of behavioural equivalences [3]. The algorithm translates the problem to determining the winning region in a reachability game. These games can grow super-exponentially in size. In the context of distinguishing deterministic finite automata, an algorithm is given that from a splitting tree finds pairwise minimal distinguishing words [23]. In a more generalized setting [25, 15] a co-algebraic method is given to generate distinguishing modal formulas. The notion of distinguishing formulas is also used in the setting with abstractions for branching bisimilarity [16, 9]. This document is structured as follows. In Section 2 the required preliminaries on LTSs and HML formulas are given. In Section 3, we show that decision problems related to finding minimal distinguishing formulas are NP-hard. Next, in Section 4 we give a procedure that generates a minimal observation- and negation-depth formula. Additionally, in this section, we give a partition refinement algorithm inspired by [23, 20] which can be used to determine minimal observation-depth distinguishing formulas. In the full version, an appendix is included containing proofs omitted here due to space constraints. ## 2 Preliminaries For the numbers \(i,j\in\mathbb{N}\), we define \([i,j]=\{c\in\mathbb{N}\mid i\leqslant c\leqslant j\}\), the closed interval from \(i\) to \(j\). ### LTSs, \(k\)-bisimilarity & \(m\)-nested similarity We use Labelled Transition Systems (LTSs) as our behavioural models. Strong bisimilarity is a widely used behavioural equivalence [19, 22], which we define in the classical inductive way. A labelled transition system (LTS) \(L=(S,Act,\rightarrow)\) is a three-tuple containing: * a finite set of states \(S\), * a finite set of action labels \(Act\), and * a transition relation \(\rightarrow\subseteq S\times Act\times S\). We write \(s\xrightarrow{a}s^{\prime}\) iff \((s,a,s^{\prime})\in\rightarrow\). We call \(s^{\prime}\) an \(a\)-derivative of \(s\) iff \(s\xrightarrow{a}s^{\prime}\). **Definition** (\(k\)-bisimilar [19]). Let \(L=(S,Act,\rightarrow)\) be an LTS.
For every \(k\in\mathbb{N}\), \(k\)-bisimilarity, written as \(\leftrightarrow_{k}\), is defined inductively: \[\begin{split}\leftrightarrow_{0}&=\{(s,t)\mid s,t\in S\}\text{, and}\\ \leftrightarrow_{k+1}&=\{(s,t)\mid\forall s\xrightarrow{a}s^{\prime}.\exists t\xrightarrow{a}t^{\prime}\text{ such that }s^{\prime}\leftrightarrow_{k}t^{\prime},\text{ and}\\ &\qquad\qquad\forall t\xrightarrow{a}t^{\prime}.\exists s\xrightarrow{a}s^{\prime}\text{ such that }t^{\prime}\leftrightarrow_{k}s^{\prime}\}.\end{split}\] Bisimilarity, denoted as \(\leftrightarrow\), is defined as the intersection of all \(k\)-bisimilarity relations for all \(k\in\mathbb{N}\): \(\leftrightarrow=\bigcap_{k\in\mathbb{N}}\leftrightarrow_{k}\). As our transition systems are finite, and therefore finitely branching, \(\leftrightarrow\) coincides with the more general co-inductive definition of bisimulation [22]. The intuition behind \(\leftrightarrow_{i}\) is that within \(i\) (atomic) observations there is no distinguishing behaviour. We sketch a rather simple example that showcases this behaviour. For every \(n\in\mathbb{N}\), we define the LTS \(\mathcal{A}_{n}=(S,\{a\},\rightarrow)\) with a singleton action set, and the set of states \(S=\{x_{0},\ldots,x_{n}\}\). The transition relation contains a single path \(x_{i}\xrightarrow{a}x_{i-1}\) for all \(1\leqslant i\leqslant n\). In Figure 1(a) the LTS \(\mathcal{A}_{3}\) is shown. A state \(x_{i}\) can perform \(i\) \(a\)-transitions ending in a deadlock state. All states in \(\mathcal{A}_{3}\) are behaviourally inequivalent. Intuitively, we see that distinguishing the states \(x_{3}\) and \(x_{2}\) takes at least \(3\) observations. In general, it holds that for \(n\in\mathbb{N}\), the states \(x_{n}\) and \(x_{n-1}\) of the LTS \(\mathcal{A}_{n}\) are \((n-1)\)-bisimilar but not \(n\)-bisimilar, i.e. \(x_{n}\leftrightarrow_{n-1}x_{n-1}\) but \(x_{n}\not\leftrightarrow_{n}x_{n-1}\). In order to distinguish these states we require \(n\) (atomic) observations. This intuition is formalized in Theorem 3.1. We state these well-known facts for an LTS \(L=(S,Act,\rightarrow)\), and \(k\in\mathbb{N}\): 1. The relation \(\leftrightarrow_{k}\) is an equivalence relation. 2. If two states are \(k\)-bisimilar, they are \(l\)-bisimilar for every \(l\leqslant k\). 3. If \(\leftrightarrow_{k}=\leftrightarrow_{k+1}\) then \(\leftrightarrow_{k}=\leftrightarrow_{k+u}=\leftrightarrow\), for all \(u\in\mathbb{N}\). For technical reasons we also define \(m\)-nested similarity [10], which uses the concept of similarity. **Definition** (Similarity). Given an LTS \(L=(S,Act,\rightarrow)\), we define similarity \(\preceq\subseteq S\times S\) as the largest relation such that if \(s\preceq t\) then for all transitions \(s\xrightarrow{a}s^{\prime}\) there is a \(t\xrightarrow{a}t^{\prime}\) such that \(s^{\prime}\preceq t^{\prime}\). We say a state \(s\) is _simulated_ by \(t\) iff \(s\preceq t\). **Definition** (cf. Def. 8.5.2 of [10]). Let \(L=(S,Act,\rightarrow)\) be an LTS, and \(m\in\mathbb{N}\) a number. We inductively define \(m\)-nested similarity inclusion as follows: \(\preceq^{0}=\preceq\), and for every \(i\in\mathbb{N}\), the relation \(\preceq^{i+1}\subseteq S\times S\) is the largest relation such that for all \((s,t)\in\preceq^{i+1}\) it holds that: 1. \(s\preceq^{i}t\) and \(t\preceq^{i}s\), and 2.
if \(s\xrightarrow{a}s^{\prime}\) then there is a \(t\xrightarrow{a}t^{\prime}\) such that \(s^{\prime}\preceq^{i+1}t^{\prime}\). We write \(\simeq^{m}\) for the symmetric closure of \(m\)-nested similarity inclusion, i.e. \(\simeq^{m}=\preceq^{m}\cap\left(\preceq^{m}\right)^{-1}\), which we call \(m\)_-nested similarity_. Note that we deviate slightly from the definition in [10], where \(1\)-nested simulation equivalence coincides with simulation equivalence. For every \(n\in\mathbb{N}\), we define the LTS \(\mathcal{B}_{n}=(S,\{a\},\rightarrow)\) with a singleton action set, the set of states \(S=\{x_{0},\ldots,x_{n},y_{0},\ldots,y_{n}\}\), and the transition relation containing the transition \(y_{0}\xrightarrow{a}y_{0}\) and, for every \(i\in[1,n]\), the transitions: * \(y_{i}\xrightarrow{a}y_{i-1}\) and \(x_{i}\xrightarrow{a}x_{i-1}\), and * \(y_{i}\xrightarrow{a}x_{i-1}\) if \(i\) is even, or \(x_{i}\xrightarrow{a}y_{i-1}\) if \(i\) is odd.
Figure 1: Two example LTSs.
In Figure 1(b) the LTS \(\mathcal{B}_{3}\) is shown. We observe that \(x_{0}\) is simulated by \(y_{0}\), since \(x_{0}\) has no outgoing transitions. So it is the case that \(x_{0}\preceq^{0}y_{0}\), but \(y_{0}\not\preceq^{0}x_{0}\), and hence \(x_{0}\not\simeq^{0}y_{0}\). In general, for all \(n\geq 1\) it holds in the LTS \(\mathcal{B}_{n}\) that \(x_{n}\simeq^{n-1}y_{n}\), but \(x_{n}\not\simeq^{n}y_{n}\). ### Hennessy-Milner logic (HML) We use Hennessy-Milner Logic (HML) [11] to distinguish states. For some finite set of actions \(Act\), the syntax of HML is defined as \[\phi::=tt\mid\langle a\rangle\phi\mid\neg\phi\mid\phi\wedge\phi,\] where \(a\in Act\). The logic consists of three necessary elements: * _Observations_ \(\langle a\rangle\phi\): the state witnesses an observation \(a\) to a state that satisfies \(\phi\). * _Negations_ \(\neg\phi\): the state does not satisfy \(\phi\). * _Conjunctions_ \(\phi_{1}\wedge\phi_{2}\): the state satisfies both \(\phi_{1}\) and \(\phi_{2}\). The set \(\mathcal{F}\) is defined to contain all HML formulas. It is common to use the abbreviations \(\mathit{ff}=\neg tt\), \([a]\phi=\neg\langle a\rangle\neg\phi\) and \(\phi_{1}\vee\phi_{2}=\neg(\neg\phi_{1}\wedge\neg\phi_{2})\). Given an LTS \(L=(S,Act,\rightarrow)\), we define the semantics of this logic \(\llbracket-\rrbracket_{L}:\mathcal{F}\to 2^{S}\) inductively as follows: \[\llbracket tt\rrbracket_{L}=S,\] \[\llbracket\langle a\rangle\phi\rrbracket_{L}=\{s\in S\mid\exists s^{\prime}\in S\text{ s.t. }s\xrightarrow{a}s^{\prime}\text{ and }s^{\prime}\in\llbracket\phi\rrbracket_{L}\},\] \[\llbracket\neg\phi\rrbracket_{L}=S\setminus\llbracket\phi\rrbracket_{L},\text{ and}\] \[\llbracket\phi_{1}\wedge\phi_{2}\rrbracket_{L}=\llbracket\phi_{1}\rrbracket_{L}\cap\llbracket\phi_{2}\rrbracket_{L},\] for \(a\in Act\) and \(\phi,\phi_{1},\phi_{2}\in\mathcal{F}\). This function yields for a formula \(\phi\in\mathcal{F}\) the subset of \(S\) where \(\phi\) is true. Often we omit the reference to the LTS \(L\) when it is clear from the context. We use HML formulas to describe distinguishing behaviour. Let \(L=(S,Act,\rightarrow)\) be an LTS, \(s\in S\) and \(t\in S\) states, and \(\phi\in\mathcal{F}\) an HML formula. We write \(s\sim_{\phi}t\) iff \(s\in\llbracket\phi\rrbracket\Leftrightarrow t\in\llbracket\phi\rrbracket\), and conversely \(s\not\sim_{\phi}t\) iff \(s\in\llbracket\phi\rrbracket\Leftrightarrow t\not\in\llbracket\phi\rrbracket\).
Additionally, we write \(s\leqslant_{\phi}t\) if \(s\in\llbracket\phi\rrbracket\Rightarrow t\in\llbracket\phi\rrbracket\). Given a set of HML formulas \(\mathcal{G}\) we write \(s\sim_{\mathcal{G}}t\) iff for every \(\psi\in\mathcal{G}\) it holds that \(s\sim_{\psi}t\). Similarly, we write \(s\leqslant_{\mathcal{G}}t\) iff \(s\leqslant_{\psi}t\) for all \(\psi\in\mathcal{G}\). Given an LTS \(L=(S,Act,\rightarrow)\) and two states \(s,t\in S\), a formula \(\phi\in\mathcal{F}\) distinguishes \(s\) and \(t\) iff \(s\not\sim_{\phi}t\). #### Metrics To express the size of a formula we use three different metrics: * _size_: the total number of observations, * _observation-depth_: the largest number of nested observations in the formula, and * _negation-depth_: the largest number of nested negations in the formula. For these metrics we inductively define the functions \(|\cdot|:\mathcal{F}\rightarrow\mathbb{N}\) for size, \(d_{\circ}:\mathcal{F}\rightarrow\mathbb{N}\) for observation-depth and \(d_{\neg}:\mathcal{F}\rightarrow\mathbb{N}\) for negation-depth, as follows: \[\begin{array}{llllll}|tt|&=0,&d_{\circ}(tt)&=0,&d_{\neg}(tt)&=0,\\ |\langle a\rangle\phi|&=|\phi|+1,&d_{\circ}(\langle a\rangle\phi)&=d_{\circ}(\phi)+1,&d_{\neg}(\langle a\rangle\phi)&=d_{\neg}(\phi),\\ |\neg\phi|&=|\phi|,&d_{\circ}(\neg\phi)&=d_{\circ}(\phi),&d_{\neg}(\neg\phi)&=d_{\neg}(\phi)+1,\\ |\phi_{1}\wedge\phi_{2}|&=|\phi_{1}|+|\phi_{2}|,&d_{\circ}(\phi_{1}\wedge\phi_{2})&=\max(d_{\circ}(\phi_{1}),d_{\circ}(\phi_{2})),&d_{\neg}(\phi_{1}\wedge\phi_{2})&=\max(d_{\neg}(\phi_{1}),d_{\neg}(\phi_{2})).\end{array}\] (An executable sketch of the semantics and of these metrics is given after the definitions below.) Given natural numbers \(n,m\in\mathbb{N}\) we define the sets \(\mathcal{F}_{n}\) and \(\mathcal{F}^{m}\) as the fragments of HML formulas with bounded observation- and, respectively, negation-depth, i.e. \(\mathcal{F}_{n}=\{\phi\mid d_{\circ}(\phi)\leqslant n\}\), and \(\mathcal{F}^{m}=\{\phi\mid d_{\neg}(\phi)\leqslant m\}\). We write \(\mathcal{F}_{n}^{m}\) for the set \(\mathcal{F}_{n}^{m}=\mathcal{F}_{n}\cap\mathcal{F}^{m}\). Based on these metrics we define multiple notions of _minimal_ distinguishing formulas. Given an LTS \(L=(S,Act,\rightarrow)\), let \(\phi\in\mathcal{F}\) be an HML formula that distinguishes \(s\in S\) and \(t\in S\). Then in distinguishing \(s\) and \(t\), the formula \(\phi\) is called: * to have _minimal observation-depth_ iff \(\phi\) has the least nested modalities, i.e. for all \(\phi^{\prime}\in\mathcal{F}\) if \(s\not\sim_{\phi^{\prime}}t\) then \(d_{\circ}(\phi)\leqslant d_{\circ}(\phi^{\prime})\); * to have _minimal negation-depth_ iff \(\phi\) has the least nested negations, i.e., for all \(\phi^{\prime}\in\mathcal{F}\) if \(s\not\sim_{\phi^{\prime}}t\) then \(d_{\neg}(\phi)\leqslant d_{\neg}(\phi^{\prime})\); * to be _minimal_ iff \(\phi\) has the least number of modalities, i.e., for all \(\phi^{\prime}\in\mathcal{F}\) if \(s\not\sim_{\phi^{\prime}}t\) then \(|\phi|\leqslant|\phi^{\prime}|\); * to have _minimal observation- and negation-depth_ iff it is minimal in the lexicographical order of observation- and negation-depth, i.e., for all \(\phi^{\prime}\in\mathcal{F}\) if \(s\not\sim_{\phi^{\prime}}t\) then \(d_{\circ}(\phi)\leqslant d_{\circ}(\phi^{\prime})\), and if \(d_{\circ}(\phi)=d_{\circ}(\phi^{\prime})\) then \(d_{\neg}(\phi)\leqslant d_{\neg}(\phi^{\prime})\); * _irreducible_ [6, Def. 2.5] iff no \(\phi^{\prime}\) obtained by replacing a non-trivial subformula of \(\phi\) with the formula \(tt\) distinguishes \(s\) from \(t\).
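The semantics \(\llbracket-\rrbracket\) and the three metrics above are small enough to be made executable. The following sketch is our own illustration (the encoding of formulas as nested tuples and all function names are our choices, not from the paper); it evaluates a formula on the LTS \(\mathcal{A}_{3}\) of Example 3.

```python
# A minimal executable sketch of the HML semantics and the three metrics;
# the tuple encoding of formulas and all names are illustrative.
TT = ('tt',)

def obs(a, phi): return ('obs', a, phi)   # <a>phi
def neg(phi):    return ('neg', phi)      # not phi
def conj(p, q):  return ('and', p, q)     # p and q

def sem(phi, states, trans):
    """[[phi]]: set of states satisfying phi; trans maps (s, a) -> set of targets."""
    tag = phi[0]
    if tag == 'tt':
        return set(states)
    if tag == 'obs':
        _, a, sub = phi
        sat = sem(sub, states, trans)
        return {s for s in states if trans.get((s, a), set()) & sat}
    if tag == 'neg':
        return set(states) - sem(phi[1], states, trans)
    _, p, q = phi
    return sem(p, states, trans) & sem(q, states, trans)

def size(phi):
    tag = phi[0]
    if tag == 'tt':  return 0
    if tag == 'obs': return 1 + size(phi[2])
    if tag == 'neg': return size(phi[1])
    return size(phi[1]) + size(phi[2])

def d_obs(phi):
    tag = phi[0]
    if tag == 'tt':  return 0
    if tag == 'obs': return 1 + d_obs(phi[2])
    if tag == 'neg': return d_obs(phi[1])
    return max(d_obs(phi[1]), d_obs(phi[2]))

def d_neg(phi):
    tag = phi[0]
    if tag == 'tt':  return 0
    if tag == 'obs': return d_neg(phi[2])
    if tag == 'neg': return 1 + d_neg(phi[1])
    return max(d_neg(phi[1]), d_neg(phi[2]))

# The LTS A_3 from Example 3: x_i --a--> x_{i-1}
states = {0, 1, 2, 3}
trans = {(i, 'a'): {i - 1} for i in range(1, 4)}
phi = obs('a', obs('a', obs('a', TT)))       # <a><a><a>tt
print(sem(phi, states, trans))               # {3}: distinguishes x_3 from x_2
print(size(phi), d_obs(phi), d_neg(phi))     # 3 3 0
```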
The first three notions correspond directly to the metrics we defined. The notion of _irreducible_ distinguishing formulas corresponds to the minimality notion used in the work by Cleaveland [6]. The different notions are not comparable. This is witnessed by the LTS \(M\) pictured in Figure 2. The formula \(\phi_{1}=\langle a\rangle\langle a\rangle tt\) distinguishes \(s_{0}\) and \(s_{1}\) since \(s_{0}\in\llbracket\phi_{1}\rrbracket\) and \(s_{1}\not\in\llbracket\phi_{1}\rrbracket\). Additionally, \(\phi_{1}\) is irreducible, since any formula obtained by replacing a subformula by \(tt\) is not a distinguishing formula. However, the formula \(\phi_{1}\) is not _minimal_ since the formula \(\phi_{2}=\langle b\rangle tt\) also distinguishes \(s_{0}\) and \(s_{1}\). #### Representation A note has to be made on the representation of distinguishing formulas. It is known that distinguishing formulas can grow very large. In fact, there is a family of LTSs that showcases an exponential lower bound on the size of the minimal distinguishing formula [8, 25]. This exponential lower bound is not in contradiction with the polynomial-time algorithm from Cleaveland [6], since [6] uses equations to represent the subformulas. For example, the formula \(\langle a\rangle\langle b\rangle\langle c\rangle tt\wedge\langle b\rangle\langle c\rangle tt\) can be represented using the equations \(\phi_{1}=\langle a\rangle\phi_{2}\wedge\phi_{2}\) and \(\phi_{2}=\langle b\rangle\langle c\rangle tt\), or as the term in Figure 3. The shared representation does not change the observation-depth and the negation-depth. The size of a formula is affected, but this does not change the NP-hardness result.
Figure 3: An HML formula represented as a shared term.
#### Correspondences There are strong correspondences between different fragments of HML on the one hand and \(m\)-nested similarity and bisimilarity on the other hand. We use these to obtain minimal distinguishing formulas. The first theorem states that those HML formulas that have at most \(k\) nested observations exactly capture \(k\)-bisimilarity. (cf. [11, Theorem 2.2]) Given an LTS \(L=(S,Act,\rightarrow)\) and two states \(s,t\in S\). For every \(k\in\mathbb{N}\), \[s\leftrightarrow_{k}t\iff s\sim_{\mathcal{F}_{k}}t.\] In this work we are mainly interested in the contrapositive of this theorem. For every \(k\in\mathbb{N}\), two states \(s,t\in S\) are not \(k\)-bisimilar iff there is a \(\phi\in\mathcal{F}_{k}\) that distinguishes \(s\) and \(t\), i.e. \(s\not\sim_{\phi}t\). For this reason, for every \(k\in\mathbb{N}\) we call \(s\) and \(t\) \(k\)_-distinguishable_ iff \(s\not\leftrightarrow_{k}t\). We call the states \(s\) and \(t\) _distinguishable_ iff they are \(k\)-distinguishable for some \(k\in\mathbb{N}\). Given an LTS \(L=(S,Act,\rightarrow)\) and two states \(s,t\in S\). For every \(k\in\mathbb{N}\), \[s\not\leftrightarrow_{k}t\iff\text{there is a formula }\phi\in\mathcal{F}_{k}\text{ such that }s\not\sim_{\phi}t.\] In [10] it is shown that fragments of HML with bounded negation-depth allow a similar relational classification. The following theorem relates the fragment \(\mathcal{F}^{m}\) to \(m\)-nested similarity inclusion. (cf. [10, Corollary 8.7.6]) Let \(L=(S,Act,\rightarrow)\) be an LTS, then for all \(m\in\mathbb{N}\), and states \(s,t\in S\): \[s\preceq^{m}t\iff s\leqslant_{\mathcal{F}^{m}}t.\] The main use for our work is that if two states are not \(m\)-nested similar, then there is a distinguishing formula with at most \(m\) nested negations.
Let \(L=(S,Act,\rightarrow)\) be an LTS, then for all \(m\in\mathbb{N}\), and states \(s,t\in S\): \[s\not\preceq^{m}t\iff\text{there is a formula }\phi\in\mathcal{F}^{m}\text{ s.t. }s\in\llbracket\phi\rrbracket\text{ and }t\not\in\llbracket\phi\rrbracket.\] Let us recall the LTS \(\mathcal{A}_{3}\) from Example 3, drawn in Figure 1(a). In this LTS we see that \(x_{3}\leftrightarrow_{2}x_{2}\), but \(x_{3}\not\leftrightarrow_{3}x_{2}\). As a result of Corollary 3 we know that there is a formula \(\phi\in\mathcal{F}_{3}\) that distinguishes \(x_{3}\) and \(x_{2}\). This is witnessed by the formula \(\phi=\langle a\rangle\langle a\rangle\langle a\rangle tt\in\mathcal{F}_{3}\), which is a distinguishing formula, since \(x_{3}\in\llbracket\phi\rrbracket\) and \(x_{2}\not\in\llbracket\phi\rrbracket\). We also see that \(x_{3}\sim_{\mathcal{F}_{2}}x_{2}\), hence there is no such formula in \(\mathcal{F}_{2}\). For the LTS \(\mathcal{B}_{3}\) from Example 7, we aim to distinguish the states \(x_{3}\) and \(y_{3}\). According to Corollary 3 there is a distinguishing formula \(\phi\in\mathcal{F}^{3}\), since \(x_{3}\not\preceq^{3}y_{3}\). This is witnessed by the formula \(\phi=\langle a\rangle\neg\langle a\rangle\neg\langle a\rangle\neg\langle a\rangle tt\). This is a distinguishing formula as \(x_{3}\in\llbracket\phi\rrbracket\) and \(y_{3}\not\in\llbracket\phi\rrbracket\). Corollary 3 also shows that this is the minimal negation-depth formula distinguishing \(x_{3}\) and \(y_{3}\), as \(x_{3}\simeq^{2}y_{3}\). #### Traces Let \(Act\) be a finite set of action labels. We denote by \(Act^{*}:=\bigcup_{i\in\mathbb{N}}Act^{i}\) the set of all finite sequences over the action labels \(Act\). We write \(\varepsilon\) for the empty sequence. For sequences \(w,u\in Act^{*}\), we denote by \(|w|\) the length of \(w\) and by \(w\cdot u\) the concatenation of \(w\) and \(u\), which is sometimes also written as \(wu\). **Definition 14**.: _Given an LTS \(L=(S,Act,\rightarrow)\). The set of traces \(\mathit{Tr}(s)\subseteq Act^{*}\) of a state \(s\in S\) is the smallest set satisfying:_ 1. \(\varepsilon\in\mathit{Tr}(s)\)_, and_ 2. _for an action_ \(a\in Act\) _and state_ \(s^{\prime}\in S\)_, if a trace_ \(w\in\mathit{Tr}(s^{\prime})\) _and_ \(s\xrightarrow{a}s^{\prime}\)_, then_ \(aw\in\mathit{Tr}(s)\)_._ Inductively, we define the formula \(\phi_{w}\) for every word \(w\in Act^{*}\), such that \(\phi_{\varepsilon}=\mathit{tt}\), and \(\phi_{aw}=\langle a\rangle\phi_{w}\). We call a formula \(\phi\in\mathcal{F}\) a _trace-formula_ iff there is a sequence \(w\in Act^{*}\) such that \(\phi=\phi_{w}\). **Lemma 15**.: _Let \(L=(S,Act,\rightarrow)\) be an LTS, and \(w\in Act^{*}\) a trace. Then for all \(s\in S\):_ \[s\in\llbracket\phi_{w}\rrbracket\iff w\in\mathit{Tr}(s).\] Two states \(s\in S\) and \(t\in S\) in an LTS \(L=(S,Act,\rightarrow)\) are said to be trace-equivalent iff \(\mathit{Tr}(s)=\mathit{Tr}(t)\). Bisimilarity is a more fine-grained equivalence than trace equivalence. Two states \(s\in S\) and \(t\in S\) can be trace-equivalent while not being bisimilar. In this case there is a formula \(\phi\in\mathcal{F}\) such that \(s\not\sim_{\phi}t\), and we know that \(\phi\) is not a trace-formula. However, \(\phi\) contains traces that are both traces of \(s\) and \(t\).
To make this more precise we define the traces of a formula by induction for formulas \(\phi,\phi_{1},\phi_{2}\in\mathcal{F}\) as follows: \[\mathit{Tr}(tt)=\{\varepsilon\},\] \[\mathit{Tr}(\langle a\rangle\phi)=\{a\}\cup\{a\cdot w\mid w\in\mathit{Tr}(\phi)\},\] \[\mathit{Tr}(\neg\phi)=\mathit{Tr}(\phi),\] \[\mathit{Tr}(\phi_{1}\wedge\phi_{2})=\mathit{Tr}(\phi_{1})\cup\mathit{Tr}(\phi_{2}).\] The traces of a formula allow us to state the correspondence between \(k\)-distinguishability and the length of shared traces. We formulate this using the minimal observation depth that, given two distinguishable states, yields the smallest \(i\in\mathbb{N}\) such that the states are \(i\)-distinguishable: **Definition 16**.: _Let \(L=(S,Act,\rightarrow)\) be an LTS. We define the minimal observation depth \(\Delta:S\times S\rightarrow\mathbb{N}\cup\{\infty\}\) by_ \[\Delta(s,t)=\left\{\begin{array}{ll}i&\text{if }s\not\leftrightarrow_{i}t\text{ and }s\leftrightarrow_{i-1}t,\\ \infty&\text{if }s\leftrightarrow t.\end{array}\right.\]
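Definition 16 suggests a direct, if naive, way to compute \(\Delta\): iterate the refinements \(\leftrightarrow_{0},\leftrightarrow_{1},\ldots\) until the pair separates or the sequence stabilises. The sketch below is our own illustration (the LTS encoding and function names are not from the paper) and runs on the LTS \(\mathcal{A}_{3}\).

```python
# Naive computation of the k-bisimilarity relations and of Delta (Definition 16);
# the dict-based LTS encoding and all names are illustrative.
def k_bisim(states, actions, trans):
    """Yield the relations <->_0, <->_1, ... as sets of pairs, until stable."""
    rel = {(s, t) for s in states for t in states}          # <->_0
    while True:
        yield rel
        nxt = set()
        for s, t in rel:
            ok = all(any((sp, tp) in rel for tp in trans.get((t, a), ()))
                     for a in actions for sp in trans.get((s, a), ())) and \
                 all(any((tp, sp) in rel for sp in trans.get((s, a), ()))
                     for a in actions for tp in trans.get((t, a), ()))
            if ok:
                nxt.add((s, t))
        if nxt == rel:          # <->_k = <->_{k+1} implies <->_k = <->
            return
        rel = nxt

def delta(s, t, states, actions, trans):
    """Delta(s, t): smallest i with s not <->_i t, or None if s <-> t."""
    for i, rel in enumerate(k_bisim(states, actions, trans)):
        if (s, t) not in rel:
            return i
    return None                 # bisimilar: Delta is infinity

# Example: A_3 with x_i --a--> x_{i-1}; Delta(x_3, x_2) = 3
states, actions = {0, 1, 2, 3}, {'a'}
trans = {(i, 'a'): {i - 1} for i in range(1, 4)}
print(delta(3, 2, states, actions, trans))    # 3
```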
## 3 NP-hardness results In this section we show that finding minimal distinguishing formulas is NP-hard. We first show that the existence of a short distinguishing trace is NP-complete, similar to a result of Hunt [13, Sec. 2.2] on acyclic NFAs. A corollary of the construction is that finding the _minimal size_ distinguishing formula is NP-hard. We define the decision problems _TRACE-DIST_ and _MIN-DIST_. Given an LTS \(L=(S,Act,\rightarrow)\), two states \(s,t\in S\) such that \(s\not\leftrightarrow_{i}t\) for \(i=|S|\), and a number \(l\in\mathbb{N}\). _TRACE-DIST_: There is a trace-formula \(\phi\in\mathcal{F}_{i}\) such that \(\phi\) distinguishes \(s\) and \(t\). _MIN-DIST_: There is a formula \(\phi\in\mathcal{F}_{i}\) such that \(\phi\) distinguishes \(s\) and \(t\), and \(|\phi|\leqslant l\). We point out that _TRACE-DIST_ is not the same as deciding trace-equivalence. The problem _TRACE-DIST_ decides whether there is a distinguishing trace of length \(i\), where \(i\) is smaller than the number of states; a minimal distinguishing trace might be super-polynomial in size [7, Sec. 5]. ### Reduction We prove that _TRACE-DIST_ is NP-complete and _MIN-DIST_ is NP-hard by a reduction from the decision problem _CNF-SAT_. This decision problem decides whether a given propositional formula \(\mathcal{C}\) in conjunctive normal form (CNF) is satisfiable. For this we define an LTS \(L_{\mathcal{C}}\), based on the CNF formula \(\mathcal{C}\). Let \(\mathcal{C}=C_{1}\wedge\ldots\wedge C_{n}\) be a CNF formula over the set of proposition letters \(\text{Prop}=\{p_{1},\ldots,p_{k}\}\). We define the LTS \(L_{\mathcal{C}}=(S,Act,\rightarrow)\) as follows: * The set of states \(S\) is defined as \[S=\{\text{unsat}_{i}^{C}\mid C\in\{C_{1},\ldots,C_{n}\},i\in[0,k]\}\cup\{\text{sat}_{i}\mid i\in[0,k]\}\cup\{\bot_{i}\mid i\in[0,k]\}\cup\{s,t,\delta\}.\] * The set of actions \(Act\) is defined as \[Act=\{p,\overline{p}\mid p\in\text{Prop}\}\cup\{\text{init},\text{false}\}.\] * The relation \(\rightarrow\) contains for each \(C\in\{C_{1},\ldots,C_{n}\}\) and \(i\in[1,k]\): \[\text{unsat}_{i-1}^{C}\xrightarrow{p_{i}}\left\{\begin{array}{ll}\text{sat}_{i}&\text{if }p_{i}\text{ is a literal of }C,\\ \text{unsat}_{i}^{C}&\text{otherwise,}\end{array}\right.\] \[\text{unsat}_{i-1}^{C}\xrightarrow{\overline{p_{i}}}\left\{\begin{array}{ll}\text{sat}_{i}&\text{if }\neg p_{i}\text{ is a literal of }C,\\ \text{unsat}_{i}^{C}&\text{otherwise,}\end{array}\right.\] \[\text{sat}_{i-1}\xrightarrow{x}\text{sat}_{i}\text{ for }x\in\{p_{i},\overline{p_{i}}\},\text{ and}\] \[\bot_{i-1}\xrightarrow{x}\bot_{i}\text{ for }x\in\{p_{i},\overline{p_{i}}\}.\] Additionally, it contains the auxiliary transitions \[\text{unsat}_{k}^{C}\xrightarrow{\text{false}}\delta\text{ for }C\in\{C_{1},\ldots,C_{n}\},\] \[\bot_{k}\xrightarrow{\text{false}}\delta,\] \[t\xrightarrow{\text{init}}\text{unsat}_{0}^{C}\text{ for }C\in\{C_{1},\ldots,C_{n}\},\] \[t\xrightarrow{\text{init}}\text{sat}_{0},\] \[s\xrightarrow{\text{init}}\text{sat}_{0},\text{ and}\] \[s\xrightarrow{\text{init}}\bot_{0}.\] The LTS \(L_{\mathcal{C}}\) for the CNF formula \(\mathcal{C}=C_{1}\wedge C_{2}\) with clauses \(C_{1}=\neg p_{1}\vee\neg p_{2}\) and \(C_{2}=p_{2}\lor p_{3}\) is depicted in Figure 4.
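The construction is mechanical, and a small script makes it easy to experiment with. The following sketch is our own code (state and action names such as `unsat0_0` are illustrative encodings of the states above); it builds the transition relation of \(L_{\mathcal{C}}\) for the example formula of Figure 4 and checks the state count \((n+2)(k+1)+3\).

```python
# Sketch of the reduction: build L_C from a CNF formula given as a list of
# clauses over propositions 1..k (positive/negative ints, DIMACS-style).
def build_lts(clauses, k):
    trans = {}  # (state, action) -> set of successor states

    def add(s, a, t):
        trans.setdefault((s, a), set()).add(t)

    for i in range(1, k + 1):
        for x in (f'p{i}', f'np{i}'):                  # np{i} encodes "not p_i"
            add(f'sat{i-1}', x, f'sat{i}')
            add(f'bot{i-1}', x, f'bot{i}')
        for c, clause in enumerate(clauses):
            # reading p_i (resp. not p_i) satisfies C iff that literal occurs in C
            add(f'unsat{i-1}_{c}', f'p{i}',
                f'sat{i}' if i in clause else f'unsat{i}_{c}')
            add(f'unsat{i-1}_{c}', f'np{i}',
                f'sat{i}' if -i in clause else f'unsat{i}_{c}')
    for c in range(len(clauses)):
        add(f'unsat{k}_{c}', 'false', 'delta')
        add('t', 'init', f'unsat0_{c}')
    add(f'bot{k}', 'false', 'delta')
    add('t', 'init', 'sat0')
    add('s', 'init', 'sat0')
    add('s', 'init', 'bot0')
    return trans

# C = (not p1 or not p2) and (p2 or p3), as in Figure 4
trans = build_lts([[-1, -2], [2, 3]], k=3)
states = {s for (s, _a) in trans} | set().union(*trans.values())
print(len(states))   # 19 = (n+2)(k+1)+3 for n=2, k=3
```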
In this construction an interpretation of the propositions \(\mathit{Prop}=\{p_{1},\ldots,p_{k}\}\) is directly related to a word \(w=a_{1}\ldots a_{k}\), where \(a_{i}\in\{p_{i},\overline{p_{i}}\}\) for every \(i\in[1,k]\). The set of truth assignments encoded as words is defined as: \[\mathit{Truths}=\{a_{1}\ldots a_{k}\mid a_{i}\in\{p_{i},\overline{p_{i}}\}\text{ for all }i\in[1,k]\}.\] Given a truth assignment \(\rho:\mathit{Prop}\rightarrow\mathbb{B}\), we define \(w_{\rho}\) as \(w_{\rho}=a_{1}\ldots a_{k}\), where \(a_{i}=p_{i}\) if \(\rho(p_{i})=\mathit{true}\) and \(a_{i}=\overline{p_{i}}\) otherwise. Conversely, a word \(w=a_{1}\ldots a_{k}\) from \(\mathit{Truths}\) represents the truth assignment \(\rho_{w}\) defined for each \(i\in[1,k]\) as: \[\rho_{w}(p_{i})=\left\{\begin{array}{ll}\mathit{true}&\text{if }a_{i}=p_{i},\\ \mathit{false}&\text{if }a_{i}=\overline{p}_{i}.\end{array}\right.\] The idea of the construction of \(L_{\mathcal{C}}\) is that it contains a \(\bot\) component, a sat component, and an unsat\({}^{C}\) component for every clause \(C\). All components are deterministic and acyclic, and hence describe a finite set of traces. All the traces of these components start with a truth assignment \(w\in\mathit{Truths}\). By construction, for every truth assignment \(w\in\mathit{Truths}\), \(w\cdot\mathtt{false}\in\mathit{Tr}(\bot_{0})\). In this way the \(\bot\) component represents falsehood. Conversely, the state \(\mathtt{sat}_{0}\) represents a tautology, since for any truth assignment \(w\in\mathit{Truths}\), \(w\cdot\mathtt{false}\not\in\mathit{Tr}(\mathtt{sat}_{0})\). For every clause \(C\) and truth assignment \(w\in\mathit{Truths}\), the state \(\mathtt{unsat}_{0}^{C}\) contains \(w\cdot\mathtt{false}\) as a trace iff \(\rho_{w}\) does not satisfy \(C\). **Lemma 19**.: _Let \(L_{\mathcal{C}}=(S,Act,\rightarrow)\) be the LTS for a CNF formula \(\mathcal{C}=C_{1}\wedge\ldots\wedge C_{n}\) with propositions \(\{p_{1},\ldots,p_{k}\}\), then:_ \[\mathit{Tr}(\mathtt{sat}_{0})=\{u\in Act^{*}\mid\exists w\in\mathit{Truths}.\ u\text{ is a prefix of }w\},\] \[\mathit{Tr}(\bot_{0})=\mathit{Tr}(\mathtt{sat}_{0})\cup\{w\cdot\mathtt{false}\mid w\in\mathit{Truths}\},\text{ and}\] \[\mathit{Tr}(\mathtt{unsat}_{0}^{C})=\mathit{Tr}(\mathtt{sat}_{0})\cup\{w\cdot\mathtt{false}\mid w\in\mathit{Truths}\text{ and }\rho_{w}\text{ does not satisfy }C\}.\] This lemma is easily verified from the construction of \(L_{\mathcal{C}}\). **Corollary 20**.: _Let \(w\in\mathit{Truths}\) be a trace, and \(L_{\mathcal{C}}\) the LTS for the CNF formula \(\mathcal{C}=C_{1}\wedge\ldots\wedge C_{n}\). Then for any clause \(C\in\{C_{1},\ldots,C_{n}\}\):_ \[w\cdot\mathtt{false}\in\mathit{Tr}(\mathtt{unsat}_{0}^{C})\iff C\text{ is not satisfied under }\rho_{w}.\] The following lemma contains the main idea for the reduction in the main theorem, which shows that _TRACE-DIST_ is NP-complete. **Lemma 21**.: _Given the LTS \(L_{\mathcal{C}}=(S,Act,\rightarrow)\) for a CNF formula \(\mathcal{C}=C_{1}\wedge\ldots\wedge C_{n}\), with propositions \(\mathit{Prop}=\{p_{1},\ldots,p_{k}\}\). Then there is a trace \(w\in Act^{k+1}\) such that \(w\in\mathit{Tr}(\bot_{0})\), and \(w\not\in\mathit{Tr}(\mathtt{unsat}_{0}^{C})\) for every \(C\in\{C_{1},\ldots,C_{n}\}\), if and only if \(\mathcal{C}\) is satisfiable._ Proof.: We prove this in both directions separately.
\((\Rightarrow)\) As a witness, we obtain a trace \(w\in\mathit{Tr}(\bot_{0})\) of length at most \(k{+}1\) such that \(w\not\in\mathit{Tr}(\mathtt{unsat}_{0}^{C})\) for all clauses \(C\in\{C_{1},\ldots,C_{n}\}\). Since \(w\in\mathit{Tr}(\bot_{0})\), by Lemma 19 either \(w\in\mathit{Tr}(\mathtt{sat}_{0})\) or \(w\in\{v\cdot\mathtt{false}\mid v\in\mathit{Truths}\}\). Since \(\mathit{Tr}(\mathtt{sat}_{0})\subseteq\mathit{Tr}(\mathtt{unsat}_{0}^{C})\), and \(w\not\in\mathit{Tr}(\mathtt{unsat}_{0}^{C})\), there is a trace \(v\in\mathit{Truths}\) such that \(w=v\cdot\mathtt{false}\). By Corollary 20 all clauses \(C\) are satisfied by \(\rho_{v}\). This means \(\rho_{v}\) is a satisfying assignment for \(\mathcal{C}\). \((\Leftarrow)\) If there is a satisfying assignment \(\rho\) for \(\mathcal{C}\), then we show that \(w_{\rho}\cdot\mathtt{false}\) witnesses the implication. First observe that by definition \(w_{\rho}\cdot\mathtt{false}\in\mathit{Tr}(\bot_{0})\). Let \(C\in\{C_{1},\ldots,C_{n}\}\) be any clause. Since \(\rho\) is a satisfying assignment, \(C\) is satisfied under \(\rho\). This means by Corollary 20 that \(w_{\rho}\cdot\mathtt{false}\not\in\mathit{Tr}(\mathtt{unsat}_{0}^{C})\). Now we are ready to prove the main theorem of this section. **Theorem 22**.: _Deciding TRACE-DIST is NP-complete._ Proof.: First we verify that _TRACE-DIST_ is in NP. Given an LTS \(L=(S,Act,\rightarrow)\), and two states \(s,t\in S\). As a witness we get a formula \(\phi\in\mathcal{F}_{|S|}\) which is a trace-formula. Since \(d_{\circ}(\phi)\leqslant|S|\), this witness is polynomial in size. It is well known that given a formula \(\phi\) we can check in polynomial time whether \(s\sim_{\phi}t\). To show _TRACE-DIST_ is NP-hard we reduce _CNF-SAT_ to _TRACE-DIST_. Let \(\mathcal{C}=C_{1}\wedge\ldots\wedge C_{n}\) be a CNF formula over the propositions \(\mathit{Prop}=\{p_{1},\ldots,p_{k}\}\). Then for the LTS \(L_{\mathcal{C}}\) we show there is a distinguishing trace of length smaller than \(|S|\) for \(s\in S\) and \(t\in S\) if and only if \(\mathcal{C}\) is satisfiable. We begin by observing the sets \(\mathit{Tr}(s)\), \(\mathit{Tr}(t)\): \[\mathit{Tr}(s)=\{\varepsilon,\mathtt{init}\}\cup\{\mathtt{init}\cdot w\mid w\in\mathit{Tr}(\bot_{0})\cup\mathit{Tr}(\mathtt{sat}_{0})\},\] \[\mathit{Tr}(t)=\{\varepsilon,\mathtt{init}\}\cup\{\mathtt{init}\cdot w\mid w\in\mathit{Tr}(\mathtt{sat}_{0})\cup\bigcup_{i\in[1,n]}\mathit{Tr}(\mathtt{unsat}_{0}^{C_{i}})\}.\] Since for every \(C\in\{C_{1},\ldots,C_{n}\}\), \(\mathit{Tr}(\mathtt{unsat}_{0}^{C})\subseteq\mathit{Tr}(\bot_{0})\) and \(\mathit{Tr}(\mathtt{sat}_{0})\subseteq\mathit{Tr}(\mathtt{unsat}_{0}^{C})\), we know that if there is a distinguishing trace it has to be \(\mathtt{init}\cdot w\in\mathit{Tr}(s)\) for a \(w\in\mathit{Tr}(\bot_{0})\). By Lemma 21 this trace \(w\) exists iff \(\mathcal{C}\) is satisfiable. Hence, the states \(s\) and \(t\) are in _TRACE-DIST_ if and only if \(\mathcal{C}\) is in _CNF-SAT_. The LTS \(L_{\mathcal{C}}\) can be computed in polynomial time, as it has \((n+2)(k+1)+3\) states and \(2k(n+2)+2n+4\) transitions. This concludes the proof that _TRACE-DIST_ is NP-complete. In the reduction a distinguishing trace is also a minimal distinguishing formula, which means we can generalise our NP-hardness result. **Theorem**.: _Deciding MIN-DIST is NP-hard._ Proof.: We prove this by a similar reduction as in the proof of Theorem 22.
The intuition is that, given a CNF formula \(\mathcal{C}=C_{1}\wedge\ldots\wedge C_{n}\) with propositions \(\mathit{Prop}=\{p_{1},\ldots,p_{k}\}\), in the LTS \(L_{\mathcal{C}}\) a distinguishing formula \(\phi\in\mathcal{F}\) such that \(|\phi|=k+2\) is necessarily a trace-formula. We reduce _CNF-SAT_ to _MIN-DIST_. Let \(\mathcal{C}=C_{1}\wedge\ldots\wedge C_{n}\) be a CNF formula over the propositions \(\mathit{Prop}=\{p_{1},\ldots,p_{k}\}\). Then for the LTS \(L_{\mathcal{C}}\) we show there is a distinguishing formula \(\phi\in\mathcal{F}\) for \(s\in S\) and \(t\in S\) such that \(|\phi|\leqslant k+2\) if and only if \(\mathcal{C}\) is satisfiable. For the direction \(\Rightarrow\), assume a formula \(\phi\in\mathcal{F}\) exists such that \(|\phi|\leqslant k+2\) and \(s\not\sim_{\phi}t\). We show that this means \(\mathcal{C}\) is satisfiable. We observe by the deterministic behaviour that \(s\leftrightarrow_{k+1}t\). Hence, by Theorem 10 we know \(d_{\circ}(\phi)\geq k+2\). Since we assume \(|\phi|\leqslant k+2\), we know that \(d_{\circ}(\phi)=k+2\), so there are no non-trivial conjunctions, and we can rewrite \(\neg\neg\phi\mapsto\phi\). Hence, there is a formula \(\psi=\triangle_{1}\ldots\triangle_{k+2}\,tt\) such that for each \(i\in[1,k+2]\), \(\triangle_{i}\in\{\langle a_{i}\rangle,\neg\langle a_{i}\rangle\}\), for some \(a_{1},\ldots,a_{k+2}\in Act\), such that \(\llbracket\phi\rrbracket=\llbracket\psi\rrbracket\). By Lemma 17 there is a trace \(w\in\mathit{Tr}(\psi)\) such that \(|w|\geq k+2\) and \(w\in\mathit{Tr}(s)\cup\mathit{Tr}(t)\). The only trace of this length of \(s\) or \(t\) is of the shape \(w=\mathtt{init}\cdot\hat{p}_{1}\ldots\hat{p}_{k}\cdot\mathtt{false}\), where \(\hat{p}_{i}\in\{p_{i},\overline{p}_{i}\}\) for each \(i\in[1,k]\). This means that \(a_{1}=\mathtt{init}\), \(a_{j+1}=\hat{p}_{j}\) for each \(j\in[1,k]\), and \(a_{k+2}=\mathtt{false}\). We are going to show that the associated truth assignment \(\rho=\rho_{\hat{p}_{1}\ldots\hat{p}_{k}}\) satisfies \(\mathcal{C}\) by reductio ad absurdum. If \(\rho\) does not satisfy \(\mathcal{C}\), then there is a clause \(C\) such that \(C\) is not satisfied by \(\rho\). We claim for this clause that \(\mathtt{unsat}_{0}^{C}\sim_{\triangle_{2}\ldots\triangle_{k+2}tt}\bot_{0}\), and since both \(s\) and \(t\) have an init-transition to \(\mathtt{sat}_{0}\), this means \(\psi\) does not distinguish any of the derivatives. Hence \(s\sim_{\psi}t\), which is a contradiction. For the other direction, if \(\mathcal{C}\) is satisfiable, then by Lemma 21 there is a \(w\in Act^{k+1}\) such that \(w\in\mathit{Tr}(\bot_{0})\) and \(w\not\in\mathit{Tr}(\mathtt{unsat}_{0}^{C})\) for all clauses \(C\in\{C_{1},\ldots,C_{n}\}\). Using \(w\) we construct the distinguishing trace \(w^{\prime}=\mathtt{init}\cdot w\). Since \(w\in\mathit{Tr}(\bot_{0})\), \(w\not\in\mathit{Tr}(\mathtt{unsat}_{0}^{C})\) and by construction also \(w\not\in\mathit{Tr}(\mathtt{sat}_{0})\), it is the case that \(w^{\prime}\in\mathit{Tr}(s)\) and \(w^{\prime}\not\in\mathit{Tr}(t)\). This means the formula \(\phi_{w^{\prime}}\) is a distinguishing formula and \(|\phi_{w^{\prime}}|=k+2\), which finishes the second part of the proof. The problem _MIN-DIST_ is not a member of NP since a polynomially sized witness might not exist. However, there is always a 'shared' distinguishing formula of polynomial size.
Since we can compute in polynomial time if a shared formula is a distinguishing formula, the decision problem MIN-DIST formulated in terms of total 'shared' modalities is NP-complete.

## 4 Efficient algorithms

In this section we explain that, despite the NP-hardness results from the previous section, it is still possible to efficiently generate distinguishing formulas with minimal observation- and negation-depth. First, we introduce the method \(\phi(s,t)\) listed in Algorithm 1 that generates a minimal observation-depth distinguishing formula for the states \(s\) and \(t\). We extend \(\phi(s,t)\) to the function \(\psi_{i}(s,t)\) listed in Algorithm 2. This method computes a distinguishing formula with observation-depth of at most \(i\) and minimal negation-depth. Additionally, this procedure also prevents unnecessary conjuncts from being added. Finally, we indicate how to compute the equivalences \(\leftrightarrow_{1},\ldots,\leftrightarrow_{k}\), and the minimal observation- and negation-depth.

### The algorithm

For every \(i\in\mathbb{N}\), we define a function \(\delta_{i}:S\times S\to 2^{Act\times S}\) that gives all distinguishing observations. More precisely, given two \(i\)-distinguishable states \(s\in S\) and \(t\in S\), \(\delta_{i}(s,t)\) returns all pairs \((a,s^{\prime})\), where \(a\in Act\) and \(s^{\prime}\in S\), such that \(s\xrightarrow{a}s^{\prime}\) and \(s^{\prime}\) is \((i-1)\)-distinguishable from all targets \(t\xrightarrow{a}t^{\prime}\). The definition of \(\delta_{i}(s,t)\) is:

\[\delta_{i}(s,t)=\{(a,s^{\prime})\mid s\xrightarrow{a}s^{\prime}\text{ and }\forall t\xrightarrow{a}t^{\prime}.\ \Delta(s^{\prime},t^{\prime})\leqslant i-1\}.\]

Using the function \(\delta_{i}(s,t)\), we can compute a minimal observation-depth formula using the procedure listed as Algorithm 1. The procedure selects an action-state pair \((a,s^{\prime})\in\delta_{i}(s,t)\) and recursively distinguishes \(s^{\prime}\) from all \(a\)-derivatives of \(t\). If \(\delta_{i}(s,t)\) is empty, the negation \(\neg\phi_{i}(t,s)\) is returned; in this case \(\delta_{i}(t,s)\) is necessarily non-empty.

**Lemma**. _Given an LTS \(L=(S,Act,\rightarrow)\) and two states \(s,t\in S\). If \(s\nleftrightarrow_{i}t\) then \(\delta_{i}(s,t)\neq\emptyset\) or \(\delta_{i}(t,s)\neq\emptyset\)._

Proof.: As \(s\nleftrightarrow_{i}t\), there either is an \(s\xrightarrow{a}s^{\prime}\) such that \(s^{\prime}\nleftrightarrow_{i-1}t^{\prime}\) for all \(t\xrightarrow{a}t^{\prime}\), or vice versa there is a \(t\xrightarrow{a}t^{\prime}\) such that \(t^{\prime}\nleftrightarrow_{i-1}s^{\prime}\) for all \(s\xrightarrow{a}s^{\prime}\). In the first case \((a,s^{\prime})\in\delta_{i}(s,t)\), in the second case \((a,t^{\prime})\in\delta_{i}(t,s)\).

### Minimal negation-depth

In order to minimize the number of negations within the minimal observation-depth formula, we combine the notions of \(k\)-bisimilarity and \(m\)-nested similarity inclusion.

**Definition**. _Let \(L=(S,Act,\rightarrow)\) be an LTS, and \(k,m\in\mathbb{N}\). We define \(m\)-nested \(k\)-similarity inclusion, denoted \(\simeq_{k}^{m}\), inductively: for all \(s,t\in S\) we have \(s\simeq_{0}^{m}t\), and for \(k>0\), \(s\simeq_{k}^{m}t\) if and only if_

1. _if \(s\xrightarrow{a}s^{\prime}\) then there is a \(t\xrightarrow{a}t^{\prime}\) such that \(s^{\prime}\simeq_{k-1}^{m}t^{\prime}\), and_
2.
_if \(m>0\) and \(t\xrightarrow{a}t^{\prime}\), then there is an \(s\xrightarrow{a}s^{\prime}\) such that \(t^{\prime}\simeq_{k-1}^{m-1}s^{\prime}\)._

Similarly to the original Hennessy-Milner correspondences, we observe the correspondence between the fragment \(\mathcal{F}_{k}^{m}\) and the relation \(\simeq_{k}^{m}\).

**Lemma**. _Let \(L=(S,Act,\rightarrow)\) be an LTS. For any \(k,m\in\mathbb{N}\) and states \(s,t\in S\):_

\[s\leqslant_{\mathcal{F}_{k}^{m}}t\iff s\simeq_{k}^{m}t.\]

Related to the distance measure \(\Delta\), we define the directed minimal negation-depth measure for the relation \(\simeq_{k}^{m}\), for states that are not \(m\)-nested \(k\)-similar for some \(k,m\in\mathbb{N}\).

**Definition**. _Let \(L=(S,Act,\rightarrow)\) be an LTS and \(i\in\mathbb{N}\). We define the directed minimal negation-depth \(\overrightarrow{\Delta}_{i}:S\times S\rightarrow\mathbb{N}\cup\{\infty\}\) by_

\[\overrightarrow{\Delta}_{i}(s,t)=\left\{\begin{array}{ll}j&\text{if }s\not\simeq_{i}^{j}t\text{ and }s\simeq_{i}^{j-1}t,\\ \infty&\text{if }s\leftrightarrow_{i}t.\end{array}\right.\]

For every \(i,j\in\mathbb{N}\) we define a function \(\delta_{i}^{j}:S\times S\to 2^{Act\times S}\) that is similar to the function \(\delta_{i}\). It adds an extra limitation on the number of negations needed to distinguish the selected derivative from all \(a\)-observations of \(t\):

\[\delta_{i}^{j}(s,t)=\{(a,s^{\prime})\mid(a,s^{\prime})\in\delta_{i}(s,t)\text{ and }\forall t\xrightarrow{a}t^{\prime}.\ \overrightarrow{\Delta}_{i-1}(s^{\prime},t^{\prime})\leqslant j\}.\]

The next lemma guarantees that a suitable distinguishing observation exists.

**Lemma**. _Given an LTS \(L=(S,Act,\rightarrow)\) and two states \(s,t\in S\). Then for all \(i,j\in\mathbb{N}\), if \(s\not\simeq_{i}^{j}t\) then \(\delta_{i}^{j}(s,t)\neq\emptyset\) or \(\delta_{i}^{j-1}(t,s)\neq\emptyset\)._

In Algorithm 2 we give the method \(\psi_{i}(s,t)\) that, given an LTS \(L=(S,Act,\rightarrow)\) and \(i\)-distinguishable states \(s,t\in S\), generates a formula such that \(s\in\llbracket\psi_{i}(s,t)\rrbracket\) and \(t\not\in\llbracket\psi_{i}(s,t)\rrbracket\) with observation-depth at most \(i\) and minimal negation-depth. The algorithm attempts to find an action label \(a\in Act\) and an \(a\)-derivative \(s\xrightarrow{a}s^{\prime}\) such that all \(a\)-derivatives \(t^{\prime}\) with \(t\xrightarrow{a}t^{\prime}\) are distinguishable from \(s^{\prime}\) with a formula with at most \(i-1\) nested observations and \(j\) nested negations. These pairs \((a,s^{\prime})\) are given by the function \(\delta_{i}^{j}(s,t)\). In Line 6 one of these witnesses is chosen. If there is more than one suitable derivative, one is chosen at random.
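Before the listing, the following small self-contained Python sketch (our own illustration, not from the paper; the toy LTS and all state and action names are made up) shows how the ingredients the listing relies on, the \(i\)-bisimilarity levels and \(\delta_{i}(s,t)\), can be computed directly from the definitions:

```python
from itertools import product

# Toy LTS as a set of transitions (source, action, target).
TRANS = {("s", "a", "s1"), ("s", "a", "s2"), ("t", "a", "t1"),
         ("s1", "b", "s1"), ("t1", "b", "t1"), ("t1", "c", "t1")}
STATES = {x for (x, _, _) in TRANS} | {x for (_, _, x) in TRANS}

def succ(s, a):
    return {t2 for (s2, a2, t2) in TRANS if s2 == s and a2 == a}

def actions(s):
    return {a for (s2, a, _) in TRANS if s2 == s}

def bisim_levels(max_i):
    """rel[i] is the i-bisimilarity relation, computed by level-wise refinement."""
    rel = [set(product(STATES, STATES))]   # every pair is 0-bisimilar
    for i in range(1, max_i + 1):
        r = set()
        for s, t in rel[i - 1]:
            fwd = all(any((s2, t2) in rel[i - 1] for t2 in succ(t, a))
                      for a in actions(s) for s2 in succ(s, a))
            bwd = all(any((s2, t2) in rel[i - 1] for s2 in succ(s, a))
                      for a in actions(t) for t2 in succ(t, a))
            if fwd and bwd:
                r.add((s, t))
        rel.append(r)
    return rel

def delta(i, s, t, rel):
    """delta_i(s, t): pairs (a, s') with s -a-> s' such that s' is
    (i-1)-distinguishable from every a-derivative t' of t."""
    return {(a, s2)
            for a in actions(s) for s2 in succ(s, a)
            if all((s2, t2) not in rel[i - 1] for t2 in succ(t, a))}

rel = bisim_levels(3)
print(("s", "t") in rel[1], ("s", "t") in rel[2])  # True False: 2-distinguishable
print(sorted(delta(2, "s", "t", rel)))             # [('a', 's1'), ('a', 's2')]
```

On this toy LTS the minimal distinguishing formula is \(\langle a\rangle\neg\langle c\rangle\top\): both \(a\)-derivatives of \(s\) refuse \(c\), while the only \(a\)-derivative of \(t\) does not.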
```
input : Two states s, t in S such that s and t are i-distinguishable for some i in N
output: A formula phi in F_i such that s in [[phi]] and t not in [[phi]]

 1  Function psi_i(s, t) is
 2      j := D>_i(s, t);                  // directed minimal negation-depth
 3      X := delta_i^j(s, t);
 4      if X = {} then
 5          return not psi_i(t, s)
 6      Select (a, s') in X;
 7      T := { t' | t -a-> t' };  Phi := {};
 8      while T != {} do
 9          Select t_max in T such that D>_{i-1}(s', t_max) >= D>_{i-1}(s', t') for all t' in T;
10          phi_max := psi_{i-1}(s', t_max);
11          Phi := Phi + { phi_max };
12          T := T intersect [[phi_max]];
13      end while
14      return <a> ( AND of all phi in Phi )
```

**Algorithm 2** Generate a distinguishing formula with minimal observation- and negation-depth.

### Partition refinement

In order to execute Algorithm 2, we need to compute the functions \(\Delta\) and \(\overrightarrow{\Delta}\). In this section we propose a simple partition refinement algorithm that does exactly this by first computing the relations \(\leftrightarrow_{0},\leftrightarrow_{1},\ldots,\leftrightarrow_{k}\) iteratively. The pseudocode is listed in Algorithm 3. In contrast to the more efficient partition refinement algorithms [12, 21, 24], we guarantee that _older_ blocks are used first as splitters. This method is inspired by [23] where pairwise minimal distinguishing words are computed.

Most algorithms deciding bisimilarity are so-called partition refinement algorithms [14, 21]. Our algorithms are also based on partition refinement. A _partition_ \(\pi\) of a set \(S\) is a disjoint cover of \(S\), i.e. a set of non-empty subsets of \(S\) such that every element of \(S\) is in exactly one subset. The elements \(B\in\pi\) are called _blocks_. A partition \(\pi\) induces the equivalence relation \(\sim_{\pi}\subseteq S\times S\) in which the blocks are the equivalence classes, i.e. \(\sim_{\pi}=\{(s,t)\mid\exists B\in\pi.\ s,t\in B\}\). In the algorithm we filter a set of states \(U\) on a distinguishing observation with respect to a given set of states \(V\) and an action \(a\in Act\), i.e.: \(\text{split}_{a}(U,V)=\{s\in U\mid\exists s^{\prime}\in V.\ s\xrightarrow{a}s^{\prime}\}\).

The next theorem states that the procedure listed as Algorithm 3 produces a sequence of partitions, in which the \(i\)-th partition induces \(i\)-bisimilarity.

**Theorem 31**: _Given an LTS \(L=(S,Act,\rightarrow)\) and partitions \(\pi_{0},\ldots,\pi_{k}\) produced by Algorithm 3. Then \(\sim_{\pi_{i}}=\leftrightarrow_{i}\), for all \(0\leq i\leq k\)._

It is possible to compute the function \(\overrightarrow{\Delta}_{i}(s,t)\) in polynomial time from the \(k\)-bisimilarity relations calculated in Algorithm 3. It is important to use dynamic programming such that \(\overrightarrow{\Delta}_{i}(s,t)\) for every \(i\), \(s\) and \(t\) is only calculated once.

### Evaluation

The computation of Algorithm 2 needs to account for redundancies to guarantee a polynomial time algorithm. We use dynamic programming to achieve this. For any pair of states \(s,t\in S\), if the function \(\psi_{i}(s,t)\) is invoked, it stores the generated shared formula.
Whenever the function is called again, the previously generated formula is used, with only constant extra computing and memory usage. Hence, given an LTS \(L=(S,Act,\rightarrow)\), the number of recursive calls is limited to the combinations of states and levels \(k\leq|S|\), i.e. \(\mathcal{O}(|S|^{3})\) calls.

**Corollary 32**: _Given an LTS \(L=(S,Act,\rightarrow)\) and a pair of distinguishable states \(s,t\in S\), the following is computable in polynomial time:_

* _A minimal observation-depth distinguishing formula,_
* _A minimal observation- and negation-depth distinguishing formula._

A naive implementation of the algorithms requires quadratic memory. This could be a bottleneck for large state spaces. Representing the equivalences \(\leftrightarrow_{k}\) as a splitting tree [17] is more memory efficient. In addition, an optimization is to generate distinguishing formulas only between equivalence classes of the generated equivalences, instead of individual states.

We implemented a prototype of the method introduced here. We also implemented the method proposed by Cleaveland [6], in which we decided bisimilarity by a partition refinement algorithm where the selected splitter is the most recently created block, since heuristically this has the best runtime [1, 2]. For Cleaveland's method the strategy for splitter selection matters for the size of the formulas generated. However, regardless of the strategy chosen, the formulas that our method generates are always more concise in all metrics. We post-processed the formulas to ensure both implementations resulted in formulas that are irreducible.

For the benchmark we used the model from [18] containing 188,568 states and 340,607 transitions. We compared this model to 5 modified versions, in each of which we omitted one randomly chosen transition. In Table 1 the results of running the algorithms 10 times are shown. Under 'Max', the worst case over the different runs for each metric is listed for our method ('Our'), next to the result of the implementation of Cleaveland ('Cleav.'). Under 'Average' the average of the 10 runs is shown. We see that our new method consistently outputs a minimal observation- and negation-depth formula, and the generated formulas only rarely deviate in size. It outperforms the method of Cleaveland in all cases. In some cases the depth is improved by a factor of 10.

## 5 Conclusions & Future work

In this work we studied the problem of computing minimal distinguishing formulas. We introduced three metrics: size, observation-depth, and negation-depth. Using a reduction directly from CNF-SAT we showed that finding a minimal-sized distinguishing formula is NP-hard. However, for observation- and negation-depth, we introduce polynomial time algorithms that compute minimal formulas. A prototype demonstrates the potential improvement over the method introduced by Cleaveland [6]. A more rigorous version is implemented in the mCRL2 toolset [5]. For future work it would be interesting to extend our algorithms to equivalences beyond strong bisimilarity. For instance, a more generic coalgebraic treatment, extending [25], or computing smaller witnesses for equivalences with abstractions like branching and weak bisimilarity, improving upon the work of Korver [16].
\begin{table}
\begin{tabular}{l|cc|cc|cc|cc|cc|cc}
\hline
 & \multicolumn{6}{c|}{Max} & \multicolumn{6}{c}{Average} \\
\hline
Benchmark & \multicolumn{2}{c|}{\(d_{\circ}(\phi)\)} & \multicolumn{2}{c|}{\(|\phi|\)} & \multicolumn{2}{c|}{\(d_{\neg}(\phi)\)} & \multicolumn{2}{c|}{\(d_{\circ}(\phi)\)} & \multicolumn{2}{c|}{\(|\phi|\)} & \multicolumn{2}{c}{\(d_{\neg}(\phi)\)} \\
 & Our & Cleav. & Our & Cleav. & Our & Cleav. & Our & Cleav. & Our & Cleav. & Our & Cleav. \\
\hline
ieee-1394-1 & 64 & 891 & 69 & 1355 & 0 & 886 & 64.0 & 247.2 & 69.0 & 373.7 & 0.0 & 243.2 \\
ieee-1394-2 & 37 & 224 & 42 & 320 & 1 & 219 & 37.0 & 92.0 & 42.0 & 120.0 & 1.0 & 88.2 \\
ieee-1394-3 & 102 & 698 & 102 & 1092 & 2 & 696 & 102.0 & 299.1 & 102.0 & 465.4 & 2.0 & 295.7 \\
ieee-1394-4 & 76 & 363 & 83 & 506 & 2 & 360 & 76.0 & 196.6 & 80.9 & 276.5 & 2.0 & 194.5 \\
ieee-1394-5 & 18 & 155 & 18 & 214 & 2 & 146 & 18.0 & 36.0 & 18.0 & 44.8 & 2.0 & 30.4 \\
\hline
\end{tabular}
\end{table}
Table 1: Results from the prototype implementation.
2303.17396
Finetuning from Offline Reinforcement Learning: Challenges, Trade-offs and Practical Solutions
Offline reinforcement learning (RL) allows for the training of competent agents from offline datasets without any interaction with the environment. Online finetuning of such offline models can further improve performance. But how should we ideally finetune agents obtained from offline RL training? While offline RL algorithms can in principle be used for finetuning, in practice, their online performance improves slowly. In contrast, we show that it is possible to use standard online off-policy algorithms for faster improvement. However, we find this approach may suffer from policy collapse, where the policy undergoes severe performance deterioration during initial online learning. We investigate the issue of policy collapse and how it relates to data diversity, algorithm choices and online replay distribution. Based on these insights, we propose a conservative policy optimization procedure that can achieve stable and sample-efficient online learning from offline pretraining.
Yicheng Luo, Jackie Kay, Edward Grefenstette, Marc Peter Deisenroth
2023-03-30T14:08:31Z
http://arxiv.org/abs/2303.17396v1
# Finetuning from Offline Reinforcement Learning: Challenges, Trade-offs and Practical Solutions ###### Abstract Offline reinforcement learning (RL) allows for the training of competent agents from offline datasets without any interaction with the environment. Online finetuning of such offline models can further improve performance. But how should we ideally finetune agents obtained from offline RL training? While offline RL algorithms can in principle be used for finetuning, in practice, their online performance improves slowly. In contrast, we show that it is possible to use standard online off-policy algorithms for faster improvement. However, we find this approach may suffer from _policy collapse_, where the policy undergoes severe performance deterioration during initial online learning. We investigate the issue of policy collapse and how it relates to data diversity, algorithm choices and online replay distribution. Based on these insights, we propose a conservative policy optimization procedure that can achieve stable and sample-efficient online learning from offline pretraining. ## 1 Introduction Offline reinforcement learning (ORL) [14; 16] considers the problem of learning policies from fixed datasets without requiring additional interaction with the real environment. ORL has the potential to enable sample-efficient learning in applications such as healthcare and robotics as it does not require potentially expensive interaction with the real environment. However, recent work [5] suggests it may be challenging for pure ORL approaches to learn optimal behavior using only offline data, e.g., if the relevant parts of the state-action space are not well represented in the offline dataset. When the offline policies are not optimal for the real environment, _finetuning_ with additional online data can enable the agents to achieve stronger performance [15; 20]. For finetuning to be practical, however, it is important to ensure that the online finetuning procedure improves fast from online data. Unlike in supervised learning, where there are more established approaches to pretraining and finetuning, how to best finetune RL agents trained from prior experience remains less well understood. In this paper, we study how to improve offline pre-trained policies with a small amount of additional online interactions. One may hope to deploy the offline policy to collect more data and reuse the same algorithm for offline learning and finetuning. However, existing studies and our findings indicate that finetuning with RL algorithms designed for offline learning converges slowly with additional online data [20]. An alternative approach to re-using the offline RL algorithm for online finetuning is to use a different online off-policy RL algorithm for online finetuning. Since recent online off-policy algorithms [1; 8; 10; 17] have demonstrated strong performance and good sample efficiency, we should expect that additional pretraining with offline RL would allow this approach to enable sample-efficient finetuning. We show that using a standard off-policy RL algorithm can work well for online finetuning. However, we also observe that sometimes finetuning with online off-policy algorithms can lead to _policy collapse_, where the policy performance degrades severely during initial online training. These observations motivate us to investigate the challenges in different strategies for finetuning from offline RL. 
Towards this goal, we analyze the trade-offs in choices of algorithms and whether/how to use offline data for online finetuning. We present several approaches to finetuning and discuss their merits and limitations based on empirical observations on standard offline and online RL benchmarks. Based on our observations, we conclude that effective online finetuning from offline RL may be achieved with more robust online policy optimization and present a constrained policy optimization extension to the TD3 algorithm, which we call conservative TD3 (TD3-C), that empirically helps stabilize online finetuning, thereby addressing the issue of policy collapse.

## 2 Background

**Off-policy online reinforcement learning.** We consider reinforcement learning (RL) in a Markov Decision Process (MDP) defined by the tuple \((\mathcal{S},\mathcal{A},p,p_{0},r,\gamma)\). Off-policy RL algorithms learn a policy \(\pi_{\theta}\) with experience generated by a different policy \(\mu\). Example algorithms include Deep Deterministic Policy Gradient (DDPG) [17] and Twin Delayed Deep Deterministic Policy Gradient (TD3) [8]. These deep off-policy algorithms learn an approximate state-action value function \(Q_{\phi}\) and a deterministic policy \(\pi_{\theta}\) by alternating policy evaluation and improvement. During policy evaluation, we learn an approximate state-action value function \(Q_{\phi}\) by minimizing the Bellman error

\[\phi^{*}=\arg\min_{\phi}\mathbb{E}_{s,a,r,s^{\prime}\sim\mathcal{B}}\left[\big(Q_{\phi}(s,a)-(r+\gamma Q_{\phi^{\prime}}(s^{\prime},\pi_{\theta^{\prime}}(s^{\prime})))\big)^{2}\right], \tag{1}\]

where \(Q_{\phi^{\prime}}\) and \(\pi_{\theta^{\prime}}\) are the target critic and policy networks used to stabilize TD learning with function approximation. During policy improvement, the policy parameters \(\theta\) are updated to maximize the current state-action value function

\[\theta^{*}=\arg\max_{\theta}\mathbb{E}_{s\sim\mathcal{B}}\left[Q_{\phi}(s,\pi_{\theta}(s))\right]. \tag{2}\]

**Leveraging demonstrations in RL.** In addition to the standard reinforcement learning problem definition, we consider the setting where we have access to an additional behavior dataset, for example a dataset consisting of environment transitions \(\mathcal{B}=\{(s_{i},a_{i},r_{i},s_{i+1})\}_{i=1}^{N}\) of experience collected by an unknown behavior policy \(\mu\) in the same MDP. The idea of leveraging demonstrations in reinforcement learning is not new. Behavior cloning [23] is a supervised learning method that trains the policy to mimic the behavior policy induced by the demonstration dataset consisting of states and corresponding actions. Imitation learning [24] aims at learning policies that produce trajectories close to those in the demonstration dataset; however, actions are typically not observed in imitation learning. In addition, expert demonstrations have been shown to be effective in accelerating learning in sparse reward problems [26]. These approaches require the demonstrations to consist of expert or high-quality trajectories.

**Offline RL with online finetuning.** More recently, offline RL has received increasing attention due to its potential for scaling RL to large-scale, real-world applications. Unlike imitation learning or behavior cloning, offline RL utilizes reward-annotated datasets and can in principle learn an optimal policy given sub-optimal logged trajectories by "stitching" together good behaviors.
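To make the updates in Equations (1) and (2) concrete, the following is a minimal PyTorch-style sketch of one TD3-style learner step. This is our own illustration rather than the authors' Acme/JAX code; the network modules, optimizers and the replay `batch` are assumed placeholders, and for brevity it uses a single critic instead of the twin critics and target-policy smoothing of full TD3.

```python
import torch
import torch.nn.functional as F

def td3_learner_step(actor, critic, actor_target, critic_target,
                     actor_opt, critic_opt, batch, gamma=0.99, tau=0.005):
    s, a, r, s_next = batch  # tensors sampled from the replay buffer B

    # Policy evaluation, Eq. (1): regress Q_phi onto the TD target built
    # from the *target* networks Q_phi' and pi_theta'.
    with torch.no_grad():
        target_q = r + gamma * critic_target(s_next, actor_target(s_next))
    critic_loss = F.mse_loss(critic(s, a), target_q)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Policy improvement, Eq. (2): ascend Q_phi(s, pi_theta(s)).
    actor_loss = -critic(s, actor(s)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

    # Polyak-average the target networks (standard in DDPG/TD3).
    for p, p_t in zip(critic.parameters(), critic_target.parameters()):
        p_t.data.lerp_(p.data, tau)
    for p, p_t in zip(actor.parameters(), actor_target.parameters()):
        p_t.data.lerp_(p.data, tau)
```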
In principle, we can apply off-policy algorithms, such as DDPG or TD3, to learn from a fixed dataset; however, in practice, they are ineffective when learning from fixed offline datasets due to extrapolation errors [9]. This manifests as the learned critic massively over-estimating state-action values during offline training. Recently, many deep offline RL algorithms [9; 12; 13; 27] have been proposed to reduce the extrapolation error. These algorithms modify standard deep off-policy actor-critic algorithms and constrain policy learning to be supported by the dataset, which helps reduce extrapolation error. Approaches to reduce extrapolation error include, but are not limited to, value-constrained methods that learn conservative value estimates for out-of-distribution actions [13] and policy-constrained methods that regularize the policy to not deviate significantly from the empirical behavior policy [12; 20; 27]. These offline algorithms can learn competitive policies, surpassing the performance of the empirical behavior policies in the dataset.

Typical offline RL algorithms focus on learning from the fixed dataset and do not permit access to an online environment; their sole focus is to learn a good policy using the offline dataset only [16]. In this paper, we are interested in using additional online environment interactions to improve the agent's performance further. Concretely, we consider the setting where we first pretrain a policy using offline RL and then further finetune it online, similar to the scenario considered in [12; 15; 20]. Although offline RL algorithms can be used for finetuning with additional online data, previous work [15; 20] found that many offline RL algorithms improve slowly when given additional data. For example, [15] found that online finetuning with the offline RL algorithm CQL results in only a slight improvement with the addition of online data. In this paper, we are interested in understanding this sub-optimality in performance and finding better strategies for finetuning agents obtained from offline pretraining.

## 3 Challenges in Finetuning from Offline RL

In this section, we empirically study and analyze the challenges in performing online finetuning after pretraining with offline RL. Our analysis builds on top of MuJoCo [25] tasks in the D4RL benchmark suite [6]. We consider datasets from the walker2d, halfcheetah, hopper, and ant tasks. For each task, we perform finetuning given the corresponding medium and medium-replay datasets. The medium datasets consist of transitions collected by the evaluation policy of an early-stopped, sub-optimal agent. Medium-replay datasets refer to the transitions stored in the replay buffer of an early-stopped agent. Both types of datasets contain transitions that enable the agent to acquire medium performance. The medium-quality datasets include only transitions from the learned policy, which, on average, have higher quality than the medium-replay datasets but lack diversity.

### 3.1 Experimental Setup

For offline pretraining, we use TD3-BC [7]. TD3-BC is a policy-constrained offline RL algorithm that extends the TD3 algorithm by including an additional behavior cloning (BC) regularizer in the policy improvement step to encourage the policy to stay close to the behaviors in the offline dataset.
Specifically, TD3-BC augments the policy optimization objective in Equation (2) with an additional penalty \((a-\pi_{\theta}(s))^{2}\), such that

\[\theta^{*}=\arg\max_{\theta}\mathbb{E}_{s,a\sim\mathcal{B}}\left[\lambda Q_{\phi}(s,\pi_{\theta}(s))-(a-\pi_{\theta}(s))^{2}\right], \tag{3}\]

where \(a\) is the action stored in the offline dataset, and \(\lambda\geq 0\) modulates the relative strength of the original policy optimization objective and the behavior cloning regularization. We selected TD3-BC because of its good empirical performance on the MuJoCo locomotion benchmarks. Further, it is also easy to compare TD3-BC with its off-policy counterpart, TD3, for finetuning performance, since switching from TD3-BC to TD3 requires only removing the BC penalty, keeping all other hyper-parameters fixed.

During online finetuning, we load the weights for the neural networks obtained from offline training and use either TD3 [8] or TD3-BC as the finetuning algorithm. Hyperparameters are chosen to be the same as in the offline setting. In both cases, we pretrain the actor and the critic offline for \(500K\) iterations and then train online for \(200K\) environment steps. We perform one gradient step after every transition in the online environment during online finetuning. Therefore, one learner step after the pretraining stage corresponds to one step of online environment interaction. Note that the online training setup is different from the online batch setting considered, for example, in [15; 20], but is closer to the standard online benchmark setting used in [8].

Our experiments are implemented in JAX [3] based on DeepMind's Acme RL library [11] and the JAX ecosystem [2]. We use an internal computing cluster for the experiments; each job has access to a single GPU. Reproducing the experimental results in this paper takes approximately 600 GPU hours. Code for reproducing our experiments will be open-sourced.

### 3.2 The Effect of Online Algorithms During Finetuning

We start by analyzing how the choice of online algorithms impacts finetuning performance. For this experiment, we use either TD3 or TD3-BC as the online algorithm for finetuning. Following the finetuning protocol in [12; 20; 28], we also incorporate the offline data during finetuning by initializing the online replay buffer with transitions from the offline dataset. The transitions are sampled uniformly during both offline pretraining and online finetuning. Figure 1 shows the comparison between the different choices of online algorithms on evaluation performance. The results reveal a few interesting findings.

**Offline RL algorithms improve more slowly compared to their online counterparts.** Finetuning online with TD3-BC improves slowly compared to using TD3. This suggests that ORL algorithms, which constrain the target policy to be close to the behavior policy, may improve more slowly than their standard off-policy counterparts. While we restrict our comparison to using the TD3 algorithm as the base RL algorithm, previous work [15; 20] shows similar findings for other ORL algorithms.

**Online finetuning with off-policy algorithms suffers from policy collapse.** While online finetuning with TD3 achieves a better evaluation score compared to TD3-BC, there is noticeable training instability for some datasets in the early stages of online finetuning. This is illustrated by the sudden drop in performance as finetuning starts (i.e., at 500K learner steps). This is distinct from the fluctuations in performance that are typical in deep off-policy RL algorithms.
This phenomenon is sometimes referred to as _policy collapse_. Policy collapse happens as the critic is inaccurate when finetuning starts and is over-optimistic on novel states encountered early during finetuning. Nevertheless, finetuning with TD3 after 200K environment steps always rivals or surpasses TD3-BC. Note that policy collapse is not a specific problem for TD3: [15; 20] consider offline pretraining with CQL followed by finetuning with SAC, and the training instability can also be observed in their experiments.

**Policy collapse is more severe when the diversity of the dataset is lower.** The extent of policy collapse varies across domains and dataset qualities and is more noticeable as the diversity of the dataset decreases. Although the offline performance on the medium and medium-replay datasets is comparable, finetuning with TD3 from agents pretrained on the medium datasets is more unstable. This is expected since the medium datasets are less diverse than the medium-replay datasets. Notice that the rate at which TD3 recovers its original performance also varies across the datasets, and it is more difficult to recover the performance when pretrained on less diverse datasets.

Figure 1: Comparison of TD3 and TD3-BC for online finetuning on the D4RL benchmark suite. The agents are pretrained for 500K steps with TD3-BC before finetuning with additional online data for 200K environment steps. We perform one gradient step for every environment step, so the number of learner steps after 500K corresponds to the number of environment steps taken during finetuning. TD3-BC (orange) improves slowly with finetuning compared to TD3. However, TD3 (blue) shows policy collapse during initial finetuning. Policy collapse is more observable in the medium datasets than in the medium-replay datasets, which are more diverse. We also include results for training a TD3 agent (green) without any offline pretraining, and it performs worse than agents with offline pretraining, demonstrating that offline pretraining is useful in accelerating sample efficiency in online learning. Results are averaged with ten random seeds.

### 3.3 The Effect of Offline Data During Finetuning

In Section 3.2, we followed the finetuning protocol used in [12; 28], which loads the offline data into the online replay before finetuning starts. In the following, we consider what happens if we do not utilize any offline data during finetuning. In this case, we are just initializing the online policy and critic networks with the pretrained weights. Figure 2 shows the effect of loading the offline data into the online replay before finetuning begins. TD3-BC remains stable during finetuning even when offline data is not utilized. Discarding the offline data allows it to obtain higher sample efficiency and rival or even outperform online finetuning on six out of the eight datasets. This indicates that constraining the policy outputs to be close to actions stored in the replay offers improved stability even when the replay data is collected purely from the online environment. At the same time, however, this regularization via BC may hurt finetuning if the goal is to maximize performance improvement given a small number of online interactions. TD3-BC can still perform worse than online RL algorithms, as evident from the lower sample efficiency in the halfcheetah datasets. TD3 can also obtain higher performance without utilizing offline data, and finetuning is more stable with pretraining on medium-replay datasets than medium datasets.
The only exception is walker2d-medium-replay, where not initializing with offline data results in policy collapse. However, for the less diverse medium datasets, policy collapse happens independently of whether offline data is reused, and the drop in performance happens immediately as finetuning starts.

Figure 2: Effect of using offline datasets for online finetuning. We compare what happens if we do not initialize the online replay with transitions from the offline dataset. TD3-BC enjoys a significant improvement if we do not sample offline transitions during online finetuning. TD3 exhibits policy collapse independent of whether offline data is utilized during finetuning. The standard deviation is omitted in the plots for better visibility.

### 3.4 Summary of Empirical Observations

In the following, we summarize our observations from the empirical study.

* The degree of conservativeness of the RL algorithm has a substantial impact on the data efficiency during finetuning, where the goal is to maximize the improvement compared to the pretrained policies. Mitigation strategies that reduce extrapolation errors during offline training can also benefit online finetuning by improving the stability of the online training. However, this may come as a trade-off for online finetuning sample efficiency. Using online RL algorithms may result in a faster rate of improvement but can be less stable, and such instability may be undesirable if the online sampling budget is limited. The degree of instability depends on the properties of the underlying MDP and the diversity of the offline dataset. When the offline dataset has enough diversity, the errors encountered during online finetuning will be mild and may not be significant enough to collapse a good pretrained policy. However, if the value estimate is inaccurate during finetuning, then policy optimization algorithms that maximize the erroneous critic estimate will result in policy collapse.
* The online replay sampling distribution plays an important role. The two approaches we considered, namely whether or not we initialize the online replay with the offline dataset, present different trade-offs. When the online replay is initialized with the offline dataset, the sampled transitions during the initial finetuning period should resemble those seen during offline training. For policy-constrained offline RL algorithms such as TD3-BC, the additional policy constraint will regularize the online training policy heavily towards the sub-optimal behaviors in the offline dataset. This explains why TD3-BC improves faster when we completely discard offline data during finetuning. However, online algorithms that do not incorporate any additional constraints suffer from training instability.
* Initializing the online replay with offline data followed by uniform sampling during finetuning may present additional issues that influence the stability of online algorithms such as TD3. When the offline dataset is large, during the initial period of finetuning, samples from the replay buffer consist mainly of transitions from the offline dataset, since the amount of new samples is relatively small compared to the number of offline samples. This can create a similar effect as learning offline during online finetuning. Since TD3 is not designed for offline learning, it may overestimate the values during initial finetuning. We discuss this issue in more detail in Section 4.
A solution that prevents the offline samples from crowding out the online replay is to fix the ratio of offline and online samples during finetuning. This has the advantage of ensuring that the agent immediately uses online interactions collected during finetuning, decoupling the rate at which online transitions are sampled from the size of the offline dataset. The results are shown in Appendix C.1. In this case, TD3 still suffers from policy collapse, which can only be explained by an inaccurate critic.
* When the offline data is not used to initialize the online replay, all transitions sampled during finetuning consist purely of online interactions. The critic may encounter more transitions not seen during pretraining, making stable finetuning difficult. The agent may also suffer from the risk of catastrophic forgetting, since the agent would not be able to recall any bad behavior it has experienced during offline learning.

While we have presented experimental results for the two extreme settings, it is unlikely that either would be the best solution for practical applications. While [15] explored using prioritized replay that learns to mix the offline and online samples, which allows utilization of offline samples during online learning, the issue of policy collapse can still happen with their approach. Therefore, our experiments suggest that if the online algorithm does not explicitly address issues that prevent extrapolation errors in the critic from crippling the policy, it is unlikely that a different choice of the replay sampling strategy can help prevent policy collapse.

Our observations suggest that regularizing the policy optimization during finetuning is useful for ensuring online stability. However, excessive regularization may result in slow finetuning. Can we improve existing online off-policy algorithms so that online finetuning is more stable without compromising sample efficiency? In the following section, we attempt to answer this question.

## 4 Conservative Policy Improvement in TD3

While offline RL algorithms constrain the policy to be close to the empirical data distribution, such a constraint is inadequate for finetuning since it may be too conservative to allow for fast online learning. On the other hand, constraining policy optimization to the vicinity of a historically good policy may be beneficial, since it may limit the influence of an inaccurate critic. Therefore, we propose to improve the online TD3 algorithm by changing the unconstrained policy improvement step to a constrained update that penalizes large policy updates. Concretely, we propose TD3-C, an off-policy deep RL algorithm based on TD3 that uses the following constrained policy improvement step in place of the original TD3 policy optimization step:

\[\max_{\theta}\quad\mathbb{E}_{s\sim\mathcal{B}}[Q_{\phi}(s,a)|_{a=\pi_{\theta}(s)}]\qquad\text{s.t.}\quad\mathbb{E}_{s\sim\mathcal{B}}[(\pi_{\theta}(s)-\pi_{\theta^{\prime}}(s))^{2}]\leq\epsilon. \tag{4}\]

Here \(\theta\) is the online policy network parameter, \(\theta^{\prime}\) is the target policy network parameter, and \(\epsilon\) is a hyper-parameter that controls the tightness of the constraint. The constraint regularizes the online policy to not deviate too much from a moving target policy. This formulation resembles the constrained optimization used in MPO [1], except that we work with a deterministic policy and use the \(\ell_{2}\) norm as the constraint.
The constraint has the interpretation of a KL-divergence between two Gaussian policies with fixed scales and means parameterized by the outputs of the online and target policies. For a practical implementation, we can optimize the objective by formulating the Lagrangian with a dual variable \(\lambda\geq 0\). The constrained optimization now becomes

\[\max_{\theta}\min_{\lambda\geq 0}\ \mathbb{E}_{s\sim\mathcal{B}}\left[Q_{\phi}(s,a)+\lambda[\epsilon-(a-\pi_{\theta^{\prime}}(s))^{2}]\right],\quad a=\pi_{\theta}(s), \tag{5}\]

where the primal variable \(\theta\) and the dual variable \(\lambda\) can be jointly optimized by stochastic gradient descent. Unlike the behavior cloning regularization in Equation (3), which constrains the policy network to produce actions similar to those in the sampled batches, we constrain the policy optimization not to take large steps. Thus, our formulation should be less conservative than TD3-BC but more robust than TD3 without policy regularization.

## 5 Evaluation

In this section, we position our results in the context of recent literature that also considers finetuning from offline pretrained agents. We compare our results with the Online Decision Transformer (ODT) [28] and Implicit Q-learning (IQL). ODT is a recent approach based on the Decision Transformer (DT) [4]. Similar to DT, it formulates RL as a sequence modeling problem and leverages the expressiveness of transformer architectures to learn in the RL setting. IQL is an offline Q-learning method that learns the optimal Q-function using in-sample transitions and extracts a policy with advantage weighted regression (AWR) [21; 22]. Both approaches have demonstrated better performance when used in the finetuning setting compared to previous work [9; 13; 20]. For a fairer comparison with IQL and ODT, we consider the setting where we initialize the online replay buffer with the offline dataset. For all methods, we report the offline performance, the performance after 200K steps of online interactions, and the relative performance improvement \(\delta\), computed as the difference between the online and offline performance.
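To make the three policy-improvement variants concrete before the comparison, here is a minimal PyTorch-style sketch of the actor objectives of TD3 (Eq. (2)), TD3-BC (Eq. (3)) and TD3-C (Eqs. (4)-(5)), including a gradient-based update of the dual variable. This is our own illustration, not the authors' released code; the networks, optimizers, and the flat `lam` weight for TD3-BC are assumed placeholders.

```python
import torch

def td3_actor_loss(critic, actor, s):
    # Eq. (2): plain policy improvement, ascend Q(s, pi(s)).
    return -critic(s, actor(s)).mean()

def td3_bc_actor_loss(critic, actor, s, a, lam=2.5):
    # Eq. (3): Q-term traded off against a BC penalty towards the
    # dataset action a (lam is a simplified, assumed trade-off weight).
    pi = actor(s)
    return -(lam * critic(s, pi) - (a - pi).pow(2).sum(-1)).mean()

def td3_c_actor_step(critic, actor, actor_target, log_lambda,
                     actor_opt, dual_opt, s, eps=0.01):
    # Eqs. (4)-(5): keep the new policy within eps (mean squared distance)
    # of the moving target policy; log_lambda is an nn.Parameter so that
    # lambda = exp(log_lambda) stays non-negative.
    pi = actor(s)
    with torch.no_grad():
        pi_target = actor_target(s)
    slack = eps - (pi - pi_target).pow(2).sum(-1).mean()

    # Primal step: maximize Q + lambda * slack with lambda held fixed.
    actor_loss = -(critic(s, pi).mean() + log_lambda.exp().detach() * slack)
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

    # Dual step: lambda grows when the constraint is violated (slack < 0)
    # and shrinks towards zero when it is satisfied.
    dual_loss = log_lambda.exp() * slack.detach()
    dual_opt.zero_grad(); dual_loss.backward(); dual_opt.step()
```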
\begin{table}
\begin{tabular}{llcccccc}
\hline
 & & \multicolumn{3}{c}{medium-replay-v2} & \multicolumn{3}{c}{medium-v2} \\
Task & Agent & Offline & Online & \(\delta\) & Offline & Online & \(\delta\) \\
\hline
ant & TD3 & 78.34 ± 19.5 & **127.59 ± 5.24** & 49.25 & **113.48 ± 4.74** & **123.34 ± 2.67** & 9.86 \\
 & TD3-C & 85.68 ± 13.93 & **121.72 ± 10.06** & 36.04 & **112.44 ± 5.71** & **123.05 ± 2.31** & 10.61 \\
 & TD3-BC & 84.99 ± 15.95 & **126.73 ± 3.15** & 41.74 & **111.44 ± 5.79** & **120.91 ± 7.97** & 9.47 \\
 & ODT & 86.56 ± 3.26 & 91.57 ± 2.73 & 5.01 & 91.33 ± 4.13 & 90.79 ± 5.85 & -0.54 \\
 & IQL & 91.21 ± 7.27 & 91.36 ± 1.47 & 0.15 & 99.92 ± 5.86 & 100.85 ± 2.02 & 0.93 \\
\hline
halfcheetah & TD3 & 43.82 ± 0.49 & **70.13 ± 3.4** & 26.3 & 46.99 ± 0.4 & **69.69 ± 2.57** & 22.7 \\
 & TD3-C & 43.84 ± 0.69 & **66.0 ± 2.06** & 22.16 & 46.92 ± 0.42 & **65.85 ± 2.46** & 18.93 \\
 & TD3-BC & 43.74 ± 0.53 & 48.7 ± 1.18 & 4.96 & 46.92 ± 0.41 & 48.71 ± 0.52 & 1.79 \\
 & ODT & 39.99 ± 0.68 & 40.42 ± 1.61 & 0.43 & 42.72 ± 0.46 & 42.16 ± 1.48 & -0.56 \\
 & IQL & 44.1 ± 1.14 & 44.14 ± 0.3 & 0.04 & 47.37 ± 0.29 & 47.41 ± 0.15 & 0.04 \\
\hline
hopper & TD3 & 46.21 ± 21.91 & **103.08 ± 3.7** & 56.87 & 54.95 ± 3.75 & **88.59 ± 28.84** & 33.65 \\
 & TD3-C & 49.18 ± 25.6 & **96.24 ± 11.3** & 47.06 & 56.54 ± 4.71 & **87.3 ± 24.62** & 30.77 \\
 & TD3-BC & 51.36 ± 24.94 & **87.72 ± 12.96** & 36.36 & 55.48 ± 4.69 & 58.44 ± 6.72 & 2.96 \\
 & ODT & **86.64 ± 5.41** & 88.89 ± 6.33 & 2.25 & 66.95 ± 3.26 & **97.54 ± 2.1** & 30.59 \\
 & IQL & **92.13 ± 10.43** & **96.23 ± 4.35** & 4.1 & 63.81 ± 9.15 & 66.79 ± 4.07 & 2.98 \\
\hline
walker2d & TD3 & 72.56 ± 11.37 & **100.06 ± 5.74** & 27.5 & 80.88 ± 4.9 & 82.09 ± 21.38 & 1.21 \\
 & TD3-C & 74.78 ± 10.97 & **97.21 ± 3.07** & 22.44 & 79.34 ± 5.7 & 78.97 ± 21.03 & -0.36 \\
 & TD3-BC & 79.12 ± 7.2 & **90.26 ± 2.27** & 11.14 & 80.73 ± 3.72 & 84.56 ± 2.09 & 3.82 \\
 & ODT & 68.92 ± 4.79 & 76.86 ± 4.04 & 7.94 & 72.19 ± 6.49 & 76.79 ± 2.3 & 4.6 \\
 & IQL & 73.67 ± 6.37 & 70.55 ± 5.81 & -3.12 & 79.89 ± 3.06 & 80.33 ± 2.33 & 0.44 \\
\hline
\end{tabular}
\end{table}
Table 1: Results with ODT [28] and IQL [12] on the D4RL MuJoCo locomotion tasks. The ODT and IQL results were obtained from [28]. Our results are averaged over ten seeds. We report the average final offline performance (Offline), the final online performance after 200K online steps (Online), and the relative performance improvement \(\delta=\text{Online}-\text{Offline}\). Results for the method with the best performance and comparable alternatives (within 10% of the best method's mean) are in **bold**.

Table 1 shows that our proposed approaches are consistently better than ODT and IQL. This is evident in the better evaluation results after finetuning for 200K steps.
Note that a better offline performance before finetuning begins cannot explain the better final finetuning performance, since offline learning with TD3-BC performs no better than ODT and IQL except on ant-medium-v2. Even in cases where pretraining with TD3-BC has better offline performance than ODT or IQL, we still observe a more significant performance improvement, as evident from the larger relative improvement. In hopper-medium-replay-v2, although pretraining with TD3-BC performs worse during offline learning, finetuning with TD3 or TD3-C allows us to improve significantly. Note that ODT, considered a strong baseline for finetuning from offline RL, performs hyperparameter tuning for each task and initializes the replay buffer using only top-performing trajectories from the offline dataset. In contrast, our approach is easy to implement and requires minimal changes to existing algorithms during online finetuning. Furthermore, we use the same hyperparameters for all datasets, and the hyperparameters are chosen to be the same as the default values used in previous work. We also do not change how data are sampled from the replay buffer, nor do we perform additional pre-processing of the dataset1. We also include results for not initializing with the offline dataset during finetuning in Table C3. In this case, finetuning with TD3 and TD3-C still outperforms finetuning with ODT and IQL significantly. At the same time, the performance of finetuning with TD3-BC also improves due to constraining to only online data collected during finetuning.

Footnote 1: The original TD3-BC performs observation normalization, which shows improved performance in some environments. In our implementation, we do not normalize the observations.

**Is conservative policy optimization effective at mitigating policy collapse?** We investigate whether incorporating conservative policy optimization helps mitigate policy collapse. Figure 3 shows the finetuning performance of TD3, TD3-BC and TD3-C on the eight datasets, with or without initializing the online replay from the offline dataset. While TD3-C improves training stability in both settings, the effect is more significant when no offline data is used during finetuning. The results illustrate the difference between the constraint used in TD3-C and the behavior cloning regularizer in TD3-BC. For TD3-BC, the policy is regularized towards the empirical behavior policy, which is crucial for minimizing extrapolation errors but may result in slow improvement. For TD3-C, we regularize the policy towards the changing target policy, avoiding large policy changes during optimization due to an inaccurate critic. However, TD3-C can still collapse when we initialize the online replay with offline data, as seen in ant-medium-v2. As explained in Section 3.3, the online sampling distribution remains almost unchanged during the initial period of finetuning. Using TD3 or TD3-C on this distribution can suffer from overestimation errors due to the lack of prompt online feedback. Thus, the regularizer in TD3-C, coupled with uniform sampling from an online replay buffer initialized with offline transitions, can still suffer from policy collapse due to optimization with a "fixed" dataset.

Figure 3: Results of TD3, TD3-BC and TD3-C for finetuning. We compare the three approaches for finetuning, varying whether we initialize the online replay buffer with transitions from the offline dataset. TD3-C demonstrates better stability compared to TD3 and can improve faster compared to finetuning with TD3-BC.
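The replay protocols compared throughout, preloading the buffer with offline data versus starting from an empty buffer, plus the fixed-ratio variant mentioned in Section 3.4 and Appendix C.1, can be summarized in a few lines. The sketch below is our own illustration with made-up buffer and dataset interfaces, not the paper's implementation.

```python
import random

class ReplayBuffer:
    def __init__(self, capacity=1_000_000):
        self.data, self.capacity = [], capacity

    def add(self, transition):
        if len(self.data) >= self.capacity:
            self.data.pop(0)          # drop the oldest transition
        self.data.append(transition)

    def sample(self, n):
        return random.sample(self.data, n)

def make_finetune_buffer(offline_dataset, preload):
    """Protocol 1 (preload=True): initialize the online replay with the
    offline transitions, so early uniform batches are dominated by offline
    data.  Protocol 2 (preload=False): start empty, so every sampled batch
    is purely online experience."""
    online = ReplayBuffer()
    if preload:
        for tr in offline_dataset:
            online.add(tr)
    return online

def sample_fixed_ratio(offline_dataset, online_buffer, batch_size, offline_frac=0.5):
    """Fixed offline/online mixing (the Appendix C.1 variant): decouples how
    often offline data is replayed from the size of the offline dataset."""
    n_off = int(batch_size * offline_frac)
    return (random.sample(offline_dataset, n_off)
            + online_buffer.sample(batch_size - n_off))
```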
## 6 Discussion and Practical Guidelines

Our observations provide a few points for practitioners to consider when leveraging offline RL as a pretraining step and improving offline policies with online data. First, using offline RL methods to continue finetuning may be sub-optimal if the goal is to maximize performance improvement with additional online samples. Using online off-policy algorithms may allow convergence to a better final policy. Second, our empirical results suggest that, from a practical perspective, it is crucial to pretrain the agents on datasets that include diverse transitions, provided the offline RL algorithm does not deteriorate significantly with the addition of these suboptimal transitions. While both the policy and the critic can enjoy more robustness and better generalization capability with diverse suboptimal data, the critic should be exposed to both good and bad actions during pretraining to ensure that finetuning with online algorithms is stable.

Our results reveal some limitations of the reported benchmark results on finetuning from offline RL. Previous work [12; 20; 28] demonstrates that some offline RL algorithms enjoy better finetuning performance. However, the conclusion is usually drawn by comparing against alternative offline RL methods for finetuning or an online RL baseline. In our paper, we demonstrate that using online RL algorithms on top of offline RL pretraining is a simple and effective approach that is often not compared with in previous work, such as [12; 20; 28], with the exception of [15]. However, we do not argue that online RL for finetuning is necessarily superior and should always be preferred. As we have seen in Section 3, policy-constrained methods, such as TD3-BC, enjoy better stability, and their performance can sometimes rival online RL algorithms. However, we argue that, instead of utilizing constraints designed for offline learning, there are alternative constraints that work better for online finetuning. We discussed one such approach using conservative policy optimization in Section 4. While we do not find finetuning with TD3-BC to suffer from policy collapse, other works [18; 19] have shown that policy collapse can indeed happen even when using offline RL methods for finetuning. We expect that incorporating conservative policy optimization can also help improve stability in those cases.

Our study also suggests that the evaluation protocols used by some previous works do not sufficiently reflect the challenges we need to address in finetuning from offline RL. We find that the conclusion drawn from evaluating finetuning performance depends on the number of online samples allowed. When a small number of online samples is chosen, the evaluation will favor algorithms with better stability, but not necessarily algorithms that are more finetuning-efficient. In practice, a trade-off between stability and relative performance improvement exists, and we hope future work will discuss this trade-off in more detail during evaluation.

Recent work aiming to improve finetuning from offline RL often incorporates algorithmic improvements [12; 15; 20; 28] coupled with changes to the underlying actor-critic algorithms and the replay sampling strategy. This makes it difficult to attribute performance improvements to individual components, hindering our understanding of what makes finetuning from offline RL difficult.
In our paper, we attempt to isolate these changes and demonstrate that, by just changing the online algorithms during finetuning or the online replay initialization, existing algorithms can have a significant performance boost that rivals or outperforms more sophisticated approaches. Given the relative ease of implementing these changes, we hope future work can incorporate them as baselines to measure better our progress on finetuning from offline RL. ## 7 Conclusion We studied the difficulty in leveraging offline RL as pretraining for online RL. We found that finetuning with offline RL results in slow improvement while finetuning with online RL algorithms is sensitive to distribution shifts. We found that conservative policy optimization is a promising approach for stabilizing finetuning from offline RL when the offline dataset lacks diversity.
2306.12430
Exponential lower bound for the eigenvalues of the time-frequency localization operator before the plunge region
We prove that the eigenvalues $\lambda_n(c)$ of the time-frequency localization operator satisfy $\lambda_n(c) > 1 - \delta^c$ for $n = [(1-\varepsilon)c]$, where $\delta = \delta(\varepsilon) < 1$ and $\varepsilon > 0$ is arbitrary, improving on the result of Bonami, Jaming and Karoui, who proved it for $\varepsilon \ge 0.42$. The proof is based on the properties of the Bargmann transform.
Aleksei Kulikov
2023-06-03T22:05:37Z
http://arxiv.org/abs/2306.12430v1
Exponential lower bound for the eigenvalues of the time-frequency localization operator before the plunge region ###### Abstract. We prove that the eigenvalues \(\lambda_{n}(c)\) of the time-frequency localization operator satisfy \(\lambda_{n}(c)>1-\delta^{c}\) for \(n=[(1-\varepsilon)c]\), where \(\delta=\delta(\varepsilon)<1\) and \(\varepsilon>0\) is arbitrary, improving on the result of Bonami, Jaming and Karoui, who proved it for \(\varepsilon\geq 0.42\). The proof is based on the properties of the Bargmann transform. Key words and phrases: prolate spheroidal wave functions, Bargmann-Segal-Fock space, Hermite functions

## 1. Introduction

For a measurable set \(\Omega\subset\mathbb{R}\) we define the projection \(P_{\Omega}:L^{2}(\mathbb{R})\to L^{2}(\mathbb{R})\) as \(P_{\Omega}f=f\chi_{\Omega}\) and the Fourier projection \(Q_{\Omega}:L^{2}(\mathbb{R})\to L^{2}(\mathbb{R})\) as \(Q_{\Omega}=\mathcal{F}^{-1}P_{\Omega}\mathcal{F}\), where \(\mathcal{F}\) is the Fourier transform

\[\mathcal{F}(f)(\xi)=\int_{\mathbb{R}}f(x)e^{-2\pi ix\xi}dx.\]

For a pair of sets \(T,\Omega\subset\mathbb{R}\) we define the time-frequency localization operator \(S_{T,\Omega}\), associated with them, as \(S_{T,\Omega}=P_{T}Q_{\Omega}P_{T}\). It is easy to check that \(S_{T,\Omega}\) is a bounded self-adjoint non-negative definite operator. If the measures of \(T\) and \(\Omega\) are finite then \(S_{T,\Omega}\) is a Hilbert-Schmidt operator with the square of the Hilbert-Schmidt norm at most \(|T||\Omega|\) (see [6, proof of Theorem 2.3.1]). In particular, in this case \(S_{T,\Omega}\) is a compact operator and as such it has a sequence of eigenvalues \(1>\lambda_{1}(T,\Omega)\geq\lambda_{2}(T,\Omega)\geq\ldots>0\). The first eigenvalue \(\lambda_{1}(T,\Omega)\) is equal to the norm of \(S_{T,\Omega}\) and in particular it is always at most \(1\), but it can be shown that it is always strictly less than \(1\) (see [6, Theorem 2.3.3]). A highly non-trivial result of Nazarov [11, Theorem II] shows that there exist absolute constants \(c,C>0\) such that we always have \(\lambda_{1}(T,\Omega)\leq 1-ce^{-C|T||\Omega|}\) regardless of the geometry of the sets \(T\) and \(\Omega\).

The famous Donoho-Stark conjecture [4] says that if \(T\) is an interval, then among the sets \(\Omega\) of fixed measure the maximum of \(\lambda_{1}(T,\Omega)\) is achieved when both \(T\) and \(\Omega\) are intervals. In this case, it can be seen by dilation that the eigenvalues depend only on the product of the lengths of the intervals \(c=|T||\Omega|\), so we have a sequence \(1>\lambda_{1}(c)>\lambda_{2}(c)>\ldots>0\). The distribution of these eigenvalues is the main subject of this paper.

It turns out, as was discovered by Slepian [14] and rigorously proved by Landau and Widom [10], that the eigenvalues exhibit a phase transition around the point \(n_{0}=c\): if \(c-n\gtrsim\log c\) then \(\lambda_{n}(c)\approx 1\), if \(n-c\gtrsim\log c\) then \(\lambda_{n}(c)\approx 0\), and only in the plunge region \(|c-n|\lesssim\log c\) do the eigenvalues \(\lambda_{n}(c)\) take intermediate values. Specifically, they proved the following theorem.
**Theorem 1.1**.: _For a fixed \(b\in\mathbb{R}\) we have_

\[\lim_{c\to\infty}\lambda_{n(c,b)}(c)=(1+e^{b})^{-1}, \tag{1.1}\]

_where \(n(c,b)=[c+\frac{1}{\pi^{2}}b\log(c)]\) and \([t]\) denotes the integer part of \(t\)._

Their proof had no uniformity in \(b\), and so in recent years there have been some results on obtaining similar statements and bounding the size of the plunge region for varying values of \(b\), in particular by Israel [7], culminating in the following result of Karnik, Romberg and Davenport [8] (note that its formulation is slightly different since we use a different normalization of the Fourier transform).

**Theorem 1.2** ([8, Theorem 3]).: _For all \(c>0\) and \(0<\varepsilon<\frac{1}{2}\) we have_

\[|\{n:\varepsilon<\lambda_{n}<1-\varepsilon\}|\leq\frac{2}{\pi^{2}}\log(50c+25)\log\left(\frac{5}{\varepsilon(1-\varepsilon)}\right)+7. \tag{1.2}\]

For \(c\to\infty,\varepsilon\to 0\) their bound has the form \(\log(c)\log(\varepsilon^{-1})(\frac{2}{\pi^{2}}+o(1))\), coinciding with the bound from Theorem 1.1 asymptotically while being completely explicit and uniform in all parameters.

There are also quite a few results establishing asymptotic and non-asymptotic decay bounds on the eigenvalues \(\lambda_{n}(c)\) when \(n>c\). Widom [16] showed that for fixed \(c\) the eigenvalues decay like \(\lambda_{n}(c)\sim\left(\frac{e\pi c}{(8n+4)}\right)^{2n+1}\), and the works of Osipov [12], Bonami and Karoui [3] and Bonami, Jaming and Karoui [2] established upper bounds on \(\lambda_{n}(c)\) uniform in \(c\); in particular, [3, Theorem 1] essentially says that after the plunge region (say, for \(n>(1+\varepsilon)c\)) the eigenvalues start with an exponential decay and then catch up to the super-exponential decay similar to Widom's result.

On the other hand, not much is known about how close the eigenvalues \(\lambda_{n}(c)\) are to \(1\) when \(n\) is significantly less than \(c\). Apart from the general bounds on the plunge region like (1.2), we are aware of only two results dealing with this regime: the result of Fuchs [5], who showed that for fixed \(n\) the eigenvalues \(\lambda_{n}(c)\) satisfy

\[1-\lambda_{n}(c)\sim\frac{\sqrt{\pi}}{2}\frac{8^{n}}{(n-1)!}\left(\frac{\pi c}{2}\right)^{n-1/2}e^{-\pi c},\quad c\to\infty,\]

in particular showing that \(\lambda_{n}(c)\) are exponentially close to \(1\), and the following result of Bonami, Jaming and Karoui [2] (we present it in a slightly weaker form to highlight the important parts).

**Theorem 1.3**.: _For \(0\leq n\leq c\) and \(c>100\) we have_

\[1-\frac{\left(\pi c\right)^{n}}{n!}e^{-\frac{\pi}{2}c}\leq\lambda_{n}(c)<1. \tag{1.3}\]

Note that this lower bound is meaningful only if \(\frac{(\pi c)^{n}}{n!}e^{-\frac{\pi}{2}c}\leq 1\), which asymptotically means \(n\leq 0.58c\), and for such values of \(n\) we have an exponential lower bound \(\lambda_{n}(c)\geq 1-\gamma^{c}\) for some \(\gamma<1\). The goal of the present work is to push this estimate all the way to the plunge region and obtain the following result.

**Theorem 1.4**.: _For any \(\varepsilon>0\) there exists a constant \(0<\delta=\delta(\varepsilon)<1\) such that for large enough \(c\) we have_

\[\lambda_{n}(c)\geq 1-\delta^{c},\]

_where \(n=[(1-\varepsilon)c]\)._

Note that an exponential lower bound is the best we can achieve, since this is the best possible bound already for \(\lambda_{1}(c)\). The proof of Theorem 1.3 is based on applying the min-max principle to the operator \(S_{T,\Omega}\) and picking a suitable subspace of \(L^{2}(\mathbb{R})\).
The goal of the present work is to push this estimate all the way to the plunge region and obtain the following result. **Theorem 1.4**.: _For any \(\varepsilon>0\) there exists a constant \(0<\delta=\delta(\varepsilon)<1\) such that for large enough \(c\) we have_ \[\lambda_{n}(c)\geq 1-\delta^{c},\] _where \(n=[(1-\varepsilon)c]\)._ Note that an exponential lower bound is the best we can achieve, since this is already the best possible bound for \(\lambda_{1}(c)\). The proof of Theorem 1.3 is based on applying the min-max principle to the operator \(S_{T,\Omega}\) and picking a suitable subspace of \(L^{2}(\mathbb{R})\). The subspace that they chose is generated by the first \(n\) Hermite functions. However, there are two reasons why they only prove the result for \(\varepsilon\geq 0.42\). The first one is that their elementary real-analytic estimates for the tails of the Hermite functions are not the strongest possible. The second one is that even with the best possible bounds on the Hermite functions we cannot prove the theorem in full generality: we must consider more general subspaces generated by time-frequency shifts of Hermite functions, and the best we can get from the Hermite functions alone is \(\varepsilon>1-\frac{\pi}{4}\approx 0.21\). Our approach is to also apply the min-max principle, but to translate the problem to the realm of complex analysis by means of the Bargmann transform and work with functions in the Bargmann-Segal-Fock space. Although the projection \(P_{\Omega}\) is not the most pleasant operator in this setting, since we only care about lower bounds we can approximate it by some crude estimates which are enough to get Theorem 1.4. Additionally, since the time-frequency shifts of the Hermite functions are no longer orthogonal, we also need to upper bound their inner products to show that they are almost orthogonal. It turns out that in the Bargmann-Segal-Fock space the \(k\)'th Hermite function lives essentially on the disk of area \(k\) centred at the origin, and, if we let \(T=\Omega=[-\frac{\sqrt{c}}{2},\frac{\sqrt{c}}{2}]\), to have good bounds on the operator \(S_{T,\Omega}\) we need this disk to lie within the square \(T\times\Omega\). This gives us the aforementioned value of \(\varepsilon>1-\frac{\pi}{4}\). When we apply a time-frequency shift we shift this disk in the complex plane, and the Hermite functions corresponding to different disks have small inner product if these disks do not intersect. This idea naturally leads us to the following purely geometric lemma. **Lemma 1.5**.: _For any \(\varepsilon>0\) there exists a finite union of pairwise disjoint closed disks \(D_{1},D_{2},\ldots,D_{N}\subset(-\frac{1}{2},\frac{1}{2})\times(-\frac{1}{2},\frac{1}{2})\) such that the area of their union is at least \(1-\varepsilon\)._ Note that since the disks we consider are closed and the square we consider is open, there exists \(\gamma=\gamma(\varepsilon)>0\) such that all disks are at least \(\gamma\) away from the boundaries of the square and from each other. A harder version of this lemma was given as a problem at the first USSR mathematical olympiad for students [1]. To keep the article self-contained we present its proof at the end of the text. It is worth noting that our interest in the lower bounds for the eigenvalues of the time-frequency operator came from our study of the Fourier interpolation formulas [9], such as the Radchenko-Viazovska formula [13]. In particular, Theorem 1.4 allows us to prove the main result of [9], although with a weaker error term \(o(4WT)\) instead of \(O(\log^{2}(4WT))\). We also find it interesting that the formula in [13] came from the celebrated work of Viazovska on sphere packings [15] and our proof is based on a circle packing of the square from Lemma 1.5, although it is crucial that we allow disks of varying radii. The structure of this paper is as follows. In Section 2 we recall the definitions and basic properties of the Bargmann transform and the Bargmann-Segal-Fock space.
In Section 3 we construct an almost orthogonal system of functions with strong time-frequency localization properties, in Section 4 using simple functional analysis we prove Theorem 1.4, and finally, in the last section we prove Lemma 1.5. ## 2. Bargmann transform Let \(f\) be a function in \(L^{2}(\mathbb{R})\). For \(z\in\mathbb{C}\) we define the Bargmann transform of \(f\) at \(z\) as \[\mathcal{B}f(z)=2^{1/4}\int_{\mathbb{R}}f(t)e^{2\pi tz-\pi t^{2}-\frac{\pi}{2}z^{2}}dt. \tag{2.1}\] The function \(\mathcal{B}f\) turns out to be an entire function belonging to the Bargmann-Segal-Fock space \(\mathcal{F}\) of entire functions for which the norm \[||F||_{\mathcal{F}}^{2}=\int_{\mathbb{C}}|F(z)|^{2}e^{-\pi|z|^{2}}dz\] is finite. Moreover, \(||f||_{L^{2}(\mathbb{R})}=||\mathcal{B}f||_{\mathcal{F}}\) and the Bargmann transform is a bijection between \(L^{2}(\mathbb{R})\) and \(\mathcal{F}\). For the proofs of these facts see [6, Section 3.4]. Since the space \(L^{2}(\mathbb{R})\) has a rich group of isometries, coming from translations and modulations, they should have a counterpart on the Bargmann transform side. Indeed, for \(w\in\mathbb{C}\) and \(F\in\mathcal{F}\) consider \[T_{w}F(z)=F(z-w)e^{\pi z\bar{w}-\frac{\pi}{2}|w|^{2}}.\] Clearly, \(T_{w}F\) is still an entire function, and a direct computation shows that \(||T_{w}F||_{\mathcal{F}}=||F||_{\mathcal{F}}\); in particular if \(F\in\mathcal{F}\) then \(T_{w}F\in\mathcal{F}\). As usual for spaces of analytic functions, point evaluations are continuous in \(\mathcal{F}\). Specifically, for all \(F\in\mathcal{F}\) and \(z\in\mathbb{C}\) we have \(|F(z)|\leq e^{\frac{\pi}{2}|z|^{2}}||F||_{\mathcal{F}}\), which can be verified from (2.1) and the Cauchy-Schwarz inequality. The space \(\mathcal{F}\) contains an orthonormal basis \(\{\sqrt{\frac{\pi^{k}}{k!}}z^{k}\}_{k\in\mathbb{N}_{0}}\). While it is not technically necessary for our proof, it is important to mention that these functions are exactly the Bargmann transforms of the Hermite functions \(h_{k}(x)\). Thus, this basis is the Bargmann transform of the basis used by Bonami, Jaming and Karoui. We will consider more generally \(T_{w}\) applied to this basis for various values of \(w\). The last fact that we need is that the Fourier transform corresponds to rotation by 90 degrees in the complex plane, that is \((\mathcal{B}\mathcal{F}f)(z)=\mathcal{B}f(iz)\). This can be either verified using the aforementioned fact about the Hermite functions, or derived directly from (2.1) by using the fact that the Fourier transform is an isometry and the Fourier transform of a Gaussian is a Gaussian.
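The statement that the monomial basis consists of Bargmann transforms of Hermite functions can be sanity-checked by quadrature for \(k=0\): for the normalized Gaussian \(h_{0}(t)=2^{1/4}e^{-\pi t^{2}}\) one gets \(\mathcal{B}h_{0}\equiv 1\), the \(k=0\) basis element. A minimal numerical sketch (the integration grid and truncation are arbitrary choices):

```python
import numpy as np

def bargmann(f, z, t=np.linspace(-8, 8, 200001)):
    """Numerical Bargmann transform (2.1) by a Riemann sum on a fine grid."""
    integrand = f(t) * np.exp(2 * np.pi * t * z - np.pi * t**2 - np.pi * z**2 / 2)
    return 2**0.25 * integrand.sum() * (t[1] - t[0])

h0 = lambda t: 2**0.25 * np.exp(-np.pi * t**2)   # L2-normalized Gaussian h_0
for z in (0.3, 1 + 1j, -2j):
    print(z, bargmann(h0, z))   # ~ 1.0 + 0j in every case
```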
## 3. Construction of the system Since the eigenvalues \(\lambda_{n}(I,J)\) for intervals \(I,J\) depend only on the product \(|I||J|\), we will assume without loss of generality that \(I=J=[-\frac{\sqrt{c}}{2},\frac{\sqrt{c}}{2}]\). Let us fix an \(\varepsilon>0\) in Theorem 1.4 and consider the set of disks \(D_{1},D_{2},\ldots,D_{N}\) from Lemma 1.5 corresponding to this \(\varepsilon\). By \(w_{m}\) and \(r_{m}\) we denote the center and radius of \(D_{m}\), respectively. We consider the following set of functions from \(\mathcal{F}\) \[\mathcal{U}=\{T_{\sqrt{c}w_{m}}\sqrt{\frac{\pi^{k}}{k!}}z^{k}\mid 1\leq m\leq N,0\leq k\leq c\pi r_{m}^{2}\} \tag{3.1}\] and put \(\mathcal{V}=\mathcal{B}^{-1}\mathcal{U}\) (the set \(\mathcal{V}\) consists of some time-frequency shifts of Hermite functions). Note that all functions in \(\mathcal{V}\) have \(L^{2}(\mathbb{R})\) norm equal to \(1\) and that \(\mathcal{V}\) contains at least \((1-\varepsilon)c\) elements. The goal of this section is to show that the functions from \(\mathcal{V}\) are almost orthogonal and have very strong concentration on the interval \(I\) and Fourier concentration on the interval \(J\). Specifically, we will prove the following proposition. **Proposition 3.1**.: _There exists \(\alpha=\alpha(\varepsilon)<1\) such that for large enough \(c\) the following conditions hold._ 1. _For all_ \(f,g\in\mathcal{V},f\neq g\) _we have_ \(|\langle f,g\rangle|\leq\alpha^{c}\)_._ 2. _For all_ \(f\in\mathcal{V}\) _we have_ \(||(\operatorname{Id}-P_{I})f||\leq\alpha^{c}\) _and_ \(||(\operatorname{Id}-Q_{J})f||\leq\alpha^{c}\)_._ Proof.: We begin with proving \((i)\). Since the Bargmann transform is an isometry, it is enough to bound \(|\langle\mathcal{B}f,\mathcal{B}g\rangle|\). Set \(F=\mathcal{B}f,G=\mathcal{B}g\). If \(F\) and \(G\) correspond to the same disk \(D_{m}\) then they are orthogonal, since the monomials \(z^{k}\) are pairwise orthogonal and \(T_{\sqrt{c}w_{m}}\) is an isometry of the space \(\mathcal{F}\). Thus, we can assume that \(F\) corresponds to the disk \(D_{n}\) and \(G\) corresponds to the disk \(D_{m}\), \(m\neq n\). Put \(v_{n}=\sqrt{c}w_{n},v_{m}=\sqrt{c}w_{m}\) and assume that \(F=T_{v_{n}}\sqrt{\frac{\pi^{k}}{k!}}z^{k},G=T_{v_{m}}\sqrt{\frac{\pi^{l}}{l!}}z^{l}\). We are going to estimate \(|\langle F,G\rangle|\) directly. We have \[|\langle F,G\rangle|\leq\int_{\mathbb{C}}|F(z)||G(z)|e^{-\pi|z|^{2}}dz\leq\] \[\int_{|z-v_{n}|>\sqrt{c}(r_{n}+\frac{\gamma}{2})}|F(z)||G(z)|e^{-\pi|z|^{2}}dz+\int_{|z-v_{m}|>\sqrt{c}(r_{m}+\frac{\gamma}{2})}|F(z)||G(z)|e^{-\pi|z|^{2}}dz,\] where in the second inequality we used that \(D_{n}\) and \(D_{m}\) are at least \(\gamma\) apart. We will estimate only the first of these two integrals since the estimate for the other one is similar. Since \(||G||_{\mathcal{F}}=1\), we have \(|G(z)|e^{-\frac{\pi}{2}|z|^{2}}\leq 1\). Therefore \[\int_{|z-v_{n}|>\sqrt{c}(r_{n}+\frac{\gamma}{2})}|F(z)||G(z)|e^{-\pi|z|^{2}}dz\leq\int_{|z-v_{n}|>\sqrt{c}(r_{n}+\frac{\gamma}{2})}|F(z)|e^{-\frac{\pi}{2}|z|^{2}}dz.\] We will take this integral in polar coordinates with \(|z-v_{n}|=r,\arg(z-v_{n})=\theta\). This way we get \[2\pi\sqrt{\frac{\pi^{k}}{k!}}\int_{\sqrt{c}(r_{n}+\frac{\gamma}{2})}^{\infty}r^{k+1}e^{-\frac{\pi}{2}r^{2}}dr.\] Note that to justify this formula we can either do a direct computation or first apply \(T_{v_{n}}^{-1}\) to the whole thing since it is an isometry of \(\mathcal{F}\). Next, we do the change of variables \(s=r^{2}\) and get \[\pi\sqrt{\frac{\pi^{k}}{k!}}\int_{c(r_{n}+\frac{\gamma}{2})^{2}}^{\infty}s^{\frac{k}{2}}e^{-\frac{\pi}{2}s}ds.\] Observe that the expression in the integral is decreasing in \(s\). Indeed, differentiating it we can see that it is decreasing if \(k<\pi s\), and on the domain of integration we have \(\pi s\geq\pi c(r_{n}+\frac{\gamma}{2})^{2}>c\pi r_{n}^{2}\geq k\). Therefore, the integral is at most \[\sum_{p=0}^{\infty}\left(p+c\left(r_{n}+\frac{\gamma}{2}\right)^{2}\right)^{\frac{k}{2}}e^{-\frac{\pi}{2}(p+c(r_{n}+\frac{\gamma}{2})^{2})}.\] Let us estimate the ratio of two consecutive elements of this sum, setting \(p+c(r_{n}+\frac{\gamma}{2})^{2}=s\) for brevity.
We have \[\left(\frac{s+1}{s}\right)^{\frac{k}{2}}e^{-\frac{\pi}{2}}\leq e^{\frac{k}{2s}-\frac{\pi}{2}}\leq e^{\frac{c\pi r_{n}^{2}}{2c(r_{n}+\frac{\gamma}{2})^{2}}-\frac{\pi}{2}}=e^{\frac{\pi}{2}\left(\frac{1}{(1+\frac{\gamma}{2r_{n}})^{2}}-1\right)},\] where in the first inequality we used the well-known fact that \((1+\frac{1}{s})^{s}\leq e\). Therefore, the ratio of any two consecutive terms is at most some \(\beta=\beta(\varepsilon)<1\). Hence, the whole sum is at most the first term multiplied by some constant depending only on \(\varepsilon\). Thus, the integral is at most \[C_{\varepsilon}\pi\sqrt{\frac{\pi^{k}}{k!}}\left(c\left(r_{n}+\frac{\gamma}{2}\right)^{2}\right)^{\frac{k}{2}}e^{-\frac{\pi}{2}c(r_{n}+\frac{\gamma}{2})^{2}}.\] The last ingredient that we need is the classical inequality \(k!\geq\left(\frac{k}{e}\right)^{k}\). Collecting everything, we get the upper bound \[C_{\varepsilon}\pi\left(\frac{ce\pi\left(r_{n}+\frac{\gamma}{2}\right)^{2}}{k}\right)^{\frac{k}{2}}e^{-\frac{\pi}{2}c(r_{n}+\frac{\gamma}{2})^{2}}.\] Differentiating this quantity with respect to \(k\) we can see that it is increasing for \(k<c\pi\left(r_{n}+\frac{\gamma}{2}\right)^{2}\), in particular we can without loss of generality assume that \(k=c\pi r_{n}^{2}\). Substituting this value we get \[C_{\varepsilon}\pi\left(e\left(1+\frac{\gamma}{2r_{n}}\right)^{2}\right)^{\frac{c\pi r_{n}^{2}}{2}}e^{-\frac{\pi}{2}c(r_{n}+\frac{\gamma}{2})^{2}}=C_{\varepsilon}\pi\left(e\left(1+\frac{\gamma}{2r_{n}}\right)^{2}e^{-(1+\frac{\gamma}{2r_{n}})^{2}}\right)^{\frac{c\pi r_{n}^{2}}{2}}.\] Let \(u=\left(1+\frac{\gamma}{2r_{n}}\right)^{2}-1>0\). Then the quantity in the brackets is \((1+u)e^{-u}=\nu<1\), therefore the whole expression is \[C_{\varepsilon}\pi\nu^{c\frac{\pi r_{n}^{2}}{2}},\] which is exponentially small in \(c\), as required. Doing the same for the second integral we get \(C_{\varepsilon}^{\prime}\pi\nu^{\prime\,c\frac{\pi r_{m}^{2}}{2}}\) with the analogous \(\nu^{\prime}=\nu^{\prime}(n,m)<1\). Let \(\nu_{0}\) be the maximum of all such \(\nu\) over all pairs \((n,m)\), \(c_{\varepsilon}\) be the maximum of \(C_{\varepsilon}\) over all pairs \((n,m)\) and \(r\) be the minimum of \(r_{n}\). Then we get the upper bound \[2c_{\varepsilon}\pi\nu_{0}^{c\frac{\pi r^{2}}{2}}.\] Choosing \(\alpha\) with \(\nu_{0}^{\frac{\pi r^{2}}{2}}<\alpha<1\) and taking \(c\) large enough, we can make this quantity less than \(\alpha^{c}\), as required. Now, we turn to proving \((ii)\). We will only bound \(||(\operatorname{Id}-P_{I})f||\), since the Fourier transform corresponds to the 90 degrees rotation of the complex plane, and when we rotate the square with center at the origin we get back the same square (in fact, with slight tweaking of the proof of Lemma 1.5 we can achieve that the set \(\mathcal{V}\) is invariant under the Fourier transform). We have \[||(\operatorname{Id}-P_{I})f||^{2}=\int_{-\infty}^{-\frac{\sqrt{c}}{2}}|f(t)|^{2}dt+\int_{\frac{\sqrt{c}}{2}}^{\infty}|f(t)|^{2}dt.\] We will only bound the second term, the other one being similar. To estimate it, we use duality: \[\int_{\frac{\sqrt{c}}{2}}^{\infty}|f(t)|^{2}dt=\sup_{\operatorname{supp}g\subset[\frac{\sqrt{c}}{2},+\infty),||g||_{2}=1}|\langle f,g\rangle|^{2}.\] Since the Bargmann transform is an isometry, this inner product is equal to \(\langle\mathcal{B}f,\mathcal{B}g\rangle\). Recall that \(\mathcal{B}f\) is just a \(T_{w}\) shift of some normalized monomial.
Specifically, let \(f\) correspond to the disk \(D_{n}\), with center \(w_{n}\) and radius \(r_{n}\), so that \(\mathcal{B}f=F=T_{v_{n}}\sqrt{\frac{\pi^{k}}{k!}}z^{k}\), where \(v_{n}=\sqrt{c}w_{n}\). We want to obtain an upper bound for \[\int_{\mathbb{C}}|F(z)||G(z)|e^{-\pi|z|^{2}}dz,\] where \(G=\mathcal{B}g\). We will again split this integral into two integrals \[\int_{|z-v_{n}|\geq\sqrt{c}(r_{n}+\frac{\gamma}{2})}|F(z)||G(z)|e^{-\pi|z|^{2}}dz+\int_{|z-v_{n}|<\sqrt{c}(r_{n}+\frac{\gamma}{2})}|F(z)||G(z)|e^{-\pi|z|^{2}}dz.\] For the first integral we bound \(|G(z)|e^{-\frac{\pi}{2}|z|^{2}}\leq 1\) and proceed exactly like in part \((i)\). For the second integral, we instead use the estimate \(|F(z)|e^{-\frac{\pi}{2}|z|^{2}}\leq 1\), and so we need to obtain a stronger pointwise bound for \(|G(z)|\) when \(|z-v_{n}|\leq\sqrt{c}(r_{n}+\frac{\gamma}{2})\). We will do this by applying the Cauchy-Schwarz inequality to the definition of the Bargmann transform (2.1). Note that this will be the only place where we use an explicit formula for the Bargmann transform. We have \[e^{-\frac{\pi}{2}|z|^{2}}\mathcal{B}g(z)=e^{-\frac{\pi}{2}|z|^{2}}2^{1/4}\int_{\frac{\sqrt{c}}{2}}^{\infty}g(t)e^{2\pi tz-\pi t^{2}-\frac{\pi}{2}z^{2}}dt.\] Applying the Cauchy-Schwarz inequality we get \[\left|e^{-\frac{\pi}{2}|z|^{2}}\mathcal{B}g(z)\right|^{2}\leq e^{-\pi|z|^{2}}2^{1/2}\int_{\frac{\sqrt{c}}{2}}^{\infty}e^{\operatorname{Re}(4\pi tz-2\pi t^{2}-\pi z^{2})}dt.\] This quantity can be seen to be the tail of a Gaussian distribution. This can be checked directly, but to guess it we can use the following heuristic: if the lower limit were \(-\infty\) then this would have been just \(1\) by the pointwise bound in the Bargmann-Segal-Fock space, and by applying the shift \(T_{z}^{-1}\) we will get the same result as if we were integrating from \(\frac{\sqrt{c}}{2}-\operatorname{Re}z\). So, we have the bound \[\left|e^{-\frac{\pi}{2}|z|^{2}}\mathcal{B}g(z)\right|^{2}\leq\sqrt{2}\int_{\frac{\sqrt{c}}{2}-\operatorname{Re}z}^{\infty}e^{-2\pi t^{2}}dt.\] Observe that if \(|z-v_{n}|\leq\sqrt{c}(r_{n}+\frac{\gamma}{2})\) then \(\operatorname{Re}z\leq\frac{\sqrt{c}}{2}(1-\gamma)\), because the disk \(D_{n}\) is at least \(\gamma\) away from the boundary of the square \((-\frac{1}{2},\frac{1}{2})\times(-\frac{1}{2},\frac{1}{2})\). Therefore, for such \(z\) we have \[\left|e^{-\frac{\pi}{2}|z|^{2}}\mathcal{B}g(z)\right|^{2}\leq\sqrt{2}\int_{\frac{\gamma\sqrt{c}}{2}}^{\infty}e^{-2\pi t^{2}}dt.\] As in part \((i)\), we observe that the function \(e^{-2\pi t^{2}}\) is decreasing and \(e^{-2\pi(t+1)^{2}+2\pi t^{2}}\leq e^{-2\pi}<1\) for \(t\geq 0\), therefore the integral is at most \(C_{\varepsilon}\) multiplied by \(e^{-2\pi\left(\frac{\gamma\sqrt{c}}{2}\right)^{2}}\) for some \(C_{\varepsilon}>0\). That is, we have \[\left|e^{-\frac{\pi}{2}|z|^{2}}\mathcal{B}g(z)\right|\leq\sqrt{C_{\varepsilon}\sqrt{2}}e^{-\frac{\pi\gamma^{2}}{4}c}.\] When we integrate this bound over the disk \(|z-v_{n}|\leq\sqrt{c}(r_{n}+\frac{\gamma}{2})\) we multiply by the area of this disk. Since it is contained in the square \((-\frac{\sqrt{c}}{2},\frac{\sqrt{c}}{2})\times(-\frac{\sqrt{c}}{2},\frac{\sqrt{c}}{2})\), its area is at most \(c\). Thus, the integral is at most \[c\sqrt{C_{\varepsilon}\sqrt{2}}e^{-\frac{\pi\gamma^{2}}{4}c}.\] This is exponentially decreasing in \(c\), thus we proved that for some \(\alpha<1\) we have \(|\langle F,G\rangle|\leq\alpha^{c}\) if \(c\) is big enough, as required. ## 4. Proof of Theorem 1.4
Let us take a subset \(\mathcal{V}_{0}\subset\mathcal{V}\) of size \(n=[(1-\varepsilon)c]\) and let \(V\subset L^{2}(\mathbb{R})\) be the subspace generated by \(\mathcal{V}_{0}\). We will apply the min-max principle to \(V\) to deduce the lower bound for \(\lambda_{n}(c)\) (in particular, we will show that the functions in \(\mathcal{V}_{0}\) are linearly independent, so that the dimension of \(V\) is indeed \([(1-\varepsilon)c]\)). We start by estimating \(P_{I}Q_{J}P_{I}f\) for \(f\in\mathcal{V}\). We have \[\left|\left|(\operatorname{Id}-Q_{J}P_{I})f\right|\right|=\left|\left|(\operatorname{Id}-Q_{J})f+Q_{J}(\operatorname{Id}-P_{I})f\right|\right|\leq\left|\left|(\operatorname{Id}-Q_{J})f\right|\right|+\left|\left|(\operatorname{Id}-P_{I})f\right|\right|\leq 2\alpha^{c},\] where we used \(||Q_{J}||\leq 1\) since \(Q_{J}\) is a projection. Doing this one more time we get \[||(\operatorname{Id}-P_{I}Q_{J}P_{I})f||=||P_{I}(\operatorname{Id}-Q_{J}P_{I})f+(\operatorname{Id}-P_{I})f||\leq||(\operatorname{Id}-Q_{J}P_{I})f||+||(\operatorname{Id}-P_{I})f||\leq 3\alpha^{c}.\] Let \(\mathcal{V}_{0}=\{f_{1},f_{2},\ldots,f_{n}\}\) and consider \(f(t)=\sum_{k=1}^{n}a_{k}f_{k}(t)\) where not all \(a_{k}\) are equal to zero. We are interested in the quantity \(\frac{||P_{I}Q_{J}P_{I}f||}{||f||}\). We have \[||f||^{2}=\sum_{k=1}^{n}|a_{k}|^{2}+\sum_{1\leq k,l\leq n,k\neq l}a_{k}\bar{a}_{l}\langle f_{k},f_{l}\rangle.\] From Proposition 3.1 we know that \(|\langle f_{k},f_{l}\rangle|\leq\alpha^{c}\). Therefore, we can crudely bound \[||f||^{2}\geq\sum_{k=1}^{n}|a_{k}|^{2}-\left(\sum_{k=1}^{n}|a_{k}|\right)^{2}\alpha^{c}.\] Now, we can show that \(\mathcal{V}_{0}\) is linearly independent. By the inequality between the arithmetic mean and the quadratic mean we have \(\sum_{k=1}^{n}|a_{k}|^{2}\geq\frac{1}{n}\left(\sum_{k=1}^{n}|a_{k}|\right)^{2}\), therefore, if \(\alpha^{c}<\frac{1}{2c}\), we have \[||f||^{2}\geq(1-n\alpha^{c})\sum_{k=1}^{n}|a_{k}|^{2}\geq\frac{1}{2}\sum_{k=1}^{n}|a_{k}|^{2}, \tag{4.1}\] in particular it is non-zero (here we used the crude bound \(n\leq c\)). Similarly, we can estimate \(||P_{I}Q_{J}P_{I}f||\). We have \[||(\operatorname{Id}-P_{I}Q_{J}P_{I})f||=||\sum_{k=1}^{n}(\operatorname{Id}-P_{I}Q_{J}P_{I})a_{k}f_{k}||\leq\sum_{k=1}^{n}|a_{k}|||(\operatorname{Id}-P_{I}Q_{J}P_{I})f_{k}||\leq 3\alpha^{c}\sum_{k=1}^{n}|a_{k}|.\] By the triangle inequality this implies \[||f||-3\alpha^{c}\sum_{k=1}^{n}|a_{k}|\leq||P_{I}Q_{J}P_{I}f||.\] Dividing this by \(||f||\) we get \[\frac{||P_{I}Q_{J}P_{I}f||}{||f||}\geq 1-\frac{3\alpha^{c}\sum_{k=1}^{n}|a_{k}|}{||f||}.\] Using the estimate (4.1) and the inequality between the arithmetic mean and the quadratic mean, this is at least \(1-3\sqrt{2c}\alpha^{c}\). Choosing \(\alpha<\delta<1\) and \(c\) large enough, this is at least \(1-\delta^{c}\). By the min-max principle this means that \(\lambda_{n}(c)\geq 1-\delta^{c}\), as required.
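The almost-orthogonality mechanism behind this argument is completely explicit in the \(k=0\) case, where the shifted basis elements \(T_{v}1\) are the usual coherent states and \(|\langle T_{v}1,T_{w}1\rangle_{\mathcal{F}}|=e^{-\frac{\pi}{2}|v-w|^{2}}\). A toy numerical check (our own illustration; the five centers are an arbitrary configuration, not the packing of Lemma 1.5) of how quickly the Gram matrix approaches the identity as \(c\) grows, via Gershgorin's theorem:

```python
import numpy as np

# disk centers inside (-1/2, 1/2)^2, pairwise separated (toy configuration)
centers = np.array([0.25 + 0.25j, -0.25 + 0.25j, 0.25 - 0.25j,
                    -0.25 - 0.25j, 0.0 + 0.0j])

for c in (10, 50, 100):
    v = np.sqrt(c) * centers                 # scaled centers, as in Section 3
    dist = np.abs(v[:, None] - v[None, :])
    G = np.exp(-np.pi * dist**2 / 2)         # |<T_v 1, T_w 1>| for k = l = 0
    off = G - np.eye(len(v))
    # Gershgorin: every Gram eigenvalue is >= 1 - max row sum of |off-diagonal|
    print(c, 1 - off.sum(axis=1).max())      # -> 1 exponentially fast in c
```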
## 5. Proof of Lemma 1.5 We will prove by induction on \(n\) that there exists a finite set of disjoint closed disks in \(R=(-\frac{1}{2},\frac{1}{2})\times(-\frac{1}{2},\frac{1}{2})\) with total measure at least \(1-2^{-n}\). For \(n=1\) we can take the disk with center at the origin and radius \(0.49\); its area is \(\pi\cdot 0.49^{2}>\frac{1}{2}\). Assume that we have already constructed disks \(D_{1},\ldots,D_{m}\) with total measure at least \(1-2^{-n}\). We want to add some disks so that the total measure becomes at least \(1-2^{-n-1}\). Let us pick a very large integer \(N\), to be determined later, and slice \(R\) into squares of side length \(\frac{1}{N}\). We will split the squares into three groups: squares in the group \(A\) are those which lie entirely inside one of the disks, squares in the group \(B\) are those which do not intersect any of the disks, and squares in the group \(C\) are those which intersect the boundary of one of the disks. We claim that there exists a constant \(c_{0}\), independent of \(N\) (but depending on the already chosen disks \(D_{1},\ldots,D_{m}\)), such that the number of squares in \(C\) is at most \(c_{0}N\). Indeed, let us consider any square \(H\) from the group \(C\) and assume that it intersects the boundary of \(D_{l}\). Since the diameter of \(H\) is \(\frac{\sqrt{2}}{N}\), the square \(H\) lies entirely in the \(\frac{\sqrt{2}}{N}\)-vicinity of the circumference of \(D_{l}\). Therefore, the total measure of the squares in \(C\) is not greater than the total measure of the union of the \(\frac{\sqrt{2}}{N}\)-vicinities of the circumferences of the disks \(D_{1},\ldots,D_{m}\). Let the radius of \(D_{l}\) be \(r_{l}\). Then the measure of its \(\frac{\sqrt{2}}{N}\)-vicinity is \[\pi\left(r_{l}+\frac{\sqrt{2}}{N}\right)^{2}-\pi\left(r_{l}-\frac{\sqrt{2}}{N}\right)^{2}=\frac{\pi 4\sqrt{2}r_{l}}{N},\] assuming that \(N\) is chosen so big that \(\frac{\sqrt{2}}{N}\) is smaller than the radius of every disk \(D_{l}\). Therefore, the measure of the union of these vicinities is at most \[\frac{\pi 4\sqrt{2}\sum_{l=1}^{m}r_{l}}{N}.\] Since each square \(H\) from our decomposition has area \(\frac{1}{N^{2}}\), there can be at most \(\pi 4\sqrt{2}\sum_{l=1}^{m}r_{l}N\) squares in the set \(C\), so \(c_{0}=\pi 4\sqrt{2}\sum_{l=1}^{m}r_{l}\) works. We have \(|A|+|B|+|C|=N^{2}\). Let \(\mu\) be the measure of the union of the already chosen disks \(D_{1},\ldots,D_{m}\). By the induction hypothesis \(\mu\geq 1-2^{-n}\). For each square in \(B\) we will consider the disk with center at the center of the square and radius \(\frac{0.49}{N}\). Clearly, these disks are pairwise disjoint, they are disjoint from the already chosen disks \(D_{1},\ldots,D_{m}\), and they lie inside of \(R\) (since \(0.49<0.50\), they don't even touch each other or the boundary of \(R\), as required). Their total measure is \(\frac{|B|0.49^{2}\pi}{N^{2}}\geq\frac{3|B|}{4N^{2}}\). Adding them to our set, the total measure we get is at least \(\frac{3|B|}{4N^{2}}+\mu\). It remains to show that this is at least \(1-2^{-n-1}\). Since all squares from \(A\) lie inside of the disks \(D_{1},\ldots,D_{m}\), we have \(\frac{|A|}{N^{2}}\leq\mu\). Subtracting this inequality from \(1=\frac{|A|+|B|+|C|}{N^{2}}\), we get \[1-\mu\leq\frac{|B|+|C|}{N^{2}}.\] Subtracting further \(\frac{3|B|}{4N^{2}}\) from both sides, we obtain \[1-\mu-\frac{3|B|}{4N^{2}}\leq\frac{|B|+4|C|}{4N^{2}}. \tag{5.1}\] Since all the squares from \(B\) do not intersect any of the disks \(D_{1},\ldots,D_{m}\), their total measure is at most \(1-\mu\), that is \(\frac{|B|}{N^{2}}\leq 1-\mu\leq 2^{-n}\). Plugging this into (5.1) and recalling that \(|C|\leq c_{0}N\) we get \[1-\mu-\frac{3|B|}{4N^{2}}\leq 2^{-n-2}+\frac{c_{0}}{N}.\] Choosing \(N\) so that \(\frac{c_{0}}{N}<2^{-n-2}\) we get the desired result. ### Acknowledgments I would like to thank Fabio Nicola, Kristian Seip and Mikhail Sodin for helpful discussions. This work was supported by BSF Grant 2020019, ISF Grant 1288/21, and by The Raymond and Beverly Sackler Post-Doctoral Scholarship.
2304.14638
Entanglement of Magnetically Levitated Massive Schrödinger Cat States by Induced Dipole Interaction
Quantum entanglement provides a novel way to test short-distance quantum physics in a non-relativistic regime. We provide entanglement-based protocols to potentially test the magnetically induced dipole-dipole interaction and the Casimir-Polder potential between the two nano-crystals kept in a Schrodinger Cat state. Our scheme is based on the Stern-Gerlach (SG) apparatus, where we can witness the entanglement mediated by these interactions for the nano-crystal mass m~10^-19 kg with a spatial superposition size of order 0.1 micron in a trap relying on diamagnetic levitation. We show that it is possible to close the SG interferometer in position and momentum with a modest gradient in the magnetic field.
Ryan J. Marshman, Sougato Bose, Andrew Geraci, Anupam Mazumdar
2023-04-28T05:54:33Z
http://arxiv.org/abs/2304.14638v1
# Entanglement of Magnetically Levitated Massive Schrodinger Cat States by Induced Dipole Interaction ###### Abstract Quantum entanglement provides a novel way to test short-distance quantum physics in a non-relativistic regime. We provide entanglement-based protocols to potentially test the magnetically induced dipole-dipole interaction and the Casimir-Polder potential between the two nano-crystals kept in a Schrodinger Cat state. Our scheme is based on the Stern-Gerlach (SG) apparatus, where we can witness the entanglement mediated by these interactions for the nano-crystal mass \(m\sim 10^{-19}\) kg with a spatial superposition size of order 0.1 micron in a trap relying on diamagnetic levitation. We show that it is possible to close the SG interferometer in position and momentum with a modest gradient in the magnetic field. ## I Introduction Quantum entanglement is a critical observable that demarcates the classical from the quantum world [1]. Entanglement provides an unquestionable quantum signature that cannot be mimicked by any classical operation between the two quantum systems. In fact, a well-known theorem on local operations and classical communication (LOCC) prohibits entangling two quantum systems via a classical interaction [2]. After all, the Standard Model (SM) interactions are known to be quantum, and quantum entanglement-based protocols allow us to test how various forces of nature can generate entanglement in a laboratory setup [3]. At short distances, but still in the infrared (IR) regime, we can test virtual photon-induced interactions [4], such as the Coulomb, Casimir-Polder [5], and magnetically induced dipole-dipole interactions [6]. In search of an answer to the profound question, "does gravity follow the rules of quantum mechanics or not?", witnessing gravitationally mediated entanglement has been proposed as a key protocol to test the quantum nature of gravity in a laboratory [7; 8] (for related work see [9]). The scheme relies on two masses, each prepared in a spatial superposition and placed at distances where they couple solely gravitationally. If gravity follows the rules of a quantum interaction, and not those of a classical real-valued field, then the two masses will entangle [10; 11; 8; 12; 13; 14]. However, to test the quantum nature of gravity we will need heavy masses of \(10^{-14}-10^{-15}\) kg, large spatial superpositions of \(10-100\mu\)m, and long coherence times of \(1-2\) seconds. Furthermore, one can also test the quantum origin of the gravitational interaction between quantum matter and light [15], which will enable us to understand the spin nature of the gravitational interaction. All these protocols are commonly known as quantum gravity-induced entanglement of masses (QGEM) [8]. One crucial ingredient for all these experiments is to understand the entanglement generation from the known, well-established photon-induced electromagnetic interactions. The photon-induced entanglement will create a background that it is necessary to understand before we can perform the QGEM experiment [4; 16]. The aim of this paper is to show that there exists a very natural electromagnetic background which is an inevitable consequence of any neutral nano-crystal in the presence of an external magnetic field. The external magnetic field, which we require for trapping the nano-crystal and eventually for creating the quantum superposition, will induce a magnetic dipole in the nano-crystal.
The two nano-crystals in a quantum superposition will get entangled by the electromagnetic interaction, and hence it will provide a dominant background for the QGEM experiment at short distances. The use of a Stern-Gerlach (SG) apparatus is one of the most promising approaches towards atom interferometry [17; 18; 19; 20; 21; 22]. However, for larger objects such as nanoparticles we can also exploit the SG properties to create a spatial superposition, see [23; 24; 25; 26; 27; 28]. Such interferometers have already been realized using atom chips [19; 20; 22], for both half-loop [18] and full-loop [22] configurations, achieving superposition sizes of 3.93 \(\mu\)m and 0.38 \(\mu\)m in experimental times of 21.45 ms and 7 ms, respectively. Based on this SG scheme there have been theoretical studies to create even more ambitious superposition sizes [26; 27; 28]. In all these cases, the idea is to manipulate the nitrogen vacancy (NV) center of a nano-diamond. The NV center provides a spin defect, which can be manipulated in the presence of an external magnetic field. One can use the spins of two adjacent interferometers to create the entanglement witness by measuring spin correlations [29; 8]. Of course, one of the key challenges is to understand numerous sources of decoherence and noise [30; 31; 32; 33; 34; 35; 36]. They arise from residual gas collisions and environmental photons, which can be attenuated by vacuum and low-temperature technologies [37; 38]. In addition, the Humpty-Dumpty effect [39] must be tackled, along with internal cooling of the nanodiamond to improve the spin coherence time [40; 41; 42; 43] and the Majorana spin-flip; solutions for these are under development [25]. Moreover, there are also a series of gravitational channels for dephasing [36; 44]; the emission of gravitons is negligible [45], gravity gradient noise (GGN) can be mitigated with an exclusion zone [36], and relative acceleration can be mitigated by improving the vacuum and isolating the experimental box as much as possible. In this work, we will aim to understand two particular EM-induced potentials which will inevitably entangle any two quantum systems, even if we assume that the monopole contribution is neutralized, which is experimentally feasible [46]. The two potentials are the Casimir-Polder (CP) potential and the dipole-dipole (DD) potential due to the induced dipole moment in the presence of an external magnetic field, present in the SG setup. In this paper, we will provide a superposition scheme in a levitated setup to witness the entanglement due to these potentials. We will show that with a modest magnetic field gradient in the SG setup, we can measure the entanglement witness for both the CP and DD potentials for a neutral nano-diamond of mass \(m\sim 10^{-19}\) kg and a spatial superposition of \(0.1\mu\)m. A diamagnetic material in a magnetic trapping potential will evolve according to the Hamiltonian \[\hat{H}=\frac{\hat{\mathbf{p}}^{2}}{2m}+\hbar D\hat{S}_{z}+g_{s}\frac{e\hbar}{2m_{e}}\hat{\mathbf{S}}\cdot\mathbf{B}+mg\hat{y}-\frac{\chi_{\rho}m}{2\mu_{0}}\mathbf{B}^{2} \tag{1}\] where the first term in Eq. (1) represents the kinetic energy of the nanodiamond, \(\hat{\mathbf{p}}\) is the momentum operator and \(m\) is the mass of the nanodiamond.
The second term represents the zero-field splitting of the NV center with \(D=(2\pi)\times 2.8\) GHz, \(\hbar\) is the reduced Planck constant, and \(\hat{S}_{z}\) is the spin component operator aligned with the NV axis. The third term represents the interaction energy of the NV electron spin magnetic moment with the magnetic field \(\mathbf{B}\). The spin magnetic moment operator is \(\hat{\mathbf{\mu}}=-g_{s}\mu_{B}\hat{\mathbf{S}}\), where \(g_{s}\approx 2\) is the Lande g-factor, \(\mu_{B}=e\hbar/2m_{e}\) is the Bohr magneton and \(\hat{\mathbf{S}}\) is the NV spin operator. The fourth term is the gravitational potential energy, where \(g\approx 9.8\) m/s\({}^{2}\) is the gravitational acceleration and \(\hat{y}\) is the position operator along the direction of gravity (the \(y\) axis). The final term represents the magnetic energy of a diamagnetic material (diamond) in a magnetic field, \(\chi_{\rho}=-6.2\times 10^{-9}\) m\({}^{3}\)/kg is the mass susceptibility and \(\mu_{0}\) is the vacuum permeability. For the purpose of our scheme, we will make use of a well-known trap profile \(\mathbf{B}_{T}\) given by [47] \[\mathbf{B}_{T}= -\left[\frac{3a_{4}\sqrt{\frac{35}{\pi}}x^{2}y}{8y_{0}^{3}}+\frac{3a_{4}\sqrt{\frac{35}{\pi}}y\left(x^{2}-y^{2}\right)}{16y_{0}^{3}}-\frac{a_{3}\sqrt{\frac{7}{6\pi}}x^{2}}{y_{0}^{2}}+\frac{a_{2}\sqrt{\frac{15}{\pi}}y}{4y_{0}}+\frac{a_{3}\sqrt{\frac{7}{6\pi}}\left(-x^{2}-y^{2}+4z^{2}\right)}{2y_{0}^{2}}\right]\hat{x}\] \[-\left[-\frac{3a_{4}\sqrt{\frac{35}{\pi}}xy^{2}}{8y_{0}^{3}}+\frac{3a_{4}\sqrt{\frac{35}{\pi}}x\left(x^{2}-y^{2}\right)}{16y_{0}^{3}}-\frac{a_{3}\sqrt{\frac{7}{6\pi}}xy}{y_{0}^{2}}+\frac{a_{2}\sqrt{\frac{15}{\pi}}x}{4y_{0}}\right]\hat{y}-\left[\frac{2a_{3}\sqrt{\frac{14}{3\pi}}xz}{y_{0}^{2}}\right]\hat{z} \tag{2}\] where \(y_{0}=75\)\(\mu\)m is the distance from the center of the trap to the pole pieces which help generate the trap and \(a_{2}=-1.3\) T, \(a_{3}=0.0183\) T, and \(a_{4}=0.72\) T determine the magnetic field strength. We will take the particle at time \(t=0\) to be at rest in the trapping potential, with an initial spin state \(\frac{1}{\sqrt{2}}\left(\left|+1\right\rangle+\left|-1\right\rangle\right)\). The trapping potential is given by \(U=-(\chi_{\rho}mB^{2}/2\mu_{0})+mgy\); since \(\chi_{\rho}<0\), the particle can be trapped at a frequency \(\omega_{\zeta}=\sqrt{-(\chi_{\rho}/2\mu_{0})(\partial^{2}B^{2}/\partial\zeta^{2})}\), where \(\zeta=x,y,z\). A large-gradient, linear magnetic field \(\vec{B}_{P}\) is then pulsed on for a short time \(t_{p}\), which acts to create a momentum difference correlated with the internal spin state of the particle. The particle then evolves in the weakly trapping potential for some time \(t_{T}\) before the large-gradient linear magnetic field is again turned on for a further time \(t_{p}\). At this point, the particle should be returned to its initial position at time \(t=2t_{p}+t_{T}\equiv T\), with the internal spin state now in the form \(\frac{1}{\sqrt{2}}\left(e^{i\phi_{+}}\left|+1\right\rangle+e^{i\phi_{-}}\left|-1\right\rangle\right)\), where the phases \(\phi_{\pm}\) are a result of the interactions with external sources.
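For concreteness, the trap frequencies \(\omega_{\zeta}\) implied by Eqs. (1)-(2) can be estimated numerically. The sketch below is our own transcription of Eq. (2), not code from the paper: it locates the equilibrium along \(y\) (including the gravitational sag) and extracts the curvatures of the potential per unit mass by finite differences; the scan range and step size are arbitrary choices.

```python
import numpy as np

mu0 = 4 * np.pi * 1e-7            # vacuum permeability [T m/A]
chi_rho = -6.2e-9                 # mass susceptibility of diamond [m^3/kg]
g = 9.8                           # [m/s^2]
y0 = 75e-6                        # [m]
a2, a3, a4 = -1.3, 0.0183, 0.72   # [T]

def B_T(x, y, z):
    # trap field of Eq. (2), transcribed term by term
    s35, s76 = np.sqrt(35/np.pi), np.sqrt(7/(6*np.pi))
    s15, s143 = np.sqrt(15/np.pi), np.sqrt(14/(3*np.pi))
    Bx = -(3*a4*s35*x**2*y/(8*y0**3) + 3*a4*s35*y*(x**2 - y**2)/(16*y0**3)
           - a3*s76*x**2/y0**2 + a2*s15*y/(4*y0)
           + a3*s76*(-x**2 - y**2 + 4*z**2)/(2*y0**2))
    By = -(-3*a4*s35*x*y**2/(8*y0**3) + 3*a4*s35*x*(x**2 - y**2)/(16*y0**3)
           - a3*s76*x*y/y0**2 + a2*s15*x/(4*y0))
    Bz = -(2*a3*s143*x*z/y0**2)
    return Bx, By, Bz

def U(x, y, z):
    # trapping potential per unit mass: -chi_rho B^2/(2 mu0) + g y
    Bx, By, Bz = B_T(x, y, z)
    return -chi_rho*(Bx**2 + By**2 + Bz**2)/(2*mu0) + g*y

ys = np.linspace(-50e-6, 50e-6, 5001)            # scan for the sagged equilibrium
y_eq = ys[np.argmin([U(0.0, yy, 0.0) for yy in ys])]

h = 1e-7                                         # finite-difference step [m]
for ax, (dx, dy, dz) in {"x": (h, 0, 0), "y": (0, h, 0), "z": (0, 0, h)}.items():
    curv = (U(dx, y_eq + dy, dz) - 2*U(0, y_eq, 0) + U(-dx, y_eq - dy, -dz)) / h**2
    print(f"omega_{ax}/2pi ~ {np.sqrt(max(curv, 0.0)) / (2*np.pi):.0f} Hz")
print(f"y_eq ~ {y_eq * 1e6:.1f} um")
```

With these inputs the \(y\) frequency comes out roughly an order of magnitude above the \(z\) frequency, consistent with the weak-splitting-direction picture described below.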
To create the superposition we will first assume the pulsed magnetic field is linear over the region experienced by the particle, with a profile \[\mathbf{B}_{p}=\eta\left(y(t=0)-y\right)\hat{y}+\eta z\hat{z} \tag{3}\] where \(\eta\) gives the magnetic field gradient and \(y(t=0)\) is the initial position of the particle in the \(y\) direction, which is away from the minimum of the potential for the particle. We wish to start high up in the potential to gain large momentum, which will help the two spin states to get separated in the \(z\) direction as much as possible. With this definition of the pulse field, it minimally disrupts the trapping field, having zero magnitude (although non-zero gradient) at the particle's location (\(x=0\), \(y\approx y(t=0)\), and \(z\approx 0\)). We can control the average magnitude of the diamagnetically induced magnetic field of the diamond by initially displacing the particle in the \(y\) direction. This will result in the diamond oscillating in the \(y\) direction according to the trapping frequency in this direction. This trapping frequency in the \(y\) direction will be much higher than that in the \(z\) (splitting) direction, as the particle is intended to be more tightly confined along \(y\). This oscillation is seen in Figure 1(b), which shows an example space-time trajectory for each arm of the interferometer if it is allowed to oscillate for a full period before the pulsed magnetic field is re-applied to close the spatial superposition. The dual oscillatory behavior is due to the mass not occupying the ground state of the trapping potentials in both the \(y\) and \(z\) directions, with the \(y\) trapping frequency much larger than that in the \(z\) direction, i.e. \(\omega_{y}\gg\omega_{z}\). To ensure that the spatial wavefunctions of the two arms completely match at time \(t=T\), it is necessary to ensure the trapping frequency in the \(y\) direction is an integer multiple of that in the \(z\) direction, such that both the position and momentum match at the final time. It will also be necessary to ensure the pulsed magnetic field opening and closing the superposition matches in both magnitude and pulse time, with deviations from this leading to reduced contrast in the form of decoherence in the final spin states. The pulse time \(t_{p}\) is determined by the pulsed magnetic field gradient \(\eta\); to ensure the maximum momentum difference is induced between the two arms, we are interested in a quarter of the oscillation period, where the frequency is given by the diamagnetically induced magnetic field, \(\omega=\sqrt{k/m}\), with \(k\) the spring constant determined by the effective potential of the crystal, \(U_{\pm}(y)=-(\chi_{\rho}m/2\mu_{0})\eta^{2}y^{2}\): \[t_{p}=\frac{\pi}{2\eta}\sqrt{\frac{-\mu_{0}}{2\chi_{\rho}}} \tag{4}\] This time is a quarter of the oscillation period of the harmonic trap caused by the pulsed magnetic field. All results were obtained from numerical simulations of the classical equations of motion derived from the Hamiltonian given in Eq. (1). The magnetic profile used was \(\mathbf{B}=\mathbf{B}_{T}+\mathbf{B}_{p}(t)\), where \(\mathbf{B}_{T}\) is given by Eq. (2) and \(\mathbf{B}_{p}(t)=0\) any time the linear gradient pulse is switched off, otherwise it is given by Eq. (3). Thus, the pulsed magnetic field was taken to be switched on and off instantly.
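As a quick consistency check of Eq. (4), one can insert \(\chi_{\rho}=-6.2\times 10^{-9}\) m\({}^{3}\)/kg and invert for the gradient that reproduces the pulse time \(t_{p}\approx 160\,\mu\)s quoted in the caption of Figure 1; the resulting \(\eta\) is a derived number (an illustration of the scaling \(t_{p}\propto 1/\eta\)), not a value stated in the text.

```python
import numpy as np

mu0 = 4 * np.pi * 1e-7   # [T m/A]
chi_rho = -6.2e-9        # [m^3/kg]

t_p = lambda eta: (np.pi / (2 * eta)) * np.sqrt(-mu0 / (2 * chi_rho))  # Eq. (4)

# gradient consistent with the t_p ~ 160 us quoted in the Figure 1 caption:
eta = (np.pi / (2 * 160e-6)) * np.sqrt(-mu0 / (2 * chi_rho))
print(f"eta ~ {eta:.1e} T/m  ->  t_p = {t_p(eta) * 1e6:.0f} us")   # ~ 1e5 T/m
```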
Note that, as shown in Figure 1(c), due to the downward shift of the rest position of the mass in the background magnetic field caused by the pull of gravity, the zero-field region of the magnetic field can be avoided by simply initializing the mass away from it. This serves the dual purpose of generating the induced diamagnetic field in the particle and naturally avoiding Majorana spin flips. Once we have created one superposition in the trapping potential given by Eq. (2), we can imagine bringing two such trapping potentials, and hence two interferometers, close to each other, separated by a distance \(d\) as shown in Fig. 1(a). This is the parallel setup of the original QGEM proposal, studied in [33; 48; 3]. If two such interferometers are placed near one another, separated in the \(x\) direction by a distance \(d\) with the direction of the spatial splitting parallel, as shown in Figure 1(a), we expect the joint state to evolve to \[\ket{\Psi}= \frac{1}{2}\left(\ket{+1,+1}+\ket{-1,-1}\right)\] \[+\frac{e^{i\Delta\phi}}{2}\left(\ket{+1,-1}+\ket{-1,+1}\right) \tag{5}\] where \(\Delta\phi=\phi_{+-}-\phi_{++}\) and \(\phi_{ij}\) is the phase due to the interaction between the \(i\) and \(j\) arms of the interferometer. We refer to this phase difference \(\Delta\phi\) as the entanglement phase; it is in fact the sum over all particle-particle interactions, which we consider separately below. Now, we are able to experimentally probe many interesting questions, such as various particle-particle interactions mediated by photons, which may be witnessed via the entanglement phase. These may include the Casimir-Polder (CP) and induced dipole-dipole (DD) interactions, assuming that our crystal is charge neutral (experimentally feasible [46]). Depending on the chosen experimental parameters, it is likely that at least one of these interactions will be negligible; indeed, we will consider a situation where only the DD interaction produces a significant effect, with the CP interaction serving as the primary background. The potentials for each interaction are given by \[U_{cp}(d)= -\frac{23\hbar c}{4\pi}\frac{\varepsilon-1}{\varepsilon+2}\frac{\left(\frac{3m}{4\pi\rho}\right)^{2}}{d^{7}} \tag{6}\] \[U_{dd}(d)= \frac{2\chi_{\rho}^{2}m^{2}\left|\vec{B}(x)\right|^{2}}{4\pi\mu_{0}d^{3}} \tag{7}\] where \(m\), \(\rho\), \(\chi_{\rho}\) and \(\varepsilon\) are the particle's mass, density, magnetic mass susceptibility and dielectric constant, respectively, and \(d\) is the center-of-mass distance between the particles. Note that we consider the magnetic field across the particles to be approximately constant and given by the magnetic field at the centre of mass. Each of these interactions \(i\) will give rise to an entanglement phase given by \[\Delta\phi_{i}=\frac{1}{\hbar}\int_{0}^{T}\mathrm{d}t\ U_{i}\left(d_{1}(t)\right)-U_{i}\left(d_{2}(t)\right) \tag{8}\] where \(d_{1}(t)\) (\(d_{2}(t)\)) is the furthest (closest) separation distance between the two particles due to the two superposed spatial states. The entanglement can then be found by measuring an entanglement witness, such as [49] \[\mathcal{W}_{i}=1-\left(2e^{-\frac{1}{2}(\Gamma_{n}+\Gamma_{d})}\sin\left(\Delta\phi_{i}\right)+\frac{1}{2}\left(e^{-2\Gamma_{n}-\Gamma_{d}}+1\right)\right) \tag{9}\] where \(\Gamma_{d}\) and \(\Gamma_{n}\) are the damping and noise decoherence, respectively [49]. Note that, as defined here, this decoherence is the decoherence rate multiplied by the interferometer time.
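To get a feel for Eqs. (6)-(7), one can evaluate both potentials for the mass used later in the text, \(m=3.8\times 10^{-19}\) kg, and an \(\mathcal{O}(1)\) T field, together with assumed diamond properties: the density \(\rho\approx 3500\) kg/m\({}^{3}\) and dielectric constant \(\varepsilon\approx 5.7\) are our assumptions, not values given in the paper. With these numbers the two magnitudes cross at a few \(\mu\)m; the \(d\approx 6\,\mu\)m crossover quoted below refers to the entanglement phases of Eq. (8), which also depend on the actual field at the particles and on the superposition size, so this sketch is only indicative.

```python
import numpy as np
from scipy.optimize import brentq

hbar, c_light = 1.054571817e-34, 2.99792458e8
mu0 = 4 * np.pi * 1e-7
m = 3.8e-19          # [kg], mass used in the text
chi_rho = -6.2e-9    # [m^3/kg]
rho = 3500.0         # [kg/m^3]  (assumed density of diamond)
eps = 5.7            # (assumed dielectric constant of diamond)
B = 1.0              # [T]       (O(1) T field, as in the text)

def U_cp(d):  # Eq. (6)
    return -(23 * hbar * c_light / (4 * np.pi)) * ((eps - 1) / (eps + 2)) \
           * (3 * m / (4 * np.pi * rho))**2 / d**7

def U_dd(d):  # Eq. (7)
    return 2 * chi_rho**2 * m**2 * B**2 / (4 * np.pi * mu0 * d**3)

d_star = brentq(lambda d: abs(U_cp(d)) - abs(U_dd(d)), 1e-7, 1e-4)
print(f"|U_cp| = |U_dd| at d ~ {d_star * 1e6:.1f} um")
for d in (2e-6, 6e-6, 20e-6):
    print(f"d = {d*1e6:4.0f} um: U_cp = {U_cp(d):.2e} J, U_dd = {U_dd(d):.2e} J")
```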
We plot the witness \(\mathcal{W}_{dd}\) with respect to the total decoherence and damping rate \(\Gamma=\Gamma_{d}+\Gamma_{n}\); see Fig. 2(b). To witness the entanglement, we require \(\mathcal{W}<0\)[29]. Fig. 2(a) shows how the entanglement phase from each particle-particle interaction varies with the separation between the two matter-wave interferometers, for both the external magnetic field-induced dipole-dipole entanglement and the CP-induced entanglement. We can see that the entanglement witness due to the magnetic dipole-dipole interaction dominates, as expected, at distances \(d>6\mu\)m; see Fig. 2(b). Below this, the CP-induced entanglement dominates, again as expected from the faster fall-off of the CP interaction strength with the separation distance \(d\). In Fig. 2(b), we have taken a range of decoherence rates and shown that the dominant entanglement is due to the induced magnetic dipole-dipole interaction. Here we have assumed the mass is similar in both interferometers, \(m=3.8\times 10^{-19}\) kg, and the largest superposition we generate is around \(\sim 1.1\mu\)m. Before we conclude, we should mention that the coherence time of the NV spin is one of the limiting factors, but spin coherence times are perpetually increasing (approaching 1 s [40], even 30 s [41]), although adapting them to our scenario remains an open challenge [50]. We will also have to make sure that the external temperature is below 1 K and the gas pressure is near \(\mathcal{O}(10^{-15})\) Pa. In our analysis we have assumed that the NV spin is not wobbling due to external torque; in reality, the NV spin will precess, and future analysis will have to take this into account [51]. The nano-diamond can also wobble in the presence of the external magnetic field [52]. However, this can be suppressed even before being released from the trap, for example, by using anisotropically shaped nanoparticles, which can be aligned with any given direction in space by using linearly polarized lasers or electric fields [53] or magnetic fields [54]. Besides, there will be vibrational excitations from the breathing mode of the nano-diamond, e.g. phonon vibration, which we will need to analyze for this experiment. The phonon vibration can be suppressed for any state manipulation which does not excite the phonons in resonance. We also note that it has already been shown that the internal degrees of freedom (phonons) do not pose a problem [55]. To conclude, we have identified a new source of entanglement for matter-wave interferometers, which relies on creating a macroscopic quantum superposition with a neutral diamagnetic nano-object in the presence of an external magnetic field. Figure 1: (a) A schematic representation of the manner in which two of these devices can be arranged, offset by a distance \(d\) in the \(x\) direction to enable the generation of entanglement between the two particles. (b) An actual space-time trajectory for a two-\(z\)-oscillation interferometer. (c) The same trajectory in the \(y-z\) plane together with the trapping magnetic field, while (d) shows the same trajectory in the \(x-y\) plane, for an interferometer distance \(d=20\)\(\mu\)m. All trajectories are for \(m=3.8\times 10^{-19}\) kg masses, a magnetic field gradient pulse time \(t_{p}\approx 160\)\(\mu\)s, and the particle started at \(y(t=0)=-1.11\)\(\mu\)m.
Here we have shown for the first time an explicit scheme to create a small-scale spatial superposition to test the known electromagnetically induced entanglements. The dominant effect arises from the external magnetic field-induced dipole-dipole interaction, which dominates for an external magnetic field of \(\mathcal{O}(1)\) T, a separation of \(\mathcal{O}(10)\mu\)m, and a mass of order \(10^{-19}\) kg. As we have shown in Figs. 2(a) and 2(b), the entanglement is generated via the magnetic field induced in the nano-diamond by the external field, which is essential to create the trapping potential and the superposition. The witness depends on the total decoherence rate and the separation distance \(d\). Such a system also allows the characterization of what would be background effects in gravitationally mediated entanglement experiments. Indeed, we have shown how optimizing the distance and trajectories can have a profound effect on the various entangling interactions. ## Acknowledgements R.J.M. is supported by the Australian Research Council (ARC) under the Centre of Excellence for Quantum Computation and Communication Technology (CE170100012). A.G. is supported in part by NSF grants PHY-2110524 and PHY-2111544, the Heising-Simons Foundation, the John Templeton Foundation, the W. M. Keck foundation, and ONR Grant N00014-18-1-2370. S.B. would like to acknowledge EPSRC grants (EP/N031105/1, EP/S000267/1 and EP/X009467/1) and grant ST/W006227/1.
2302.01805
Model-free inequality for data of Einstein-Podolsky-Rosen-Bohm experiments
We present a new inequality constraining correlations obtained when performing Einstein-Podolsky-Rosen-Bohm experiments. The proof does not rely on mathematical models that are imagined to have produced the data and is therefore ``model-free''. The new inequality contains the model-free version of the well-known Bell-CHSH inequality as a special case. A violation of the latter implies that not all the data pairs in four data sets can be reshuffled to create quadruples. This conclusion provides a new perspective on the implications of the violation of Bell-type inequalities by experimental data.
Hans De Raedt, Mikhail I. Katsnelson, Manpreet S. Jattana, Vrinda Mehta, Madita Willsch, Dennis Willsch, Kristel Michielsen, Fengping Jin
2023-02-03T15:17:07Z
http://arxiv.org/abs/2302.01805v1
# Model-free inequality for data of Einstein-Podolsky-Rosen-Bohm experiments ###### Abstract We present a new inequality constraining correlations obtained when performing Einstein-Podolsky-Rosen-Bohm experiments. The proof does not rely on mathematical models that are imagined to have produced the data and is therefore "model-free". The new inequality contains the model-free version of the well-known Bell-CHSH inequality as a special case. A violation of the latter implies that not all the data pairs in four data sets can be reshuffled to create quadruples. This conclusion provides a new perspective on the implications of the violation of Bell-type inequalities by experimental data. Einstein-Podolsky-Rosen-Bohm experiments, Bell's theorem, Bell-Clauser-Horn-Shimony-Holt inequalities, data analysis The Einstein-Podolsky-Rosen thought experiment was introduced to question the completeness of quantum theory [1]. Bohm proposed a modified version that employs spin-1/2 objects instead of coordinates and momenta of a two-particle system [2]. This modified version, which we refer to as the Einstein-Podolsky-Rosen-Bohm (EPRB) experiment, has been the subject of many experiments [3; 4; 5; 6; 7; 8; 9; 10], primarily focusing on ruling out the model for the EPRB experiment proposed by Bell [11; 12]. The essence of the EPRB thought experiment is shown and described in Fig. 1. Motivated by the work of Bell [12] and Clauser et al. [13; 14], many EPRB experiments [4; 5; 6; 7; 8; 9; 10] focus on demonstrating a violation of the Bell-CHSH inequality [12; 13]. To this end, one performs four EPRB experiments under conditions defined by the directions \((\mathbf{a},\mathbf{c})\), \((\mathbf{a},\mathbf{d})\), \((\mathbf{b},\mathbf{c})\), and \((\mathbf{b},\mathbf{d})\), yielding the data sets of pairs of discrete data \[\mathcal{D}_{s}=\{(A_{s,n},B_{s,n})\,|\,A_{s,n},B_{s,n}=\pm 1\,;\,n=1,\ldots,N_{s}\}\, \tag{1}\] where \(s=1,2,3,4\) labels the four alternative conditions and \(N_{s}\) is the number of pairs emitted by the source. Then one computes correlations according to \[C_{s}=\frac{1}{N}\sum_{n=1}^{N}A_{s,n}B_{s,n}\, \tag{2}\] where \(N=\min(N_{1},N_{2},N_{3},N_{4})\). In general, each correlation \(C_{s}\) may take any value in the interval \([-1,+1]\), independent of the values taken by the other correlations, yielding the trivial bound \(|C_{1}\mp C_{2}|+|C_{3}\pm C_{4}|\leq 4\). Without introducing a specific model for the process generating the data, we can derive a sharper, nontrivial bound by exploiting the commutative property of addition. Let \(K_{\text{max}}\) be the maximum number of quadruples \((x_{k},y_{k},z_{k},w_{k})\) that can be found by searching for permutations \(P\), \(\widetilde{P}\), \(\widehat{P}\), and \(P^{\prime}\) of \(\{1,2,\ldots,N\}\) such that for \(k=1,\ldots,K_{\text{max}}\), \[x_{k}=A_{1,P(k)}=A_{2,\widetilde{P}(k)}\,\qquad y_{k}=A_{3,\widehat{P}(k)}=A_{4,P^{\prime}(k)}\,\] \[z_{k}=B_{1,P(k)}=B_{3,\widehat{P}(k)}\,\qquad w_{k}=B_{2,\widetilde{P}(k)}=B_{4,P^{\prime}(k)}. \tag{3}\] These quadruples are found by rearranging/reshuffling the data in \(\mathcal{D}_{1}\), \(\mathcal{D}_{2}\), \(\mathcal{D}_{3}\), and \(\mathcal{D}_{4}\) without affecting the value of the correlations \(C_{1}\), \(C_{2}\), \(C_{3}\), and \(C_{4}\). **Theorem:** For any (real or computer or thought) EPRB experiment, the correlations Eq.
(2) computed from the four data sets \(\mathcal{D}_{1}\), \(\mathcal{D}_{2}\), \(\mathcal{D}_{3}\), and \(\mathcal{D}_{4}\), must satisfy the model-free inequalities \[\mathcal{C}_{\pm}=|C_{1}\mp C_{2}|+|C_{3}\pm C_{4}|\leq 4-2\Delta\, \tag{4}\] where \(0\leq\Delta=K_{\text{max}}/N\leq 1\). Figure 1: (color online) Conceptual representation of the Einstein-Podolsky-Rosen experiment [1] in the modified form proposed by Bohm [2]. A source produces pairs of particles. The particles of each pair carry opposite magnetic moments, implying that there is a correlation between the two magnetic moments of each pair leaving the source. The magnetic field gradients of the Stern-Gerlach magnets (cylinders) with their uniform magnetic field component along the directions of the unit vectors \(\mathbf{a}\) and \(\mathbf{c}\) divert each incoming particle into one of the two spatially separated directions labeled by \(+1\) and \(-1\). The pair \((\mathbf{a},\mathbf{c})\) represents the conditions, denoted by the subscript “1”, under which the discrete data \((A_{1,n},B_{1,n})\) is collected. The values of \(A_{1,n}\) and \(B_{1,n}\) correspond to the labels of the directions in which the particles have been diverted. The result of this experiment is the set of data pairs \(\mathcal{D}_{1}=\{(A_{1,1},B_{1,1}),\ldots,(A_{1,N_{1}},B_{1,N_{1}})\}\) where \(N_{1}\) denotes the number of pairs emitted by the source. The alternative conditions \((\mathbf{a},\mathbf{d})\), \((\mathbf{b},\mathbf{c})\), and \((\mathbf{b},\mathbf{d})\) are labeled by subscripts “2”, “3”, and “4”, respectively. **Proof:** We rewrite the correlations in Eq. (2) as \[C_{1}=\frac{1}{N}\sum_{n=1}^{N}A_{1,P(n)}B_{1,P(n)},\quad C_{2}=\frac{1}{N}\sum_{n=1}^{N}A_{2,\widetilde{P}(n)}B_{2,\widetilde{P}(n)}\,\] \[C_{3}=\frac{1}{N}\sum_{n=1}^{N}A_{3,\widehat{P}(n)}B_{3,\widehat{P}(n)},\quad C_{4}=\frac{1}{N}\sum_{n=1}^{N}A_{4,P^{\prime}(n)}B_{4,P^{\prime}(n)}. \tag{5}\] Obviously, reordering the terms of the sums does not change the value of the sums themselves. As \(|x_{k}|=|y_{k}|=|z_{k}|=|w_{k}|=1\), we (trivially) have \(|x_{k}z_{k}-x_{k}w_{k}|+|y_{k}z_{k}+y_{k}w_{k}|=2\). Splitting each of the sums in Eq. (5) into a sum over \(k=1,\dots,K_{\text{max}}\) and the sum of the remaining terms, application of the triangle inequality yields \[\mathcal{C}_{\pm}\leq\frac{2K_{\text{max}}}{N}\\ +\frac{1}{N}\sum_{n=K_{\text{max}}+1}^{N}\Big{(}\big{|}A_{1,P(n)}B_{1,P(n)}\big{|}+\big{|}A_{2,\widetilde{P}(n)}B_{2,\widetilde{P}(n)}\big{|}\\ +\big{|}A_{3,\widehat{P}(n)}B_{3,\widehat{P}(n)}\big{|}+\big{|}A_{4,P^{\prime}(n)}B_{4,P^{\prime}(n)}\big{|}\Big{)}\\ \leq\frac{2K_{\text{max}}}{N}+\frac{4(N-K_{\text{max}})}{N}=4-2\Delta. \tag{6}\] QED. In the same manner, one can prove a model-free version of the Clauser-Horne inequality [14; 15], applicable to the data of the EPRB experiments reported in Refs. [9] and [10]. The choice represented by Eq. (3) is motivated by the EPRB experiment, see Fig. 1. In general, other choices to define quadruples are possible and may yield different values of the maximum fraction of quadruples. We introduce the Bell-CHSH-like function \[S=\max_{p}\big{|}C_{p(1)}-C_{p(2)}+C_{p(3)}+C_{p(4)}\big{|}\, \tag{7}\] where the maximum is over all permutations \(p\) of \(\{1,2,3,4\}\). The maximum guarantees that we cover all possible expressions of the original Bell-CHSH function [12; 13; 16]. By application of the triangle inequality, it directly follows from Eq. (4) that, in the case of data collected by EPRB experiments, \[S\leq 4-2\Delta. \tag{8}\]
The symbol \(\Delta\) in Eqs. (4) and (8) quantifies structure in terms of quadruples which can be created by relabeling the pairs of data in the sets \(\mathcal{D}_{1}\), \(\mathcal{D}_{2}\), \(\mathcal{D}_{3}\), and \(\mathcal{D}_{4}\). If \(\Delta=0\), it is impossible to find a reshuffling that yields even one quadruple. If \(\Delta=1\), the four sets can be reshuffled such that they can be viewed as being generated from \(N\) quadruples. In the special case \(\Delta=1\), we recover the model-free version of the Bell-CHSH inequality \[S\leq 2. \tag{9}\] Traditionally, Eq. (9) is proven by assuming that the data can be modeled by a so-called "local realistic" (Bell) model [12; 13; 14; 15; 16]. However, that proof does not extend to the much more general case of the data generated by EPRB experiments, in contrast to the derivation of Eq. (9) given here, which holds for experimental data. The proof of the model-free inequalities Eqs. (4) and (8) only requires the existence of a maximum number of quadruples, the actual value of this maximum being irrelevant for the proof. However, it is instructive to write a computer program that uses pseudo-random numbers to generate the data sets \(\mathcal{D}_{1}\), \(\mathcal{D}_{2}\), \(\mathcal{D}_{3}\), and \(\mathcal{D}_{4}\) and finds the number of quadruples. We have implemented the computer program in Mathematica\({}^{\text{\textregistered}}\). Naively, finding the value of \(\Delta\) seems to require searching over all sets of permutations, which is prohibitively expensive. Fortunately, the problem of determining the fraction of quadruples \(\Delta\) can be cast into an integer linear programming problem which is readily solved by considering the associated linear programming problem with real-valued unknowns. In practice, we solve the latter by standard optimization techniques [17]. For all cases that we have studied, the solution of the linear programming problem takes integer values only. Then the solution of the linear programming problem is also the solution of the integer programming problem; a sketch of this computation is given below.
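The following is our own reconstruction of that linear program in Python (the paper used Mathematica), with \(N=10^{5}\) rather than \(10^{6}\) for speed. The constraint structure follows Eq. (3): a quadruple of type \((x,y,z,w)\) consumes one pair \((x,z)\) from \(\mathcal{D}_{1}\), \((x,w)\) from \(\mathcal{D}_{2}\), \((y,z)\) from \(\mathcal{D}_{3}\) and \((y,w)\) from \(\mathcal{D}_{4}\), and no observed pair may be used more often than it occurs.

```python
import numpy as np
from itertools import product
from scipy.optimize import linprog

rng = np.random.default_rng(1)
N = 100_000
# singlet-like statistics: pair (A,B) in set s drawn with probability (1 - c_s A B)/4
cs = np.array([1, -1, 1, 1]) / np.sqrt(2)

pairs = [(a, b) for a in (1, -1) for b in (1, -1)]
counts, corrs = [], []
for c in cs:
    p = np.array([(1 - c * a * b) / 4 for (a, b) in pairs])
    n = rng.multinomial(N, p)                 # pair counts for this data set
    counts.append(dict(zip(pairs, n)))
    corrs.append(sum(a * b * k for (a, b), k in zip(pairs, n)) / N)

C1, C2, C3, C4 = corrs
S = abs(C1 - C2) + abs(C3 + C4)

# LP: variables n(x,y,z,w) >= 0 for the 16 quadruple types; maximize their sum
quads = list(product((1, -1), repeat=4))      # (x, y, z, w)
marg = [lambda q: (q[0], q[2]), lambda q: (q[0], q[3]),   # D1: (x,z), D2: (x,w)
        lambda q: (q[1], q[2]), lambda q: (q[1], q[3])]   # D3: (y,z), D4: (y,w)
A_ub, b_ub = [], []
for s in range(4):
    for pr in pairs:
        A_ub.append([1.0 if marg[s](q) == pr else 0.0 for q in quads])
        b_ub.append(counts[s][pr])
res = linprog(c=-np.ones(16), A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None)] * 16, method="highs")
Delta = -res.fun / N                          # K_max / N
print(f"S = {S:.3f},  Delta = {Delta:.3f},  4 - 2*Delta = {4 - 2 * Delta:.3f}")
```

With these singlet-like data the output reproduces the behavior reported in the third case below: \(S\approx 2.83\approx 2\sqrt{2}\) and \(\Delta\) close to \(0.585\), so that Eq. (4) is nearly saturated.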
Generating four times one million independent pairs, we obtain \(\Delta\approx 0.585\), \(S=|C_{1}-C_{2}|+|C_{3}+C_{4}|\approx 2.83\) and \(4-2\Delta\approx 2.83\), demonstrating that the value of the quantum-theoretical upper bound \(2\sqrt{2}\) is reflected in the maximum fraction of quadruples that one can create by reshuffling the data. (A minimal simulation sketch of this case is given below.)
* In the case of Bell's model, slightly modified to comply with Malus' law, we have \(C_{1}=-\mathbf{a}\cdot\mathbf{c}/2\) and \(S\leq\sqrt{2}\). Choosing \(c_{1}=-c_{2}=c_{3}=c_{4}=1/(2\sqrt{2})\) and generating four times one million independent pairs, we obtain \(S=|C_{1}-C_{2}|+|C_{3}+C_{4}|\approx 1.42\) and \(4-2\Delta\approx 2.00\), as expected for Bell's local realistic model.

Except for the first case, the numerical values of \(\Delta\) quoted fluctuate a little if we repeat the \(N=1000000\) simulations with different random numbers. In the third case, the simulations suggest that inequality Eq. (4) can be saturated. Suppose that the (post-processed) data of an EPRB laboratory experiment yield \(S>2\), that is, the data violate inequality Eq. (9). From Eq. (8), it follows that \(\Delta\leq 2-S/2<1\) if \(S>2\). Therefore, if \(S>2\) not all the data in \(\mathcal{D}_{1}\), \(\mathcal{D}_{2}\), \(\mathcal{D}_{3}\), and \(\mathcal{D}_{4}\) can be reshuffled such that they form quadruples only. Indeed, the data produced by these experiments have to comply with Eq. (8), which follows from Eq. (4) holding for data, and certainly do not have to comply with the original Bell-CHSH inequality obtained from Bell's model. In other words, all EPRB experiments which have been performed and may be performed in the future and which only focus on demonstrating a violation of Eq. (9) merely provide evidence that not all contributions to the correlations can be reshuffled to form quadruples (yielding \(\Delta<1\)). These violations do not provide any clue about the nature of the physical processes that produce the data. More specifically, Eq. (4) holds for discrete data, irrespective of how the data sets \(\mathcal{D}_{1}\), \(\mathcal{D}_{2}\), \(\mathcal{D}_{3}\), and \(\mathcal{D}_{4}\) were obtained. Inequality (4) shows that correlations of discrete data violate the Bell-CHSH inequality Eq. (9) only if not all the pairs of data in Eq. (2) can be reshuffled to create quadruples. The proofs of Eq. (4) and Eq. (9) do not refer to notions such as "locality", "realism", "non-invasive measurements", "action at a distance", "free will", "superdeterminism", "(non)contextuality", "complementarity", etc. Logically speaking, a violation of Eq. (9) by experimental data cannot be used to argue about the relevance of one or more of these notions to the process that generated the experimental data. The existence of the divide between the realm of experimental EPRB data and mathematical models thereof is further supported by Fine's theorem [19; 20]. Of particular relevance to the present discussion is the part of the theorem that establishes the Bell-CHSH inequalities (plus compatibility) as being the necessary and sufficient conditions for the existence of a joint distribution of the four observables involved in these inequalities. This four-variable joint distribution returns the pair distributions describing the four EPRB experiments required to test for a violation of these inequalities. Fine's theorem holds in the realm of mathematical models only.
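The third numerical experiment above is easy to reproduce. The following is a minimal sketch (ours, not the authors' Mathematica\({}^{\text{\textregistered}}\) program; it assumes NumPy, and the seed and sample count are arbitrary choices): pairs \((A,B)\in\{\pm 1\}^{2}\) are drawn with frequency \((1-cAB)/4\), the four correlations are estimated, and the Bell-CHSH combination is formed. With the singlet-like choice \(c_{1}=-c_{2}=c_{3}=c_{4}=1/\sqrt{2}\) it yields \(S\approx 2\sqrt{2}\approx 2.83\).

```python
import numpy as np

rng = np.random.default_rng(0)
COMBOS = np.array([(+1, +1), (+1, -1), (-1, +1), (-1, -1)])

def correlation(c, n=1_000_000):
    """Estimate E[A*B] for pairs drawn with P(A, B) = (1 - c*A*B)/4, which equals -c."""
    probs = (1.0 - c * COMBOS[:, 0] * COMBOS[:, 1]) / 4.0
    idx = rng.choice(4, size=n, p=probs)
    return float(np.mean(COMBOS[idx, 0] * COMBOS[idx, 1]))

c = 1.0 / np.sqrt(2.0)                         # singlet-like choice
C1, C2, C3, C4 = (correlation(ci) for ci in (c, -c, c, c))
S = abs(C1 - C2) + abs(C3 + C4)
print(S)                                        # approximately 2*sqrt(2) = 2.83
```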
Only in the unattainable limit of an infinite number of measurements (that is, by leaving the realm of experimental data), and in the special case that the Bell-CHSH inequalities hold, may it be possible to prove the equivalence between the model-free inequality Eq. (9) and the Bell-CHSH inequality [12; 13; 14; 16]. A violation of the original (non-model-free) Bell-CHSH inequality \(S\leq 2\) may lead to a variety of conclusions about certain properties of the mathematical model for which this inequality has been derived. However, projecting these logically correct conclusions about the mathematical model, obtained within the context of that mathematical model, to the domain of EPRB laboratory experiments requires some care, as we now discuss. The first step in this projection is to feed real-world, discrete data into the original Bell-CHSH inequality \(S\leq 2\) derived, not for discrete data as we did by considering the case \(\Delta=1\) in Eq. (8), but rather in the context of some mathematical model, and to conclude that this inequality is violated. Considering the discrete data for the correlations as given, it may indeed be tempting to plug these rational numbers into an expression obtained from some mathematical model. However, then it is no longer clear what a violation actually means in terms of the mathematical model because the latter (possibly with the help of pseudo-random number generators) may not be able to produce these experimental data at all. The second step is to conclude from this violation that the mathematical model cannot produce the numerical values of the correlations, implying that the mathematical model simply does not apply and has to be replaced by a more adequate one, or that one or more premises underlying the mathematical model must be wrong. In the latter case, the final step is to project at least one of these wrong premises to properties of the world around us. The key question is then to what extent the premises or properties of a mathematical model can be transferred to those of the world around us. Based on the rigorous analysis presented in this paper, the authors' point of view is that in the case of laboratory EPRB experiments, they cannot. We are grateful to Bart De Raedt for suggesting that finding the maximum number of quadruples might be cast into an integer programming problem and for making pertinent comments. We thank Koen De Raedt for many discussions and continuous support. The work of M.I.K. was supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme, grant agreement 854843 FASTCORR. M.S.J. acknowledges support from the project OpenSuperQ (820363) of the EU Quantum Flagship. V.M., D.W. and M.W. acknowledge support from the project Jülich UNified Infrastructure for Quantum computing (JUNIQ) that has received funding from the German Federal Ministry of Education and Research (BMBF) and the Ministry of Culture and Science of the State of North Rhine-Westphalia.
2306.02623
Do-GOOD: Towards Distribution Shift Evaluation for Pre-Trained Visual Document Understanding Models
Numerous pre-training techniques for visual document understanding (VDU) have recently shown substantial improvements in performance across a wide range of document tasks. However, these pre-trained VDU models cannot guarantee continued success when the distribution of test data differs from the distribution of training data. In this paper, to investigate how robust existing pre-trained VDU models are to various distribution shifts, we first develop an out-of-distribution (OOD) benchmark termed Do-GOOD for the fine-Grained analysis on Document image-related tasks specifically. The Do-GOOD benchmark defines the underlying mechanisms that result in different distribution shifts and contains 9 OOD datasets covering 3 VDU related tasks, e.g., document information extraction, classification and question answering. We then evaluate the robustness and perform a fine-grained analysis of 5 latest VDU pre-trained models and 2 typical OOD generalization algorithms on these OOD datasets. Results from the experiments demonstrate that there is a significant performance gap between the in-distribution (ID) and OOD settings for document images, and that fine-grained analysis of distribution shifts can reveal the brittle nature of existing pre-trained VDU models and OOD generalization algorithms. The code and datasets for our Do-GOOD benchmark can be found at https://github.com/MAEHCM/Do-GOOD.
Jiabang He, Yi Hu, Lei Wang, Xing Xu, Ning Liu, Hui Liu, Heng Tao Shen
2023-06-05T06:50:42Z
http://arxiv.org/abs/2306.02623v1
# Do-GOOD: Towards Distribution Shift Evaluation for Pre-Trained Visual Document Understanding Models

###### Abstract.

Numerous pre-training techniques for visual document understanding (VDU) have recently shown substantial improvements in performance across a wide range of document tasks. However, these pre-trained VDU models cannot guarantee continued success when the distribution of test data differs from the distribution of training data. In this paper, to investigate how robust existing pre-trained VDU models are to various distribution shifts, we first develop an **out-of-distribution** (OOD) benchmark termed Do-GOOD for the fine-Grained analysis on **D**ocument image-related tasks specifically. The Do-GOOD benchmark defines the underlying mechanisms that result in different distribution shifts and contains 9 OOD datasets covering 3 VDU related tasks, _e.g._, document information extraction, classification and question answering. We then evaluate the robustness and perform a fine-grained analysis of 5 latest VDU pre-trained models and 2 typical OOD generalization algorithms on these OOD datasets. Results from the experiments demonstrate that there is a significant performance gap between the in-distribution (ID) and OOD settings for document images, and that fine-grained analysis of distribution shifts can reveal the brittle nature of existing pre-trained VDU models and OOD generalization algorithms. The code and datasets for our Do-GOOD benchmark can be found at [https://github.com/MAEHCM/Do-GOOD](https://github.com/MAEHCM/Do-GOOD).

_Keywords:_ Visual Document Understanding, Out-of-distribution, Pre-trained Models, Document Information Extraction

Document images have wide-ranging use cases in real-world scenarios, such as document image classification [12; 13; 48], information extraction from document images [18; 31; 36], and document visual question answering [32]. Recently, numerous pre-training techniques concerning document image understanding have been proposed and shown to be effective for various document tasks [9; 17; 20; 26; 28; 44; 48]. Despite the encouraging results achieved by these models, it cannot be guaranteed that models designed under the same training and test data distribution would continue to perform well when the distribution of test data differs from the training data distribution [3; 24]. _However, most document datasets [12; 18; 32; 36] are designed following the i.i.d. assumption, with the training and test data from the same distribution._

**Motivation**. To enable the models for document classification to have the ability to handle _out-of-distribution_ (OOD) document images, Larson et al. [24] present a new OOD testbed in terms of a widely-used document classification benchmark dataset, namely RVL-CDIP. This RVL-CDIP OOD benchmark is only used to develop and evaluate the robustness of methods for document image classification, which just need the models to have the capacity to model coarse-grained information over document images. Although the RVL-CDIP OOD benchmark reveals that image information is quite important for document classification, image information _plays a relatively minor role in other document image tasks_, such as information extraction [39; 53].
As illustrated in Figure 1, taking the NER task on the latest pre-trained visual document understanding (VDU) model LayoutLMv3 [17] for example, when different proportions of an input image \(x\) are masked as blank, the F1 score of the LayoutLMv3 model prediction is basically the same. It indicates that the prediction of the LayoutLMv3 model relies more on the text and layout information than on the visual cues. Besides, document images naturally possess three distinct features, including image, text, and layout information. Tasks such as information extraction from document images and document visual question answering necessitate a fine-grained understanding of complicated interactions over image, text, and layout information [18; 32; 36]. On the other hand, models designed based on these three types of features require image, text, and layout modules to carry different perspectives of input information for a document image [17; 48; 50]. The uniqueness of document image data calls for the construction of document image specific OOD benchmarks with various distribution shifts. This naturally raises the following question: _How robust are existing pre-trained VDU models to fine-grained distribution shifts occurring on document image tasks?_

**Contribution**. To answer the above question regarding the robust estimation of the VDU models' capability in the document image OOD scenario, in this paper, we aim to develop a systematic document image OOD benchmark, namely Do-GOOD. To design Do-GOOD, we adhere to the following criteria. In particular, we expect that (1) a large distribution gap between training and test data can result in a substantial drop in model performance; (2) fine-grained analysis of distribution shifts can expose the brittle nature of existing models; (3) designed benchmark datasets should be possibly solvable, easily scalable, and human-readable. To meet criterion (2), as shown in Figure 2, we divide distribution shifts into three categories of different characteristics, i.e., image, text, and layout distribution shifts. The distribution shifts are used to examine the partiality of VDU models on text, image, and layout information, which could compromise the robustness of VDU models. For image shift, we first disentangle the content (e.g., text on form images) from the background (e.g., table borders on form images) and then replace the background with a natural image from MSCOCO. For text shift, we employ common text attacks, such as BERT-Attack [27] and Word Swap [34; 35], to simulate a more realistic scenario where input document images may contain problematic text caused by OCR errors. We have two strategies to induce layout shifts. The first involves merging smaller bounding boxes to form a larger box. Another option is to move a particular box to a different location on the document image. These carefully designed strategies from image, text, and layout perspectives can automatically produce OOD testbeds having substantially different distributions from the training distributions, thus meeting criteria (1) and (2).
Here is a summary of our main contributions: (1) We provide a fine-grained analysis of various distribution shifts in document images from image, text, and layout perspectives; (2) To generate the OOD benchmark that meets the aforementioned three criteria, we introduce a suite of automatic strategies to generate OOD data; (3) We evaluate and compare 5 state-of-the-art pre-trained VDU models and 2 representative OOD algorithms in the generated OOD testbeds (i.e., Do-GOOD) across different document image tasks. We hope that the proposed Do-GOOD benchmark, the empirical study, and our in-depth analysis will benefit future research to improve the robustness of pre-trained VDU models.

Figure 1. Illustration of the different importance of the input image, text, and layout embedding for the LayoutLMv3 [17] model on the document Named Entity Recognition (NER) task. Notably, though the image \(x\) is masked with different proportions, i.e. 50%, 75% and 100%, the model prediction (F1 score) just slightly changes.

## 2. Related Work

**Visual Document Understanding.** Visual document classification [12], visual document information extraction [18; 36], and visual question answering on documents [32] are among the core tasks of automated document processing. For visual document classification, early works model visual information by various CNN-based methods [13; 19]. Based on the output of OCR, RNN-based [41] and Transformer-based [31] models predict the label for the text. In DocVQA, an LSTM encoder is used to model textual information and CNN encoders are used to model visual information to answer questions about document images [32]. As a result of the recent success of large-scale pre-training in NLP, such as BERT [4] and RoBERTa [30], most methods take pre-train-and-fine-tune schemes for addressing downstream tasks together [7; 9; 17; 20; 25; 26; 28; 29; 38; 44; 47; 48; 50]. The majority of current state-of-the-art models separate scanned document images into text, vision, and layout attributes and design modules to process them individually or together. For example, most methods begin by obtaining text tokens and layouts from OCR tools and then feed the OCR-ed text into pre-trained language models to model the text information. For extracting region features, some works use object detectors [8; 28; 38; 48], while others use the vision transformer [17; 20; 21; 26]. LayoutLM [48] and its followings [49; 50] employ two-dimensional positional vectors for the layout information and fuse their transformed vectors with text embeddings for the multimodal pre-trained model. After collecting text, image, and layout features, most methods leverage a multimodal fusion module to encourage modeling interactions between them. For some exceptions, Donut [20] conducts inference in an end-to-end fashion without OCR processing. LayoutLMv3 [17] makes use of patch-level embeddings for text and image patches for alignment on document images.

**Out-of-Distribution Benchmarks**. We briefly review benchmarks for distribution shifts in this section. Distribution shifts have been a long-standing problem in the machine learning community [33; 42]. Recently, increasing research has shifted its attention from achieving the highest performance under in-distribution (ID) settings towards assessing models' robustness and generalization capacities [1; 2; 6; 23; 37; 40; 52]. To this end, various OOD benchmarks have been created to encourage the building of more robust models [10; 11; 14; 22; 51].
WILDS [22] creates a curated benchmark of 10 datasets ranging from the categorization of animal species to code completion. This benchmark requires curated datasets that express large distribution shifts, are relevant in the real world, and can potentially be solved. The GOOD benchmark [10] is designed for graph OOD method evaluations based on two shifts. Beyond these, Wiles et al. [45] provide a holistic analysis of current SOTA methods by evaluating multiple distinct methods across both synthetic and real-world datasets. There are also OOD benchmarks for document image tasks. Larson et al. [24] establish an OOD testbed comprised of RVL-CDIP-N and RVL-CDIP-O. RVL-CDIP-N consists of in-domain documents sampled from a different distribution than RVL-CDIP. RVL-CDIP-O comprises out-of-domain document images that do not fall into RVL-CDIP categories. The LastDoc4000 [3] is designed for situations in which input document images may contain unknown layouts and keys caused by OCR errors. However, the two existing benchmarks either ignore layout or text distribution shifts or only focus on document IE tasks. In contrast to them, Do-GOOD considers distribution shifts of text, vision, and layout across multiple common document image tasks from image-centric to text-centric perspectives.

Figure 2: In the Do-GOOD benchmark, each document image is extracted from the training domain. We study five distribution shifts acting on the three modalities respectively to generate the test domain. The five distribution shifts include: two image distribution shifts with (a) the distorted image background or (b) the natural image background; (c) a text distribution shift with BERT-Attack and Word Swap; and two layout distribution shifts obtained by (d) merging and (e) moving the layouts.

## 3. Do-GOOD Benchmark Design

Existing datasets, such as FUNSD (FUNSD, 2017), prepare training and test samples under the i.i.d. assumption. Given a data distribution \(p_{\text{train}}\) of training inputs \(x\), the goal of a document image model \(f\) is to minimize the risk \(R\) as follows: \[R(f)=\mathbb{E}_{(\mathbf{x},y^{i})\sim p_{\text{train}}}\left[\mathcal{L}\left(y^{i},f(\mathbf{x})\right)\right], \tag{1}\] where \(\mathcal{L}\) is the loss function for a particular task. Due to confounding factors, such as selection bias in the data collection process and random data splits, it is difficult for train and test data to follow the same data distribution in practice (i.e., \(p_{\text{train}}\neq p_{\text{test}}\)). As training and test data are distributed differently, models trained on training data are expected to generalize well to test data. This calls for carefully designed OOD benchmarks to accurately assess models' generalization abilities. Taking inspiration from the recent fine-grained analysis of distribution shifts literature (Zhu et al., 2017), we provide a fine-grained analysis of distribution shifts on document images by dividing them into attributes related to image, text, and layout to investigate why a model \(f\) trained on \(p_{\text{train}}\) should generalize to \(p_{\text{test}}\). Specifically, a document image example is considered to be composed of the input \(x\), label \(y^{i}\), and its three attributes \(\{y^{\text{image}},y^{\text{text}},y^{\text{layout}}\}\). As a convenience, we use \(y^{1:K}\) to denote the label and attributes \(\{y^{1},y^{\text{image}},y^{\text{text}},y^{\text{layout}}\}\).
Then, we are able to formalize different distribution shifts associated with image, text, and layout for the generation of true data as follows: \[p\left(y^{1:K},\mathbf{x}\right)=p\left(y^{1:K}\right)p\left(\mathbf{x}\mid y^{1:K}\right) \tag{2}\] In this way, the data distribution can be expressed as the product of the marginal distributions of the decomposed attributes, which enables us to perform fine-grained analyses of various distribution shifts on document images. With the help of a latent variable model, the formalization can be written as follows: \[p\left(y^{1:K},\mathbf{x}\right)=p\left(y^{1:K}\right)\int p(\mathbf{x}\mid z)p\left(z\mid y^{1:K}\right)dz, \tag{3}\] where \(z\) is the latent vector. Through the above equation, different attributes \(y^{1:K}\) can be used to affect the latent variables \(z\), thereby affecting the generation of data \(\mathbf{x}\).

### Image-Specific Distribution Shift

The natural and the distorted image background are two background variants for image distribution shifts. Formally, \(y^{\text{image}}\) defines the image with the finite set \(\mathcal{A}=\{a_{\text{original}},a_{\text{natural}},a_{\text{distorted}}\}\). For training, the attribute \(y^{\text{image}}\) is \(a_{\text{original}}\). During testing on out-of-distribution data with natural image backgrounds, we set the attribute \(y^{\text{image}}=a_{\text{natural}}\) and then obtain the marginal distribution over this attribute, \(p_{\text{natural}}(y^{1:K})\), which is used to induce the joint distribution over latent factors and the attribute _natural_: \(p_{\text{natural}}(z,y^{1:K})=p(z\mid y^{1:K})\,p_{\text{natural}}(y^{1:K})\). Subsequently, we can get input data for testing with the joint distribution \(p_{\text{natural}}(\mathbf{x},y^{1:K})=\int p(\mathbf{x}\mid z)\,p_{\text{natural}}(z,y^{1:K})\,dz\). On the other hand, the out-of-distribution test set with the background of distorted images, \(p_{\text{distorted}}(\mathbf{x},y^{1:K})\), can be derived in a similar way to the generation of test data with natural images. Practically, the new OOD benchmark with the joint distribution \(p_{\text{natural}}(\mathbf{x},y^{1:K})\) can be obtained through a two-stage pipeline: (1) **Disentangling text content from background**. We locate the text content based on the position information provided by an OCR tool and extract pixels of the text content from a document image. The rest of the pixels are viewed as background pixels; (2) **Replacing the original background with natural images**. We randomly select an image from MSCOCO and resize it to match the size of the document image. To compose a new OOD sample, the extracted text content is placed on the sampled natural image (Figure 2 (b)); a minimal code sketch of this pipeline is given at the end of this subsection. Document images may be distorted in real-world scenarios due to uncontrollable physical deformations, uneven illuminations, and various camera angles. To simulate this realistic environment, the OOD benchmark with the joint distribution \(p_{\text{distorted}}(\mathbf{x},y^{1:K})\) is introduced. Inspired by (Bordes and Rafter, 2017), we directly employ the well-pretrained Doc-GeoNet to generate the OOD distorted images (Figure 2 (a)).
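As a concrete illustration of stages (1) and (2) above, here is a rough sketch of the background-replacement pipeline. It is our reconstruction, not the released Do-GOOD code; it assumes Pillow and NumPy, OCR word boxes given as (x1, y1, x2, y2) pixel coordinates, and an illustrative ink threshold for deciding which pixels inside a box count as text.

```python
from PIL import Image
import numpy as np

def replace_background(doc_img: Image.Image, natural_img: Image.Image,
                       word_boxes, ink_threshold: int = 128) -> Image.Image:
    """Keep dark 'ink' pixels inside OCR boxes; replace everything else
    with a natural image resized to the document's size."""
    doc = np.array(doc_img.convert("L"))                        # grayscale document
    out = np.array(natural_img.convert("RGB").resize(doc_img.size))
    for (x1, y1, x2, y2) in word_boxes:
        region = doc[y1:y2, x1:x2]
        ink = region < ink_threshold                            # dark pixels = text content
        out[y1:y2, x1:x2][ink] = 0                              # paste text (as black) onto background
    return Image.fromarray(out)
```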
### Text-Specific Distribution Shift

To simulate a realistic scenario where input document images may contain problematic text caused by OCR errors, we employ two text attack strategies for text distribution shifts (Figure 2 (c)): (1) BERT-Attack; (2) Word Swap. Formally, \(y^{\text{text}}\) defines the text with the finite set \(\mathcal{A}=\{a_{\text{original}},a_{\text{bert}},a_{\text{swap}}\}\). For training, the attribute \(y^{\text{text}}\) is \(a_{\text{original}}\). Referring to the analysis of the image-specific OOD benchmarks, the out-of-distribution test data with BERT-Attack, \(p_{\text{bert}}(\mathbf{x},y^{1:K})\), and Word Swap, \(p_{\text{swap}}(\mathbf{x},y^{1:K})\), can be obtained in a similar way. In practice, BERT-Attack, based on pre-trained masked language models exemplified by BERT, is used to produce OOD samples. The advantage of BERT-Attack is that it can generate similar but unseen words while guaranteeing fluency and semantic preservation in the generated samples. For Word Swap, we generate OOD samples in the following ways: (1) _Word Swap by embedding_: using embedding vectors to find similar words to swap in; (2) _Word Swap by homoglyph_: replacing words with ones nearly identical in appearance yet different in meaning; (3) _Word Swap for numbers_: replacing a number with another number, since numbers play an influential role in document images; (4) _Random character deletion_: deleting certain characters in words, such as "houses" \(\longrightarrow\) "hoses".

### Layout-Specific Distribution Shift

There are two layout manipulations for layout distribution shifts: Merge and Move. The Merge manipulation is designed to investigate the impact of changing layout information from a fine-grained level to a coarse-grained level while maintaining image and text information. The Move operation is used to investigate the effect of neighboring information on the content of a particular bounding box by moving the content to a distinct location. The bounding box is an enclosed area surrounded by lines. Formally, \(y^{\text{layout}}\) defines the layout with the finite set \(\mathcal{A}=\{a_{\text{original}},a_{\text{merge}},a_{\text{move}}\}\). For training, the attribute \(y^{\text{layout}}\) is \(a_{\text{original}}\). Referring to the analysis of the image-specific and text-specific OOD benchmarks, the OOD test data with the Merge manipulation, \(p_{\text{merge}}(\mathbf{x},y^{1:K})\), and the OOD test data with the Move manipulation, \(p_{\text{move}}(\mathbf{x},y^{1:K})\), can be obtained in a similar way. The pseudocode of merging bounding boxes is shown in Algorithm 1. To construct the OOD benchmark \(p_{\text{merge}}(\mathbf{x},y^{1:K})\), we perform the process described in Algorithm 1 for each image. The Merge manipulation begins with initializing an empty set \(\mathbf{S}\) which saves all the bounding boxes that have been traversed. We then traverse each bounding box and get the current bounding box \(\mathbf{B_{i}}\). Then we obtain \(\mathbf{B_{i}^{\prime}}\) by dilating \(\mathbf{B_{i}}\) a little using predefined horizontal and vertical dilation distances \(\lambda_{1}\), \(\lambda_{2}\). After that, we check whether this dilated bounding box intersects with another bounding box in the set \(\mathbf{S}\). If there is an intersection, we get \(\mathbf{M_{i}}\) by merging the two bounding boxes; otherwise, we skip this operation. We then add \(\mathbf{M_{i}}\) or \(\mathbf{B_{i}^{\prime}}\) to the set \(\mathbf{S}\). After all bounding boxes have been traversed, the merging process is complete (Figure 2 (d)) and we get merged bounding boxes \(\mathbf{M_{1}},\mathbf{M_{2}},\ldots,\mathbf{M_{k}}\).
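A minimal sketch of the Merge manipulation just described (our reconstruction, not the authors' implementation): boxes are assumed to be axis-aligned (x1, y1, x2, y2) tuples, and a dilated box is merged with the first already-traversed box it intersects, which slightly simplifies the bookkeeping of Algorithm 1.

```python
from typing import List, Tuple

Box = Tuple[int, int, int, int]  # (x1, y1, x2, y2)

def intersects(a: Box, b: Box) -> bool:
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

def merge_boxes(boxes: List[Box], lam1: int = 5, lam2: int = 5) -> List[Box]:
    merged: List[Box] = []                   # the set S of traversed boxes
    for (x1, y1, x2, y2) in boxes:
        dilated = (x1 - lam1, y1 - lam2, x2 + lam1, y2 + lam2)
        for i, m in enumerate(merged):
            if intersects(dilated, m):
                # merge: take the union bounding box of the two
                merged[i] = (min(m[0], dilated[0]), min(m[1], dilated[1]),
                             max(m[2], dilated[2]), max(m[3], dilated[3]))
                break
        else:                                # no intersection: keep as a new box
            merged.append(dilated)
    return merged
```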
For the construction of the OOD benchmark \(p_{\text{move}}(\mathbf{x},y^{1:K})\), we select a bounding box with strong textual semantics and then move the text content to another location without textual semantics (Figure 2 (e)). A semantic entity is considered to have strong text semantics if its prediction results remain unchanged after its corresponding layout information has been changed ten times.

```
Input: bounding boxes B_1, B_2, ..., B_n of an image; the collection S of
       bounding boxes that have been traversed; horizontal and vertical
       dilation distances lambda_1, lambda_2.
1: Initialize S to the empty set.
2: repeat
3:    Get the i-th bounding box B_i = [x1_i, y1_i, x2_i, y2_i].
4:    Dilate B_i with the horizontal and vertical dilation distances as
      B'_i = [x1_i - lambda_1, y1_i - lambda_2, x2_i + lambda_1, y2_i + lambda_2].
5:    if B'_i intersects with a bounding box in the region S_i of S then
6:        Merge the two bounding boxes in S_i.
7:    end if
8:    Mark the area where B'_i is located in S_i.
9: until all the bounding boxes have been traversed.
Output: merged bounding boxes M_1, M_2, ..., M_k.
```
**Algorithm 1** The procedure of merging bounding boxes.

## 4. Do-GOOD Datasets

The purpose of this section is to introduce the datasets used in our proposed Do-GOOD benchmark. We first perform a preliminary study to analyze the distribution shift options for a specific VDU task. Then, we elaborate on OOD datasets across different VDU tasks. Finally, 9 datasets are constructed across 3 VDU tasks.

### Preliminary Study

We conducted a preliminary study in order to investigate the effect of image, text, and layout information across different datasets and VDU tasks. Thus, for a certain dataset and task, we can determine which distribution shift should be chosen to develop the OOD test dataset. During implementation, we use LayoutLMv3\({}_{\text{BASE}}\) (Lian et al., 2017) as the base model and isolate the effects of image, layout, and text information by removing the corresponding input embeddings for inference. Text is necessary for all tasks. Thus, to assess the effect of text information, we retain input text and remove image and layout embeddings.

Figure 3. Samples of the distribution shift examples: (a) FUNSD-R containing a real-world OOD dataset variant of FUNSD, (b) FUNSD-H denoting the human-intervened OOD dataset variant of FUNSD, (c) RVL-CDIP-\(\mathbf{I_{1}}\) including samples with the natural image background, (d) RVL-CDIP-\(\mathbf{I_{2}}\) including samples with the distorted image background, and (e) RVL-CDIP-L containing samples with merged bounding boxes. More samples can be found at [https://github.com/MAEHCM](https://github.com/MAEHCM).

The overall results are shown in Table 1. While the performance of LayoutLMv3 without (denoted as "w/o") image embeddings on RVL-CDIP drops substantially, the performance of information extraction slightly decreases and the performance of QA tasks may even increase. It indicates that document image classification is largely affected by image information. We observe that without layout information, the performance of LayoutLMv3 drops by a significant margin for information extraction and classification. However, model performance on DocVQA is not affected.
We assume that most questions in DocVQA are dependent on textual content to predict the answers. The performance of LayoutLMv3 only with text embeddings is slightly better than that with all input embeddings, which strongly supports our assumption about DocVQA. Besides, using only text embeddings, LayoutLMv3 performs very poorly on information extraction and classification tasks. All of these analyses motivate us to develop image-specific OOD datasets for document image classification, text-specific OOD datasets for all tasks, and layout-specific OOD datasets for document image classification and information extraction.

### Document Information Extraction Task

For the visual document information extraction task, we mainly generate OOD datasets based on FUNSD (Kumar et al., 2017). FUNSD is a dataset sampled from the RVL-CDIP dataset (Kumar et al., 2017) about noisy scanned form understanding, consisting of 199 documents (149 for training and 50 for testing) and 9,743 semantic entities. The task of FUNSD is sequential labeling, which aims to assign labels to words. **FUNSD-L** is a variant of FUNSD that includes the OOD samples produced through strategies based on the two layout-specific distribution shifts, Merge and Move. As described in Section 3, the Move operation is based on semantic strength determined by the model itself. Specifically, we randomly shuffle the bounding boxes within a document image and employ the fine-tuned model to infer 30 times. For textual content, if the model prediction has fewer errors, its semantic strength is greater. **FUNSD-T** is a variant of FUNSD that contains the OOD samples generated by the two text attack methods described in Section 3; a small code sketch of the Word Swap perturbations follows this subsection. In fact, we also have OOD samples obtained through observing and selecting from real-world datasets for the visual document information extraction task. Figure 3 (a) and (b) show examples. Specifically, **FUNSD-R** is one real-world OOD dataset variant of FUNSD. The FUNSD-R dataset is used to show the performance gap of VDU models on real-world OOD datasets and generated OOD datasets. First, we sample data examples from the large-scale document classification dataset RVL-CDIP, and then observe and select data examples that differ from the distribution of FUNSD. After that, these selected examples are manually annotated. FUNSD-R contains 50 document images in total. Further, we modify the data examples in FUNSD in order to generate a human-intervened OOD dataset variant of FUNSD, named **FUNSD-H**. Practically, we move some weak textual entities to construct layout and image shifts, or add a few semantically linked texts around strong semantic content to construct all 3 kinds of shifts. In the end, we obtain 50 OOD samples. Despite the fact that it is expensive and time-consuming to construct OOD datasets such as FUNSD-R and FUNSD-H, these two OOD datasets inspired us to develop a suite of OOD benchmark datasets that can be generated automatically for a wide range of VDU tasks.
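For concreteness, here is a rough sketch of three of the Word Swap perturbations of Section 3.2 that underlie FUNSD-T and the other text-shift variants. This is our illustration, not the benchmark's code; the homoglyph map is a small, hypothetical sample of Cyrillic look-alikes.

```python
import random

HOMOGLYPHS = {"a": "\u0430", "e": "\u0435", "o": "\u043e", "p": "\u0440"}

def swap_homoglyph(word: str) -> str:
    """Replace Latin characters with visually near-identical homoglyphs."""
    return "".join(HOMOGLYPHS.get(ch, ch) for ch in word)

def swap_number(word: str) -> str:
    """Replace every digit with a random digit (numbers matter in documents)."""
    return "".join(random.choice("0123456789") if ch.isdigit() else ch for ch in word)

def delete_char(word: str) -> str:
    """Delete one random character, e.g. 'houses' -> 'hoses'."""
    if len(word) < 2:
        return word
    i = random.randrange(len(word))
    return word[:i] + word[i + 1:]
```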
The image-specific OOD variant, **RVL-CDIP-I**, is generated through natural **RVL-CDIP-I\({}_{1}\)**and distorted **RVL-CDIP-I\({}_{2}\)** image distribution shifts. Examples are illustrated in Figure 3. ### Document Visual Question Answering Task DocVQA (Zhu et al., 2017) is a dataset for predicting the answer given a document image and a question. To accomplish this, models need to understand the content of documents and learn to reason over them. The original DocVQA dataset consists of 10,194/1,286/1,287 images with 39,463/5,349/5,188 questions for training/validation/test, respectively. **DocVQA-T** is the OOD dataset variant of DocVQA. In order to construct DocVQA-T, we first collect text, questions, and answers from OCR results and the Microsoft READ API. Then, we obtain the OOD samples which are generated by the two text attack methods. ## 5. Experiment ### Evaluation on state-of-the-art VDU Models **VDU Models.** Larger models are generally more robust to OOD data (Kumar et al., 2017). We thus evaluate the robustness of fine-tuning the popular pre-trained VDU models (large models) for downstream tasks on our Do-GOOD benchmark. The state-of-the-art large models include (1) Pre-trained models with text and layout modalities: BROS (Kumar et al., 2017) and LiLT (Li et al., 2017); (2) Pre-trained models with text, layout and image modalities: LayoutLMv1 (Li et al., 2017), LayoutLMv2 (Li et al., 2017), and LayoutLMv3 (Li et al., 2017). **Implementation Details.** We fine-tune the VDU models on the ID datasets while selecting the best checkpoints based on the performance of ID and OOD validation sets. The evaluation metrics we use are the same as those used in the original dataset paper, such as "F1" for FUNSD and its OOD variants, "Accuracy" for RVL-CDIP and its OOD variants, and "ANLS" for DocVQA and its variants. All pre-trained models are based on Hugging Face (Li et al., 2017). For the visual document information extraction task, the learning rate is \begin{table} \begin{tabular}{l c c c} \hline \hline \multirow{2}{*}{**Model**} & **FUNSD** & **RVL-CDIP** & **DocVQA** \\ & **F1\(\uparrow\)** & **Accuracy\(\uparrow\)** & **ANLS\(\uparrow\)** \\ \hline LayoutLMv3BASE & 90.29 & 95.44 & 78.76 \\ w/o Image & 90.18 & 57.07 & 78.82 \\ w/o Layout & 29.87 & 77.05 & 78.76 \\ w/ Text & 28.65 & 18.07 & 78.82 \\ \hline \hline \end{tabular} \end{table} Table 1. Overall results of the preliminary study on FUNSD, RVL-CDIP, and DocVQA datasets. set to 3e-5, and the training epochs are set to 70. Since the original RVL-CDIP corpus did not provide text information, we used the Tesseract 3 OCR engine to extract words and their positions. The learning rate was set to 1e-6, and the training epoch was 30 rounds. For Doc VQA tasks, the learning rate is set to 2e-5, and the epoch is 40 rounds. All input images have a resolution of \(224\times 224\) pixels, and the batch in training is set to 4, while the batch in testing is set to 1. **Main Results.** Based on the criteria outlined in Section 1, DoGOOD is designed to achieve a large distribution gap between training and test data and a substantial performance drop from ID to OOD settings. To verify whether the proposed OOD benchmark meets the criteria, we conduct experiments fine-tuning pre-trained VDU models on the original ID downstream datasets and testing on both the ID and OOD datasets. Table 2 reports the overall results _w.r.t_ comparison of ID and OOD performance of the existing models on the FUNSD, RVL-CDIP and DocVQA datasets. 
According to the differences between ID and OOD for each distribution shift across all VDU tasks, there is a substantial and consistent performance gap between the ID and OOD settings. In most cases, LayoutLMv3 can achieve the best performance, including ID setting across all datasets, OOD\({}_{\text{T}}\) and OOD\({}_{\text{I}}\) settings of FUNSD and RVL-CDIP, and the OOD\({}_{\text{T}}\) setting of DocVQA, indicating LayoutLMv3 is one of the most robust models on VDU tasks. These motivate us to use LayoutLMv3 as our base model for comparing common OOD algorithms on Do-GOOD benchmark. BROS performs well in 4 OOD settings on FUNSD. The possible reason is that pre-training in BROS uses the relative position of the encoded text and a region masking strategy as the objective. Based on the success of LayoutLMv3 and BROS, we assume that fine-grained modeling such as patch-level or region-level modeling may be very useful for improving the robustness of models in OOD environments. **Results on FUNSD-L Dataset.** Furthermore, to explore the robustness of each model under the layout distribution shift condition, we evaluate the performance of each model in each label category on FUNSD-L. As shown in Table 3, we observe that LayoutLMv3 achieves the best performance on the FUNSD-L dataset. BROS obtains the worst Other Error score. LayoutLM [(48)] and LayoutLMv2[(50)] have higher Other Error, indicating that the prediction of weak semantic areas can be easily affected by the layout of strong semantic areas in these models. LayoutLM also has higher QA Error, indicating that the prediction of strong semantic entities may still be affected by the surrounding entities. The low header accuracy of all models indicates that the prediction of the current model for the headers largely depends on the location of the entities. **Results on FUNSD-T Dataset.** Moreover, to investigate the robustness of each model under the text distribution shift condition, we evaluate the performance of the model against various text attacks. Table 4 shows the performance of each model under various attacks on FUNSD-T. We observe that LayoutLMv3[(17)] achieves the best performance on 5 out of 6 text distribution shifts, which indicates that LayoutLMv3 is more robust than other models on text attacks. The performance gap between LayoutLMv3 and other VDU models on homoglyph is about 20 to 30 F1 score, which indicates that LayoutLMv3 model is more robust than other models when dealing with the semantic OOD caused by OCR error. ### Evaluation on Typical OOD Algorithms We further compare the representative OOD algorithms on all OOD datasets across three downstream tasks. Based on the experiment results, we briefly analyze the effect of different OOD methods. All experimental results are based on LayoutLMv3\({}_{\text{BASE}}\)[(17)]. **Baseline Methods.** We use empirical risk minimization (ERM) and two OOD algorithms as our baselines. ERM is a systematic process of identifying, assessing, and managing risks that face an organization. The goal of ERM is to maximize the potential of positive events and minimize the impact of negative ones. The 2 OOD methods are Deep Coral [(43)] and Mixup [(52)]. Deep Coral achieves domain adaptive effects by aligning second-order statistics between the source and target domains. In our experiment, we only add this method at the last layer of LayoutLMv3\({}_{\text{BASE}}\) model and set \(\lambda\) equal to 1. 
Mixup [(52)] achieves data augmentation without excessive overhead by interpolating input features and labels. In our implementation, we simultaneously interpolate the input features, including the text embedding, layout embedding, and image embedding, and then set \(\alpha\) and \(\beta\) equal to 0.4 for the Beta distribution.
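A minimal sketch of this Mixup variant (our reconstruction; the embedding-dictionary interface is hypothetical, not the LayoutLMv3 API): the same \(\lambda\sim\mathrm{Beta}(0.4,0.4)\) interpolates the text, layout, and image embeddings of two examples, and the loss mixes the two labels with the same \(\lambda\).

```python
import torch

def mixup_embeddings(emb_a: dict, emb_b: dict, alpha: float = 0.4):
    """emb_a, emb_b: dicts with 'text', 'layout', 'image' tensors of matching shapes."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    mixed = {k: lam * emb_a[k] + (1.0 - lam) * emb_b[k] for k in emb_a}
    return mixed, lam  # train with loss = lam * L(y_a) + (1 - lam) * L(y_b)
```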
For document image classification, Deep Coral and Mixup outperform ERM in \(\text{OOD}_{\text{H}}\) and \(\text{OOD}_{\text{L}}\) settings while they still perform worse than ERM in \(\text{OOD}_{\text{L}}\) settings. The results of this study indicate that common OOD algorithms perform well in the task of document classification, which requires a high level of image information modeling. Deep Coral and Mixup score slightly below ERM on the document visual question answering task. The study demonstrates that distribution shifts in complex tasks such as document visual question answering cannot be easily handled using common OOD algorithms. \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline \multirow{2}{*}{**Model**} & **Baseline** & **BERT-Attack** & **Embedding** & **Homoglyph** & **Change number** & **Character deletion** \\ & **F1\(\uparrow\)** & **F1\(\uparrow\)** & **F1\(\uparrow\)** & **F1\(\uparrow\)** & **F1\(\uparrow\)** & **F1\(\uparrow\)** \\ \hline BROS\({}_{\text{BASE}}\)[16] & 88.98 & **89.04** & 82.55 & 66.56 & 89.23 & 75.51 \\ LiLT\({}_{\text{BASE}}\)[44] & 88.25 & 84.54 & 81.01 & 70.23 & 87.28 & 68.97 \\ LayoutLM\({}_{\text{BASE}}\)[48] & 82.82 & 79.75 & 75.80 & 56.99 & 82.31 & 66.31 \\ LayoutLMv2\({}_{\text{BASE}}\)[50] & 89.91 & 86.61 & 83.60 & 60.83 & 89.27 & 75.53 \\ LayoutLMv3\({}_{\text{BASE}}\)[17] & **90.29** & 88.14 & **86.44** & **84.50** & **90.10** & **84.91** \\ \hline \hline \end{tabular} \end{table} Table 4: Comparison results of existing VDU models under different text attack methods on the FUNSD dataset. All numerical results are averages of 5 runs. \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline \multirow{2}{*}{**Model**} & **FUNSD** & \multicolumn{6}{c}{**FUNSD-L**} \\ & **F1\(\uparrow\)** & **Precision\(\uparrow\)** & **Recall\(\uparrow\)** & **F1\(\uparrow\)** & **Other Error\(\downarrow\)** & **QA Error\(\downarrow\)** & **Header Error\(\downarrow\)** \\ \hline BROS\({}_{\text{BASE}}\)[16] & 89.26 & 77.13 & 93.10 & 84.37 & **43.97** & 1.79 & 100.00 \\ LiLT\({}_{\text{BASE}}\)[44] & 88.25 & 60.03 & 80.66 & 68.83 & 54.92 & 13.18 & **59.46** \\ LayoutLMv2\({}_{\text{BASE}}\)[48] & 82.82 & 53.68 & 55.64 & 54.64 & 82.88 & 42.32 & 72.73 \\ LayoutLMv2\({}_{\text{BASE}}\)[50] & 89.91 & 71.74 & **93.89** & 81.33 & 81.73 & **1.24** & 95.74 \\ LayoutLMv3\({}_{\text{BASE}}\)[17] & **90.29** & **80.40** & 90.05 & **84.95** & 45.46 & 4.13 & 100.00 \\ \hline \hline \end{tabular} \end{table} Table 3: Overall comparison results of existing VDU models on the FUNSD and their own FUNSD-L datasets. FUNSD-L is generated by the move operation. Other Error, QA Error and Header Error refer to error rate of entities whose labels are other, question or answer, and header respectively. Figure 4: The confusion matrix in terms of F1 score for each VDU model on FUNSD-L data generated by the other models. The columns are the VDU models for generating the data, and the rows are the models for testing the data. v3 means LayoutLMv3, v2 means LayoutLMv2, v1 means LayoutLM. 
\begin{table} \begin{tabular}{l|c c c c c|c c c c c|c c} \hline \hline \multirow{2}{*}{**Algorithm**} & \multicolumn{5}{c}{**FUNSD**} & \multicolumn{5}{c}{**RVL-CDIP**} & \multicolumn{2}{c}{**DocVQA**} \\ & **ID** & \(\text{OOD}_{\text{R}}\) & \(\text{OOD}_{\text{H}}\) & \(\text{OOD}_{\text{T}}\) & \(\text{OOD}_{\text{L}}\) & **ID** & \(\text{OOD}_{\text{T}}\) & \(\text{OOD}_{\text{L}}\) & \(\text{OOD}_{\text{I}_{1}}\) & \(\text{OOD}_{\text{I}_{2}}\) & **ID** & \(\text{OOD}_{\text{T}}\) \\ \hline ERM & **90.29** & 57.88 & 73.25 & **86.82** & **84.95** & **95.44** & 89.32 & **81.06** & 36.27 & 85.02 & **78.76** & **65.69** \\ Deep Coral [43] & 90.20 & 58.88 & 73.92 & 84.61 & 83.47 & 95.12 & 89.21 & 76.57 & 37.82 & 86.23 & 78.63 & 64.21 \\ Mixup [52] & 89.28 & **61.19** & **74.33** & 86.53 & 84.23 & 94.69 & **89.87** & 78.70 & **40.44** & **87.09** & 77.66 & 65.34 \\ \hline \hline \end{tabular} \end{table} Table 5: The ID and OOD performances of 3 OOD algorithms on 12 datasets. All numerical results are averages of 5 runs.

### Further Analysis

**Effect of OOD Samples Generated by Different VDU Models.** As the generation of samples in FUNSD-L relies on the model to assess the semantic strength, we conduct experiments to investigate whether the performance of the model also drops substantially when OOD samples _w.r.t._ layout distribution shift are generated by other models. Figure 4 shows the confusion matrix for each VDU model on FUNSD-L data generated by the other models. We can observe that LayoutLM consistently performs worse on OOD datasets generated by all models. The results indicate that LayoutLM, trained with fixed layout information, is strongly dependent on layout information, which makes it difficult to cope with layout distribution shifts. Both LayoutLMv3 and BROS perform well on OOD datasets generated by all models, including themselves. It demonstrates that fine-grained information modeling, such as patch-level and region-level information modeling, can improve the robustness of models.

**Effect of Text Shift on Document VQA Task**. In Section 4.1, we demonstrated that LayoutLMv3 rarely uses the visual or layout information in document VQA tasks; thus, for the DocVQA test sets in this experiment, we only concentrate on the text information. We utilize the same text shift method as FUNSD for text that is not the answer. Figure 5a shows the results. It can be seen that under the influence of BERT-Attack or Word Swap, the ANLS of all models dropped by about 10 points. It indicates that the existing VDU models are vulnerable to image corruption or OCR errors for the document VQA task.

**Effect of Merge Distance**. We further conduct experiments on the impact of the distance parameter \(d\), and the experimental results are shown in Figure 5b. Note that \(d_{1}\) (i.e., \(\lambda_{1}\)) denotes the horizontal spacing and \(d_{2}\) (i.e., \(\lambda_{2}\)) the vertical spacing. We observe some overlapping bounding boxes during OCR detection; thus, we explore whether the model needs fine-grained layout coordinates to predict document categories. Here \(d_{1}\) controls the horizontal stretch length while \(d_{2}\) controls the vertical stretch length. When \(d_{1}\) and \(d_{2}\) are both 0, part of the OCR overlap area merges and the accuracy decreases. It indicates that longitudinal merging reduces prediction accuracy more than horizontal merging.

**Effect of Incremental Training with Do-GOOD**.
We finally investigate the impact of the Do-GOOD benchmark on solving the OOD problem for existing VDU models considering the incremental training scheme. Specifically, we divide the FUNSD-R and FUNSD-H datasets into training and test data splits: 20 samples are added to the FUNSD training set for incremental training, and 30 samples are tested as OOD samples. We randomly sample five times and take the average of all results. The experimental results are shown in Figure 6a, and we can see that adding OOD samples during training is effective in improving the performance on OOD test sets. We further sample 5,000 samples from the RVL-CDIP validation set for incremental training and ensure that all document types are evenly distributed. As shown in Figure 6b, the experimental results show that adding OOD data to the training set can significantly improve the performance of the model on such OOD test sets when natural scene background replacement occurs for the document background. When image-distortion and layout-shift samples join the training set for incremental training, we find that the performance on the corresponding OOD test sets changes only slightly.

## 6. Conclusion

In this paper, we introduced an out-of-distribution (OOD) benchmark, _i.e._, Do-GOOD, that evaluates the robustness of existing VDU models for document image-related tasks. We presented three criteria as well as a general, comprehensive framework for analyzing and benchmarking OOD document images. In this framework, we first broke down document images into image, text, and layout characteristics. Then, we discussed the distribution shifts from image, text, and layout perspectives. We finally obtained 9 OOD datasets covering 3 document image-related tasks. On the basis of these OOD datasets, we conducted experiments using 5 existing pre-trained VDU models and two commonly used OOD generalization algorithms, which demonstrate the brittle nature of existing VDU models and OOD generalization algorithms. We expect that our framework and comprehensive benchmark will facilitate research in document image-related fields, and that they can be utilized by practitioners to determine which methods perform best under which distribution shifts.

## 7. Acknowledgments

This work was supported in part by the National Natural Science Foundation of China under Grants (No. 62222203 and 61976049).

Figure 5. Further analysis on (a) text distribution shift of VDU models on the DocVQA dataset (v3 means LayoutLMv3, v2 means LayoutLMv2, v1 means LayoutLM) and (b) layout shift of LayoutLMv3 on the document classification dataset RVL-CDIP.

Figure 6. Incremental training results of LayoutLMv3 on (a) FUNSD-R and FUNSD-H, and (b) RVL-CDIP-I\({}_{1}\), RVL-CDIP-I\({}_{2}\) and RVL-CDIP-L datasets.
2303.07185
Joint Behavior and Common Belief
For over 25 years, common belief has been widely viewed as necessary for joint behavior. But this is not quite correct. We show by example that what can naturally be thought of as joint behavior can occur without common belief. We then present two variants of common belief that can lead to joint behavior, even without standard common belief ever being achieved, and show that one of them, action-stamped common belief, is in a sense necessary and sufficient for joint behavior. These observations are significant because, as is well known, common belief is quite difficult to achieve in practice, whereas these variants are more easily achievable.
Meir Friedenberg, Joseph Y. Halpern
2023-03-13T15:28:17Z
http://arxiv.org/abs/2303.07185v2
# Joint Behavior and Common Belief ###### Abstract For over 25 years, common belief has been widely viewed as necessary for joint behavior. But this is not quite correct. We show by example that what can naturally be thought of as joint behavior can occur without common belief. We then present two variants of common belief that can lead to joint behavior, even without standard common belief ever being achieved, and show that one of them, _action-stamped_ common belief, is in a sense necessary and sufficient for joint behavior. These observations are significant because, as is well known, common belief is quite difficult to achieve in practice, whereas these variants are more easily achievable. ## 1 Introduction The past few years have seen an uptick of interest in studying cooperative AI, that is, AI systems that are designed to be effective at cooperating. Indeed, a number of influential researchers recently argued that "[w]e need to build a science of cooperative AI... progress towards socially valuable AI will be stunted unless we put the problem of cooperation at the centre of our research" [6]. One type of cooperative behavior is what we might call _joint behavior_, that is, collaboration scenarios where the success of the joint action is dependent on all agents doing their parts; one agent deviating can cause the efforts of others to be ineffective. The notion of joint behavior has been studied (in much detail) under various names such as "acting together", "teamwork", "collaborative plans", and "shared plans", and highly influential models of it were developed (see, e.g., [2, 4, 9, 10, 15, 13]). Efforts were also made to engineer some of these theories into real-world joint planning systems [20, 21]. Examples of the types of scenarios these works considered include drivers in a caravan, where if any agent deviates it might lead the entire caravan to get derailed, and a company of military helicopters, where deviation on the part of some agents can lead to the remaining agents being stranded or put in unnecessarily high-risk scenarios. One conclusion arrived at by all of these efforts was the importance of beliefs for this type of cooperation. In particular, because each agent would do her part only if she believed that all of the other agents would do their part as well, they determined that _common belief_ (often called _mutual belief_) of how the agents would behave was necessary. That is to say, not only did everyone have to believe all of the agents would act as desired, but everyone had to believe everyone believed it, and everyone had to believe that everyone believed everyone believed it, etc. This, they argued, followed from the fact that everyone acts only if they believe everyone else will. (See, e.g., [2, 4, 9, 15, 13, 12] for examples of this claim.) As we show in this paper, this conclusion is not quite right. We do not need common belief for joint behavior; weaker variants suffice. Indeed, we provide a variant of common belief that we call _action-stamped_ common belief that we show is, in a sense, necessary and sufficient for joint behavior. The key insight is that agents do not have to act simultaneously for there to be joint behavior. If agent 2 acts after agent 1, agent 1 does not have to believe, when he acts, that agent 2 currently believes that all agents will carry out their part of the joint behavior. Indeed, at the point that agent 1 acts, agent 2 might not even be aware of the joint action. 
It suffices that agent 2 believes _at the point that she carries out her part of the joint behavior_ that all the other agents will believe at the points where they are carrying out their parts of the joint behavior... that everyone will act as desired at the appropriate time. If actions must occur simultaneously, then common belief is necessary; the fact that we do not require simultaneous actions is what allows us to consider weaker variants of common belief. Why does this matter? Common belief may be hard to obtain (see [7]); it may be possible to obtain action-stamped common belief in circumstances where common belief cannot be obtained. Thus, if we assume that we need common belief for joint behavior, we may end up mistakenly giving up on cooperative behavior when it is in fact quite feasible. The rest of the paper is organized as follows. In the next section, we provide the background for the formal (Kripke-structure based) framework that we use throughout the paper. In Section 3, we give our first example showing that agents can have joint behavior without common belief, and define a variant of common belief that we call _time-stamped common belief_ which enables it to happen. In Section 4, we give a modified version of the example where time-stamped common belief does not suffice for joint behavior, but _action-stamped common belief_, which is yet more general, does. In general, the group of agents involved in a joint behavior need not be static; it may change over time. For example, we would like to view the firefighters at the scene of a fire as acting jointly, but this group might change over time as additional firefighters arrive and some firefighters leave. In Section 5, we show how action-stamped (and time-stamped) common belief can be extended to deal with the group of agents changing over time. In Section 6, we go into more detail regarding the significance of these results. In Section 7, we show that there is a sense in which action-stamped common belief is necessary and sufficient for joint behavior. Finally, in Section 8, we conclude. ## 2 Background To make our claims precise, we need to be able to talk formally about beliefs and time. To do so, we draw on standard ideas from modal logics and the runs-and-systems framework of Fagin et al. [7]. Our models have the form \(M=(R,\Phi,\pi,\mathcal{B}_{1},\ldots,\mathcal{B}_{n})\). \(R\) is a _system_, which, by definition, is a set of _runs_, each of which describes a way the system might develop over time. Given a run \(r\in R\) and a time \(n\in\mathbb{N}_{\geq 0}\) (for simplicity, we assume that time ranges over the natural numbers), we call \((r,n)\) a _point_ in the model; that is, it describes a point in time in one way the system might develop. \(\Phi\) is the set of variables. In general, we will denote variables in \(\Phi\) with uppercase letters (e.g., \(P\)) and values of those variables with lowercase ones (e.g., \(p\)). \(\pi\) is an _interpretation_ that maps each point in the model and each variable \(P\in\Phi\) to a value, denoting the value of \(P\) at that point. (Thus, the analogue of a primitive proposition for us is a formula of the form \(P=p\): variable \(P\) takes on value \(p\).) Finally, for each agent \(i\), there is a _binary relation_\(\mathcal{B}_{i}\) over the points in the model. 
Two points \((r_{1},n_{1})\) and \((r_{2},n_{2})\) are related by \(\mathcal{B}_{i}\) (i.e., \(((r_{1},n_{1}),(r_{2},n_{2}))\in\mathcal{B}_{i}\)) if the two points are indistinguishable to agent \(i\); that is, if, at the point \((r_{1},n_{1})\), agent \(i\) cannot tell if the true point is \((r_{1},n_{1})\) or \((r_{2},n_{2})\). We assume throughout that the \(\mathcal{B}_{i}\) relations satisfy the standard properties of a belief relation: specifically, they are _serial_ (for all points \((r,n)\), there exists a point \((r^{\prime},n^{\prime})\) such that \(((r,n),(r^{\prime},n^{\prime}))\in\mathcal{B}_{i}\)), _Euclidean_ (if \(((r_{1},n_{1}),(r_{2},n_{2}))\) and \(((r_{1},n_{1}),(r_{3},n_{3}))\) are in \(\mathcal{B}_{i}\), then so is \(((r_{2},n_{2}),(r_{3},n_{3}))\)), and transitive. These assumptions ensure that the standard axioms for belief hold; see [7] for further discussion of these issues. Syntactically, we can talk about these models using a language generated by the following context-free grammar: \[\varphi:=P=p\mid\neg\varphi\mid\varphi_{1}\wedge\varphi_{2}\mid B_{i}\varphi\mid E_{G}\varphi\mid C_{G}\varphi,\] where \(P\) is a variable in \(\Phi\), \(p\) is a possible value of \(P\), and \(G\) is a non-empty subset of the agents. The intended reading of \(B_{i}\psi\) is that agent \(i\) believes \(\psi\); for \(E_{G}\psi\) it is that \(\psi\) is believed by everyone in the group \(G\); and for \(C_{G}\psi\) it is that \(\psi\) is common belief among the group \(G\). We can inductively give semantics to formulas in this language relative to points in the above models. The propositional operators \(\neg\) and \(\wedge\) have the standard propositional semantics. The other operators are given semantics as follows: * \((M,r,n)\vDash P=p\) if \(\pi((r,n),P)=p\), * \((M,r,n)\vDash B_{i}\psi\) if \((M,r^{\prime},n^{\prime})\vDash\psi\) for all points \((r^{\prime},n^{\prime})\) such that \(((r,n),(r^{\prime},n^{\prime}))\in\mathcal{B}_{i}\), * \((M,r,n)\vDash E_{G}\psi\) if \((M,r,n)\vDash B_{i}\psi\) for all \(i\in G\), * \((M,r,n)\vDash C_{G}\psi\) if \((M,r,n)\vDash E_{G}^{k}\psi\) for all \(k\geq 1\), where \(E_{G}^{1}\psi:=E_{G}\psi\) and \(E_{G}^{k+1}\psi:=E_{G}(E_{G}^{k}\psi)\). There are a number of axioms that are valid in these models. Since they are not relevant for the points we want to make here, we refer the reader to [8] for a discussion of them. ## 3 Time-Stamped Common Belief We now give our first example showing that joint behavior does not require common belief. We do not define joint behavior here; indeed, as we said, there are a number of competing definitions in the literature [15, 4, 9, 11]. But we hope the reader will agree that, however we define it, the example gives an instance of it. General \(Y\) and her forces are standing on the top of a hill. Below them in the valley, the enemy is encamped. General \(Y\) knows that her forces are not single-handedly strong enough to defeat the enemy. But she also knows that General \(Z\) and his troops are expected to arrive on the hill on the opposite side the next day at noon, though she and her troops must move on before then. Thankfully, all generals are trained for how to deal with this situation. Just as her training recommends, General \(Y\) sets up traps that will delay the enemy's retreat, and leaves one soldier behind to go to the opposite hill and inform General \(Z\) of the traps upon his arrival. 
At 11:30 the next morning, General \(Y\) receives a (false) message informing her that General \(Z\) and his troops have been captured, and thus (incorrectly) surmises that the enemy will live to fight another day. What in fact happens is that General \(Z\)'s troops arrive at noon and attack the enemy, the enemy attempts to retreat and is stopped by General \(Y\)'s traps, and the enemy is successfully defeated. Clearly, Generals \(Y\) and \(Z\) jointly defeated the enemy. Yet they never achieved common belief of what they were doing. Before noon, General \(Z\) didn't even think that the enemy was there, and from 11:30 on, General \(Y\) thought that General \(Z\) would never arrive. It follows that there was no point at which they could have had common belief. So what is going on here? What this example suggests is that there are times when a type of _time-stamped common belief_ (cf., [8, 12]) suffices to enable joint behavior. Intuitively, on the first day, General \(Y\) believed that at noon on the second day General \(Z\) would act, attacking the enemy. Similarly, at noon on the second day, General \(Z\) believed that General \(Y\) had acted the day before, setting up the necessary traps. They also held higher-order beliefs; for example, at the time she set the traps, General \(Y\) believed that at noon the next day General \(Z\) would believe that she had set the traps, otherwise she wouldn't have wasted the resources to set them, and so on. Much as in the usual case of common belief, these nested beliefs extend to arbitrary depths. What sets this example apart from those considered by earlier work is that, whereas in the earlier work agents needed to believe others would act as desired _at the same point_, here the agents need to believe only that others will act as desired _at the points where they're supposed to act for the joint behavior_. This suggests that time-stamped common belief can suffice for joint behavior. Our notion of time-stamped common belief can be viewed as a generalization of the notion of _time-stamped common knowledge_ developed by Halpern and Moses [8, 12] for scenarios where agents have internal clocks that may not be synchronized. We discuss the exact relationship between these notions at the end of the section. Formally, we can capture this type of time-stamped common belief with the following additions to the logic and semantic models above. Syntactically, we add two more operators to the language, \(E_{G}^{t}\psi\) and \(C_{G}^{t}\psi\), where \(G\) is a set of agents. We then add to the semantic model a function \(t\) that maps each agent and run to a non-negative integer. The intended reading of these is "each agent \(i\in G\) believes at the time \(t(i,r)\) that \(\psi\)" and "it is time-stamped-by-\(t\) common belief among the agents in \(G\) that \(\psi\)", respectively. We give semantics to these operators as follows: * \((M,r,n)\vDash E_{G}^{t}\psi\) if \((M,r,t(i,r))\vDash B_{i}\psi\) for all \(i\in G\), * \((M,r,n)\vDash C_{G}^{t}\psi\) if \((M,r,n)\vDash E_{G}^{t,k}\psi\) for all \(k\geq 1\), where \(E_{G}^{t,1}\psi:=E_{G}^{t}\psi\) and \(E_{G}^{t,k+1}\psi:=E_{G}^{t}(E_{G}^{t,k}\psi)\). These definitions are clearly very similar to the (standard) definitions given above for \(E_{G}\psi\) and \(C_{G}\psi\), except that the beliefs of each agent \(i\in G\) in run \(r\) are considered at the time \(t(i,r)\). It follows from the semantic definitions that \(E_{G}^{t}\psi\) and \(C_{G}^{t}\psi\) hold at either all points in a run or none of them. 
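These definitions are easy to operationalize. The following minimal Python sketch (our illustration, not the authors' code) model-checks \(B_{i}\), \(E_{G}\), \(C_{G}\), and the time-stamped variants on a finite fragment of a model; common belief is computed via its standard characterization as truth at all points reachable, in one or more steps, along the union of the group's belief relations, and the time-stamped version unwinds the \(E_{G}^{t,k}\) hierarchy into a reachability computation over runs. All names (`Model`, `believes`, and so on) are our own assumptions.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Set, Tuple

Point = Tuple[str, int]            # a point (r, n): run name and time
Formula = Callable[[Point], bool]  # a formula, represented by its extension

@dataclass
class Model:
    """A finite fragment of a model M = (R, Phi, pi, B_1, ..., B_n)."""
    points: Set[Point]
    pi: Dict[Tuple[Point, str], object]     # interpretation of the variables
    B: Dict[str, Set[Tuple[Point, Point]]]  # belief relation of each agent

    def believes(self, i: str, pt: Point, phi: Formula) -> bool:
        # B_i phi: phi holds at every point agent i considers possible at pt.
        return all(phi(q) for (p, q) in self.B[i] if p == pt)

    def everyone_believes(self, G, pt: Point, phi: Formula) -> bool:
        # E_G phi: every agent in G believes phi at pt.
        return all(self.believes(i, pt, phi) for i in G)

    def common_belief(self, G, pt: Point, phi: Formula) -> bool:
        # C_G phi: phi holds at every point reachable from pt in one or
        # more steps along the union of the B_i relations, i in G.
        edges = set().union(*(self.B[i] for i in G))
        frontier = {q for (p, q) in edges if p == pt}
        seen: Set[Point] = set()
        while frontier:
            q = frontier.pop()
            if q not in seen:
                seen.add(q)
                frontier |= {q2 for (p, q2) in edges if p == q}
        return all(phi(q) for q in seen)

def everyone_believes_t(M: Model, G, t, r: str, phi: Formula) -> bool:
    # E^t_G phi: each agent i in G believes phi at the point (r, t(i, r)).
    return all(M.believes(i, (r, t(i, r)), phi) for i in G)

def common_belief_t(M: Model, G, t, r: str, phi: Formula) -> bool:
    # C^t_G phi (a run property): phi must hold at every point reachable by
    # repeatedly following an agent's beliefs at her stamped time in a run.
    def step(run: str) -> Set[Point]:
        return {q for i in G for (p, q) in M.B[i] if p == (run, t(i, run))}
    frontier, seen = step(r), set()
    while frontier:
        q = frontier.pop()
        if q not in seen:
            seen.add(q)
            frontier |= step(q[0])
    return all(phi(q) for q in seen)
```

In the generals' story, for instance, `t` would map \(Y\) to the time she lays the traps and \(Z\) to the time he arrives, in each run.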
In the example above, this notion of time-stamped common belief _is_ achieved if we take \(t(Y,r)\) to be the time in run \(r\) that \(Y\) laid the traps (which may be different times in different runs) and take \(t(Z,r)\) to be the time that \(Z\) arrived in run \(r\) (which was noon in the actual run, but again, may be different times in different runs), provided that it is (time-stamped) common belief that both \(Y\) and \(Z\) will follow their training. That is, when \(Y\) lays the traps, \(Y\) must believe that \(Z\) will believe when he arrives that \(Y\) laid the traps, \(Z\) will believe when he arrives that \(Y\) believed when she laid the traps that he would believe when he arrived that \(Y\) laid the traps, and so on. The key point here is that time-stamped common belief can sometimes suffice for achieving cooperative behavior, even without standard common knowledge. As we suggested above, our notion of time-stamped common belief is a generalization of (and was inspired by) Halpern and Moses' notion of (time-\(T\)) time-stamped common knowledge. Roughly speaking, for them, time-\(T\) time-stamped common knowledge of \(\phi\) holds among the agents in a group \(G\) if every agent \(i\) in \(G\) knows \(\phi\) at time \(T\) on her clock, all agents in \(G\) know at time \(T\) on their clock that all agents in \(G\) know \(\phi\) at time \(T\) on their clock, and so on (where \(T\) is a fixed, specific time). If it is common knowledge that clocks are synchronized, then time-stamped common knowledge reduces to common knowledge. If we take \(t(i,r)\) to be the time in run \(r\) that \(i\)'s clock reads time \(T\) (and assume that it is commonly believed that each agent's clock reads time \(T\) at some point in every run), then their notion of time-stamped common knowledge becomes a special case of our time-stamped common belief. But note that with time-stamped common belief, we have the flexibility of referring to different times for different agents, and the time does not have to be a clock reading; it can be, for example, the time that an event like laying traps occurs. ## 4 Action-Stamped Common Belief There is an even more general variant of common belief that can suffice for joint behavior. What really mattered in the previous example is that everyone had the requisite beliefs at the times that they were acting. But there need not necessarily be only one such point per agent per run; an agent might act multiple times as part of the plan, as the following modified version of the story illustrates: General \(Y\) and her forces arrive to the south of the town where the enemy forces are encamped. General \(Y\) knows that her forces are not single-handedly strong enough to defeat the enemy. But she also knows that General \(Z\) and his troops are expected to arrive to the north of the city some time in the near future, though she and her troops must move on before then. The swiftly-coursing river prevents the enemy from escaping to the east. But unfortunately, they can still escape inland to the west. Thankfully, all generals are trained for how to deal with this situation as well. Just as her training recommends, General \(Y\) sets up traps that will delay the enemy's southward retreat and then, as she heads inland, also sets up traps to the west, finally leaving one soldier behind to go north and inform General \(Z\) of the traps upon his arrival. 
The next morning, General \(Y\) receives a (false) message informing her that General \(Z\) and his troops have been captured, and thus (incorrectly) surmises that the enemy will live to fight another day. What in fact happens is that General \(Z\)'s troops arrive later that day and are informed by the remaining soldier that, not too long ago, General \(Y\)'s troops set traps to the south and west. They attack the enemy, the enemy attempts to retreat and is stopped by General \(Y\)'s traps, and the enemy is successfully defeated. Again, Generals \(Y\) and \(Z\) jointly and collaboratively defeated the enemy, but time-stamped common belief doesn't suffice for this version of the story, because we cannot specify a single time for General \(Y\)'s actions. Instead, what really matters is that when they are acting as part of a joint plan, they hold the requisite (common) beliefs. The joint plan need not be known upfront; General \(Z\) does not know what he will need to do to achieve the common goal until he arrives at the scene. To capture this new requirement, we define a notion of _action-stamped common belief_. We begin by adding a special Boolean variable \(ACTING_{i,G}\) for any group \(G\) and agent \(i\in G\). This variable is true (i.e., takes value \(1\), as opposed to \(0\)) at a point \((r,n)\) if the agent \(i\) is acting towards the group plan of \(G\) at \((r,n)\) and false otherwise. So for the generals, \(ACTING_{Y,G}=1\) would be true when she lays the traps, \(ACTING_{Z,G}=1\) would be true at the point when he attacks, and they'd both be false otherwise (where \(G=\{Y,Z\}\)). We often write \(ACTING_{i,G}\) and \(\neg ACTING_{i,G}\) instead of \(ACTING_{i,G}=1\) and \(ACTING_{i,G}=0\), and similarly for other Boolean variables. By using \(ACTING_{i,G}\), we can abstract away from what actions are performed; we just care that some action is performed by agent \(i\) towards the group plan, without worrying about what that action is. As in the case of time-stamped common belief, we add two modal operators to the language (in addition to the variables \(ACTING_{i,G}\)). Let \(G\) be a set of agents. \(E^{\mathbf{a}}_{G}\psi\) then expresses that, for each agent \(i\in G\), whenever \(ACTING_{i,G}\) holds (it may hold several times in a run, or never), \(i\) believes \(\psi\). \(C^{\mathbf{a}}_{G}\psi\) then defines the corresponding notion of common belief for the points at which agents act as part of the group. We give semantics to these modal operators as follows: * \((M,r,n)\vDash E^{\mathbf{a}}_{G}\psi\) if for all \(n^{\prime}\) and all \(i\in G\) such that \((M,r,n^{\prime})\vDash ACTING_{i,G}=1\), it is also the case that \((M,r,n^{\prime})\vDash B_{i}\psi\). * \((M,r,n)\vDash C^{\mathbf{a}}_{G}\psi\) if \((M,r,n)\vDash E^{\mathbf{a},k}_{G}\psi\) for all \(k\geq 1\), where \(E^{\mathbf{a},1}_{G}\psi:=E^{\mathbf{a}}_{G}\psi\) and \(E^{\mathbf{a},k+1}_{G}\psi:=E^{\mathbf{a}}_{G}(E^{\mathbf{a},k}_{G}\psi)\). Returning to the example, although the agents do not have time-stamped common belief at all the points when they act, they do have action-stamped common belief. General \(Z\) acted believing that General \(Y\) had acted as expected, and also believing that General \(Y\) acted believing that he would act as expected, and so on. It is easy to see that time-stamped common belief can be viewed as a special case of action-stamped common belief: Given a time-stamping function \(t\), we simply take \(ACTING_{i,G}\) to be true at those points \((r,n)\) such that \(n=t(i,r)\). 
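Continuing the sketch above (again our illustration, with the variable \(ACTING_{i,G}\) passed as a predicate `acting(i, point)` purely for convenience), the action-stamped operators differ only in which points of a run trigger the belief test:

```python
Acting = Callable[[str, Point], bool]  # encodes the Boolean variable ACTING_{i,G}

def everyone_believes_a(M: Model, G, acting: Acting, r: str, phi: Formula) -> bool:
    # E^a_G phi: at every point of run r at which ACTING_{i,G} holds
    # (possibly several times, possibly never), agent i believes phi.
    return all(M.believes(i, p, phi)
               for i in G for p in M.points
               if p[0] == r and acting(i, p))

def common_belief_a(M: Model, G, acting: Acting, r: str, phi: Formula) -> bool:
    # C^a_G phi: phi holds at every point reachable by following an agent's
    # beliefs from any point of the current run at which that agent acts.
    def step(run: str) -> Set[Point]:
        return {q for i in G for (p, q) in M.B[i]
                if p[0] == run and acting(i, p)}
    frontier, seen = step(r), set()
    while frontier:
        q = frontier.pop()
        if q not in seen:
            seen.add(q)
            frontier |= step(q[0])
    return all(phi(q) for q in seen)
```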
It is worth noting that, in both this and the previous section, the agents having a protocol in advance for how to deal with the situation is not really necessary for them to succeed. In the examples, consider a scenario where the generals are in fact not trained for how to handle the situation, but instead General \(Y\) has the brilliant idea to lay traps and send a messenger to meet General \(Z\) upon arrival. As long as message delivery is reliable, action-stamped common belief can be achieved and they can successfully defeat the enemy. ## 5 Joint Behaviors Among Changing Groups In practice, the members of groups change over time. For example, a group of firefighters may work together to safely clear a burning building, but (thankfully!) they don't need to wait until all the firefighters are on the scene, or even until it is known which firefighters are coming, in order for the first firefighters to begin. Instead, structures and guidelines allow the set of firefighters who are on the scene to act cooperatively, even without each firefighter knowing who else will show up. The formalisms of the two previous sections assumed a fixed group \(G\), so cannot capture this kind of scenario. But the changes necessary to do so are not complicated. Rather than considering (some variant of) common belief with respect to a fixed set \(G\) of agents, we consider it with respect to an _indexical_ set \(S\), one whose interpretation depends on the point. More precisely, an indexical set \(S\) is a function from points to sets of agents; intuitively, \(S(r,n)\) denotes the members of the indexical group \(S\) at the point \((r,n)\). We assume that a model is extended so as to provide the interpretation of \(S\) as a function. Our semantics for action-stamped common belief with indexical sets are now a straightforward generalization of the semantics for rigid (non-indexical) sets: * \((M,r,n)\models E^{\mathbf{a}}_{S}\psi\) if for all \(n^{\prime}\) and all \(i\in S(r,n^{\prime})\) such that \((M,r,n^{\prime})\models ACTING_{i,S}\), it is also the case that \((M,r,n^{\prime})\models B_{i}\psi\). * \((M,r,n)\models C^{\mathbf{a}}_{S}\psi\) if \((M,r,n)\models E^{\mathbf{a},k}_{S}\psi\) for all \(k\geq 1\), where \(E^{\mathbf{a},1}_{S}\psi:=E^{\mathbf{a}}_{S}\psi\) and \(E^{\mathbf{a},k+1}_{S}\psi:=E^{\mathbf{a}}_{S}(E^{\mathbf{a},k}_{S}\psi)\). The only change here is that in the semantics of \(E^{\mathbf{a}}_{S}\), we need to check the agents in \(S(r,n^{\prime})\) at each point. Of course, we can also allow indexical sets in time-stamped common belief in essentially the same way. Whereas in the semantics of \(E^{t}_{G}\psi\), we required that \((M,r,n)\models E^{t}_{G}\psi\) if, for all \(i\in G\), \((M,r,t(i,r))\models B_{i}\psi\), now we require that \((M,r,n)\models E^{t}_{S}\psi\) if, for all agents \(i\), if \(i\in S(r,t(i,r))\), then \((M,r,t(i,r))\models B_{i}\psi\). We care about what agent \(i\) believes at \((r,t(i,r))\) only if \(i\) is actually in group \(S\) at the point \((r,t(i,r))\). 
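In code, the move to indexical groups is equally small: the group becomes a function from points to sets of agents, and membership is tested at the very point where an agent acts (a sketch under the same assumptions as above; `acting(i, p)` now encodes \(ACTING_{i,S}\)):

```python
Group = Callable[[Point], Set[str]]  # an indexical set S: point -> members

def common_belief_a_indexical(M: Model, S: Group, acting: Acting,
                              r: str, phi: Formula) -> bool:
    # As common_belief_a, except that membership in S(p) is checked at the
    # very point p at which the agent acts.
    def step(run: str) -> Set[Point]:
        return {q for i, rel in M.B.items() for (p, q) in rel
                if p[0] == run and i in S(p) and acting(i, p)}
    frontier, seen = step(r), set()
    while frontier:
        q = frontier.pop()
        if q not in seen:
            seen.add(q)
            frontier |= step(q[0])
    return all(phi(q) for q in seen)
```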
We argue that it is important for two reasons: 1) because misunderstanding the type of belief necessary can lead to mis-evaluation of cooperative capabilities, and 2) because requiring common belief can unnecessarily make cooperation impossible in scenarios where it is in fact possible and could be quite beneficial. As part of the recent push for more research on cooperative AI, some have argued that we should "construct more unified theory and vocabulary related to problems of cooperation" [7]. One important step in this program is (in our opinion) formalizing the requirements for various types of cooperation, including joint behavior. This, in turn, requires understanding the level and type of (common) belief needed for joint behavior. As our examples have shown, full-blown common belief is not necessary; weaker variants that are often easier to achieve can suffice. Relatedly, there has been a push to develop methods for _evaluating_ the cooperative capabilities of agents, as a way of developing targets and guideposts for the community [5]. Again, this will require understanding (among other things) what type of beliefs are necessary for cooperation. Incorrect assumptions about the types of beliefs necessary can lead to incorrect conclusions about the feasibility of cooperation. For example, if an evaluation system takes as given the assumption that it is impossible for agents that cannot achieve common belief to behave cooperatively, it may in fact lead to effective cooperative agents being scored badly, leading to misdirected research. A second reason that it is important to clarify the types of beliefs necessary for joint behavior is that misunderstanding them can lead to systems unnecessarily aborting important cooperative tasks. As is well known, achieving true common knowledge can be remarkably difficult in real-world systems, often requiring either a communication system that guarantees truly synchronous delivery or guaranteed bounded delivery time together with truly synchronized clocks [8]. Action-stamped common belief can sometimes be achieved when common belief cannot. To demonstrate the importance of this, we consider an example from the domain of urban search and rescue, a domain where 1) the use of multi-agent systems consisting of humans and AI agents has long been considered and advocated for, 2) the types of teamwork necessary can be complex, and 3) there is some evidence of potential adoption, having been used, for example, at a small scale in the aftermath of September 11th [3, 2, 14, 16, 17, 19]. Though the example we give is a simple, stylized case, the domain is sufficiently complex that we would expect these types of issues to arise in practice if systems were deployed at scale. **Example 1**.: _An earthquake occurs, causing a large building to collapse. The nearest search and rescue team arrives on scene, and the incident commander has to decide how to proceed. After evaluating the scene, the incident commander determines that there are two reasonable options:_ 1. _Wait for a heavy piece of machinery that will certainly be able to safely lift the roof of the collapsed building on its own and allow rescuers safe access to the building, but because there are only a few of them it will take a week for the machine to be available and brought to the scene._ 2. _Take a joint-behavior-based approach: While sizing up the situation, the team has determined that the structure is stable and will not collapse, and so is safe to enter. 
However, attempting to exit the building may disrupt the structure and cause harm. This allows for a team to safely enter the building and restabilize parts of the roof. The restabilization would not be enough to make it safe to exit--in fact, it would require adjusting the structure in ways that would make an attempt to exit even more risky--but it would be enough that a more easily accessible robotic system would be able to safely remove the roof piece by piece, allowing the rescuers and anyone trapped inside to safely escape._ _Out of concern for the safety of anyone trapped inside, the incident commander decides it is best not to wait, and so takes the joint-behavior-based approach. He sends the team of rescuers in to begin the necessary process, and tells them to do the one part that is visible from the outside last, so that when the robot arrives it is possible to tell from the outside that the team of rescuers has finished restabilizing the roof and it is safe to proceed. He also tells them the full plan for the robot to come and lift the roof piece by piece, and that he expects it will be 2-3 hours before the robot arrives on scene._ _The group enters the wreckage and secures it in the necessary ways, completing the part that is visible from the outside last, as planned. But it turns out that the earthquake affected many buildings, so the robot is in high demand. It ends up taking close to 8 hours for the robot to arrive on scene. For safety reasons, the robot is designed so that, before it undertakes a task, it automatically assesses whether the plan is safe, and will not proceed unless it determines that it is. Because these types of plans are sometimes necessary in search and rescue, the robot has a built-in joint-plan module with a theory of joint behavior. When the robot arrives, the incident commander can therefore easily enter the information for the robot to start assessing and undertaking the specified joint behavior._ _If the robot's model of joint behavior requires common belief, a problem will arise. At no point is there ever common belief of the joint behavior. Before the robot arrives, the robot certainly has no belief about the joint behavior. And when the robot arrives, it must consider the possibility that, because of the delay, the rescuers have given up hope of the robot arriving and concluded that they may have to wait a full week until the larger piece of machinery is available. Even if this isn't actually the case, the robot will consider it possible that they are in a run where it is the case, and so common belief will not be achieved. And because its theory of joint behavior assumes common belief, the robot will determine that the joint behavior cannot be carried out. Thus, everyone will have to wait a week for the heavier machinery, risking the lives of anyone trapped inside with the six extra days of delay._ _If, on the other hand, the robot's theory of joint behavior is based on action-stamped common belief, the task will be able to be properly and safely carried out. When the rescuers perform their part, they believe that the robot will arrive soon and perform its part of the task. Similarly, the robot, seeing the part of the work that is visible from the outside, believes that the rescuers held those beliefs when acting (and therefore performed the required adjustments). The rescuers believed that the robot would hold these beliefs when it arrived, the robot believed they would, and so on. 
The fact that the robot arrived later than expected and that the rescuers may have started to have uncertainty about the plan doesn't affect the requisite beliefs because all that matters are the beliefs of the agents at the points where they act. Having the theory of joint behavior require action-stamped common belief instead of common belief allows the rescuers and the robot to carry out their joint task in a safe manner, as they clearly ought to, saving anyone trapped far earlier than might otherwise be possible._ This example highlights the value of getting the types of beliefs necessary right; getting the theory right, and basing it on action-stamped common belief instead of standard common belief, can enable cooperation in a range of important scenarios where standard common belief is impossible or difficult to achieve, whereas action-stamped common belief may be easily attainable. ## 7 On the Necessity and Sufficiency of Action-Stamped Common Belief for Joint Behavior We've argued in this paper that the prior work was incorrect in asserting that common belief was necessary for joint behavior, and shown by example that action-stamped common belief can suffice. We now argue that an even stronger statement is true: there is a sense in which action-stamped common belief is necessary and sufficient for joint behavior. We say "in a sense" here, because much depends on the conception of joint behavior being considered. So what we do in this section is give a property that we would argue is one that we would want to hold of joint behavior, and then show that action-stamped common belief is necessary and sufficient for this property to hold. What does it take to go from a collection of individual behaviors to a joint behavior? The following example may help illuminate some of the relevant issues. Jasper and Horace are both crooks, though neither is an evil genius by any stretch of the imagination. Having never met each other, they both happen to decide to rob the Great Bank of London on exactly the same day. As it turns out, neither of them did a good job preparing, and they each knew about only half of the bank's security systems, and so made plans to bypass only that half. By sheer dumb luck, between them they knew about all the bank's security systems. So when each bypasses the part that they know about (at roughly the same time), the bank's security systems go down. They each make it in, steal a small fortune, and escape, none the wiser as to the other's behavior or that their plan was doomed to fail on its own. Is Jasper and Horace robbing the bank an instance of joint behavior? We think not. One critical component that distinguishes this from a joint behavior is the beliefs of the agents. Joint behaviors are collective actions where people do their part because they believe that everyone else will do their part as well. Here, Jasper and Horace have no inkling that the other will help disable the system. We now want to capture these intuitions more formally. We start by adding another special Boolean variable \(SHOULD\_ACT_{i,S}\) for each agent \(i\) and indexical group \(S\), specifying the points in each run where agent \(i\) is supposed to act towards the plan of group \(S\). 
We then add a special formula \(\chi_{S}\) to the language:1 Footnote 1: As long as the set of agents is finite (which we implicitly assume it is), we can express \(\chi_{S}\) in a language that includes a standard modal operator \(\square\), where \(\square\varphi\) is true at a point \((r,n)\) iff \(\varphi\) is true at all points \((r,n^{\prime})\) in the run. For ease of exposition, we do not introduce the richer modal logic here. * \((M,r,n)\vDash\chi_{S}\) if for all \(n^{\prime}\) and all agents \(i\in S(r,n^{\prime})\), \((M,r,n^{\prime})\vDash SHOULD\_ACT_{i,S}\to ACTING_{i,S}\). The formula \(\chi_{S}\) is thus true at a point \((r,n)\) if, at all points in run \(r\), each agent \(i\) in the indexical group \(S\) plays its part in the group plan whenever it is supposed to. If we think of \(ACTING_{i,S}\) as "\(i\) is taking part in the joint behavior of the group \(S\)", then the property COOP\({}_{S}\) that we now specify essentially says that to have truly joint behavior, each agent in \(S\) must believe when she acts that all of the members of the (indexical) group \(S\) will do what they're supposed to; if they don't all have that belief, then it's not really cooperative behavior. Formally, COOP\({}_{S}\) is a property of an indexical group \(S\) in a model \(M\): [COOP\({}_{S}\):] For all points \((r,n)\) and agents \(i\in S(r,n)\), \((M,r,n)\vDash ACTING_{i,S}\to B_{i}\chi_{S}\). Requiring COOP\({}_{S}\) for joint behavior makes action-stamped common belief of \(\chi_{S}\) necessary for joint behavior. **Theorem 7.1**.: _If COOP\({}_{S}\) holds in a model \(M\), then \((M,r,n)\vDash C_{S}^{a}\chi_{S}\) for all points \((r,n)\)._ Proof.: We begin by defining a notion of \(a\)_-reachability_: A point \((r^{\prime},n^{\prime})\) is \(S\)-\(a\)-reachable from \((r,n)\) in \(k\) steps if there exists a sequence \((r_{0},n_{0}),\ldots,(r_{k},n_{k})\) of points such that \((r_{0},n_{0})=(r,n)\), \((r_{k},n_{k})=(r^{\prime},n^{\prime})\), and for all \(0\leq l<k\), there exists a point \((r_{l},n^{\prime}_{l})\) and an agent \(i\in S(r_{l},n^{\prime}_{l})\) such that \((M,r_{l},n^{\prime}_{l})\vDash ACTING_{i,S}\) and \(((r_{l},n^{\prime}_{l}),(r_{l+1},n_{l+1}))\in\mathcal{B}_{i}\). By the semantics of \(C_{S}^{a}\), \(C_{S}^{a}\chi_{S}\) holds at \((r,n)\) iff \(\chi_{S}\) holds at every point \((r^{\prime},n^{\prime})\) that is \(S\)-\(a\)-reachable from \((r,n)\) in \(1\) or more steps. Consider any such point \((r^{\prime},n^{\prime})\). Then, by the definition of reachability, there exists some point \((r^{\prime\prime},n^{\prime\prime})\) and some agent \(i\in S(r^{\prime\prime},n^{\prime\prime})\) such that \((M,r^{\prime\prime},n^{\prime\prime})\vDash ACTING_{i,S}\) and \(((r^{\prime\prime},n^{\prime\prime}),(r^{\prime},n^{\prime}))\in\mathcal{B}_{i}\). Because \((M,r^{\prime\prime},n^{\prime\prime})\vDash ACTING_{i,S}\), we get by COOP\({}_{S}\) that \((M,r^{\prime\prime},n^{\prime\prime})\vDash B_{i}\chi_{S}\). Then by the semantics of \(B_{i}\) and the fact that \(((r^{\prime\prime},n^{\prime\prime}),(r^{\prime},n^{\prime}))\in\mathcal{B}_{i}\) we get that \((M,r^{\prime},n^{\prime})\vDash\chi_{S}\). But \((r^{\prime},n^{\prime})\) was an arbitrary point \(S\)-\(a\)-reachable from \((r,n)\) in \(1\) or more steps, so \(\chi_{S}\) holds at all such points, and we have that \((M,r,n)\vDash C_{S}^{a}\chi_{S}\). But \((r,n)\) was also arbitrary, so \(C_{S}^{a}\chi_{S}\) holds at all points. 
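On a finite model, both sides of this equivalence can be checked directly. The sketch below builds on the earlier sketches (with our hypothetical names `chi` and `coop`; `should_act` encodes \(SHOULD\_ACT_{i,S}\)) and evaluates \(\chi_{S}\) and the property COOP\({}_{S}\); the theorems of this section then say that `coop` returns `True` exactly when `common_belief_a_indexical` returns `True` for \(\chi_{S}\) at every run represented in the model.

```python
def chi(M: Model, S: Group, should_act: Acting, acting: Acting) -> Formula:
    # chi_S holds at a point iff, at all points of that point's run, every
    # agent in S who is supposed to act towards the group plan is acting.
    def phi(pt: Point) -> bool:
        run = pt[0]
        return all(acting(i, p)
                   for p in M.points if p[0] == run
                   for i in S(p) if should_act(i, p))
    return phi

def coop(M: Model, S: Group, acting: Acting, chi_S: Formula) -> bool:
    # COOP_S: whenever an agent in S(p) is acting at p, she believes chi_S.
    return all(M.believes(i, p, chi_S)
               for p in M.points for i in S(p) if acting(i, p))
```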
The converse to Theorem 7.1 also holds; that is, action-stamped common belief of \(\chi_{S}\) suffices for COOP\({}_{S}\) to hold. Put another way, action-stamped common belief is exactly the ingredient that we need to meet the belief requirements of the property that we used to characterize joint behavior. **Theorem 7.2**.: _If \((M,r,n)\vDash C_{S}^{a}\chi_{S}\) for all points \((r,n)\), then COOP\({}_{S}\) holds in \(M\)._ Proof.: Consider an arbitrary point \((r,n)\) and agent \(i\in S(r,n)\) such that \((M,r,n)\vDash ACTING_{i,S}\). By assumption, \((M,r,n)\vDash C^{a}_{S}\chi_{S}\). So, by the semantics of \(C^{a}_{S}\), it follows that \((M,r,n)\vDash E^{a}_{S}\chi_{S}\). In turn, it follows from the semantics of \(E^{a}_{S}\) that \((M,r,n)\vDash B_{i}\chi_{S}\) (because \((M,r,n)\vDash ACTING_{i,S}\)). But \(r\), \(n\), and \(i\) were arbitrary, so we have that \((M,r,n)\vDash ACTING_{i,S}\to B_{i}\chi_{S}\) for all such points and agents. Thus, \(\mathrm{COOP}_{S}\) holds in \(M\). The astute reader will have noticed that the proofs of Theorems 7.1 and 7.2 did not depend in any way on \(\chi_{S}\). The formula \(\chi_{S}\) in these theorems can be replaced by an arbitrary formula \(\varphi\). In other words, if all the agents in \(S\) believe \(\varphi\) at the point when they act, then \(\varphi\) is action-stamped common belief, and if \(\varphi\) is action-stamped common belief, then all agents in \(S\) must believe \(\varphi\) at the point when they act. Formally, the proofs of Theorems 7.1 and 7.2 also show the following: **Theorem 7.3**.: _If \((M,r,n)\vDash ACTING_{i,S}\to B_{i}\varphi\) for all points \((r,n)\) and agents \(i\in S(r,n)\), then \((M,r,n)\vDash C^{a}_{S}\varphi\) for all points \((r,n)\)._ **Theorem 7.4**.: _If \((M,r,n)\vDash C^{a}_{S}\varphi\) for all points \((r,n)\), then \((M,r,n)\vDash ACTING_{i,S}\to B_{i}\varphi\) for all points \((r,n)\) and agents \(i\in S(r,n)\)._ These results show that if we viewed a property other than \(\mathrm{COOP}_{S}\) as characterizing joint behavior, as long as that property required agents to hold a certain belief when acting, action-stamped common belief would be necessary and sufficient for that property to hold. ## 8 Conclusion and Future Work We have argued here that, contrary to what was suggested in earlier work, common belief is not necessary for joint behavior. We have presented a new notion, _action-stamped_ common belief, and shown that it is, in a sense, necessary and sufficient for joint behavior, and can be achieved in scenarios where standard common belief cannot. This is important because modelling the conditions needed for joint behavior correctly can enable cooperation in important scenarios, such as search and rescue, where it might not otherwise be possible. We chose to use the term _joint behavior_ in this paper because it sounded to our ears like it most accurately captured the notion we were considering; no doubt to some readers other terms will sound like a better fit. As we showed in Section 7, action-stamped common belief will in some sense be the right type of belief for a group behavior where individuals do their part only if they believe others will do the same, whatever terminology we use. We suspect that, for some readers, the idea that action-stamped common belief is sufficient for joint behavior will seem obvious. In a certain sense, we agree; in retrospect, it _does_ feel like the obviously correct notion for joint behaviors. 
That said, despite there being thousands of papers following up on those we cited in the introduction, no one seems to have had that realization. Similarly, while action-stamped common belief seems quite natural, it has not been explored before in the literature. This work suggests two areas that are ripe for future work. The first is to more fully explore the logical aspects of action-stamped common belief. Can a sound and complete axiomatization be provided? What is the complexity of various questions one might ask, such as model checking whether agents have action-stamped common belief in a model? How can we practically engineer systems that rely on action-stamped common belief? The second area we think worth exploring is other aspects of joint behavior, as well as other types of cooperation. We've argued that one property of joint behavior is based on the beliefs of the agents, and zoomed in to closely examine that belief aspect, revealing a nuanced but important error in earlier thinking. We think that there may well be other aspects of cooperation that are worth digging into in this fine-grained way. Given the importance of cooperative AI, we hope that others will join us in exploring these questions.
2306.02145
Origami-Inspired Composite Springs with Bi-directional Translational-Rotational Functionalities
Many of the patterns seen in Origami are currently being explored as a platform for building functional engineering systems with versatile characteristics that cater to niche applications in various technological fields. One such pattern is the Kresling pattern, which offers unconventional mechanical properties with rich coupled translational and rotational kinematics. In this paper, we design and manufacture a composite spring inspired by the Kresling Origami pattern, which is capable of simultaneously behaving as an axial and torsional restoring element with bi-directional functionalities. We study, numerically and experimentally, the restoring behavior of this spring, its equilibria, and their bifurcations for different combinations of the design parameters. We show that the fabricated springs can have fixed, quasi-zero, or variable stiffness, and can be customized to exhibit single- or multi-stable states (symmetric and asymmetric) as needed. The proposed spring demonstrates how combining additive manufacturing with Origami principles can offer a new pathway towards the design of new structural and machine elements with versatile functionalities.
Ravindra Masana, Mohammed F. Daqaq
2023-06-03T16:18:29Z
http://arxiv.org/abs/2306.02145v1
# Origami-Inspired Composite Springs with Bi-directional Translational-Rotational Functionalities ###### Abstract Many of the patterns seen in Origami are currently being explored as a platform for building functional engineering systems with versatile characteristics that cater to niche applications in various technological fields. One such pattern is the Kresling pattern, which offers unconventional mechanical properties with rich coupled translational and rotational kinematics. In this paper, we design and manufacture a composite spring inspired by the Kresling Origami pattern, which is capable of simultaneously behaving as an axial and torsional restoring element with bi-directional functionalities. We study, numerically and experimentally, the restoring behavior of this spring, its equilibria, and their bifurcations for different combinations of the design parameters. We show that the fabricated springs can have fixed, quasi-zero, or variable stiffness, and can be customized to exhibit single- or multi-stable states (symmetric and asymmetric) as needed. The proposed spring demonstrates how combining additive manufacturing with Origami principles can offer a new pathway towards the design of new structural and machine elements with versatile functionalities. ## 1 Introduction Origami is the art of folding paper to create aesthetically pleasing three-dimensional designs. Long before its practice as a craft[1, 2], forms of origami existed in nature[3, 4, 5], and were recently uncovered by various researchers who turned to the fields of biology and physiology of plants and animals to gain further insights into building multi-functional engineering systems[6, 7, 8, 9]. The appearance of origami patterns in nature inspired such researchers to explore origami as a platform for building functional engineering systems with versatile characteristics that cater to niche applications in various technological fields[10, 11, 4, 12]. This includes the design and construction of structures with auxeticity[13], multi-stability[14], and programmable stiffness[15, 16]. Such structures have already found their way into the design of solar arrays[17], inflatable booms[4], vascular stents[18], viral traps[19, 20], wave guides[21, 22, 23], and robotic manipulators[24, 25]. Among the many different available origami patterns, some designs have attracted more attention in engineering applications. For example, the _Miura-Ori_, a rigid origami pattern1, has been used to construct three-dimensional deployable structures that have been studied and utilized in applications including space exploration[26, 17, 27], deformable electronics[28, 29], artificial muscles[30], and reprogrammable mechanical metamaterials[11, 15, 31]. The _Ron Resch_, which is a non-periodic rigid origami with unusually high buckling strength, has been used for energy absorption[12, 32]. Footnote 1: In rigid origami structures only creases exhibit deformation during deployment. The _Yoshimura_ pattern[33, 34] and the _Kresling_ pattern[35, 36, 37, 38, 39, 40] are leading examples of non-rigid origami patterns2, which have been utilized to engineer structures with unique properties. For instance, the _Kresling_ pattern has inspired the design of flexible tunable antennas [42], robot manipulators [43], wave guides [44], selectively-collapsible structures [45], vibration isolators [46], fluidic muscles [30], mechanical bit memory switches [47, 48, 49], reconfigurable antennas [50], and crawling and peristaltic robots [51, 52, 53]. 
Footnote 2: The reader is referred to [40] for a review of the literature on the subject of this paper. The Kresling pattern has also been used to build and construct coupled linear-torsional springs coined as Kresling Origami Springs (KOSs) [54]. Such springs, which take the shape of a cylindrical bellow-type structure, are created by tessellating similar triangles in cyclic symmetry and connecting them as shown in Figure 1(a). The triangles in the KOS are connected in a circular arrangement, with each triangle connected to two other triangles along two of its edges. One edge forms a mountain fold, \(b_{0}\), and the other a valley fold, \(c_{0}\). The third edges, \(a_{0}\), of the connected triangles form two parallel polygonal end planes (top and bottom planes). The design of the KOS is characterized by geometric parameters that include the number of sides, \(n\), of the parallel polygons, the radius, \(R\), of the circle that encloses them, the preloading height, \(u_{0}\), and rotation angle, \(\phi_{0}\), between the end planes. Figure 1(a) illustrates these parameters. When a Kresling Origami Spring (KOS) is subjected to an axial load or a torque, it undergoes compression or expansion, depending on the direction of the load. As a result, the two parallel polygon planes, while staying rigid, move and rotate relative to each other along a centroidal axis, as shown in Fig. 1(b). This motion causes the triangular panels to deform and store energy in the form of strain energy. Upon removal of the external load, the KOS springs back to its initial configuration, releasing the stored energy. Figure 1: (a) Schematic representation of the KOS with \(n=6\) polygon, and (b) schematic representation of the operation of the KOS under an applied load. The behavior of this Origami-inspired KOS is that of a unique restoring element with axial and torsional functionalities, which can form the foundation for many exciting engineering structures, especially those in the field of rotating machinery [55], haptics [56], and soft robotics [52, 53]. However, because of the nature of its coupled kinematics, a single KOS always results in a coupled translational-rotational motion regardless of the type of load applied to it. In other words, the two coordinates, \(u\) and \(\phi\), are always kinematically constrained, resulting in a single degree of freedom. Thus, when one end of the KOS is fixed while the other (free end) is subject to a load, either axial or torsional, the free end undergoes coupled translational-rotational motion. This coupled motion of the free end is not desirable since, in most applications, the free end is usually constrained from rotating when the load is axial, and from translating when the load is torsional. As such, it is desired that the two motions of the free end be decoupled so that the restoring force is independent for different types of loads. One way to achieve this goal is to join two KOSs end-to-end in series to form a Kresling Origami Spring Pair (KOSP). Unlike a single KOS, where the two coordinates of the free end are always coupled, the KOSP can either have coupled or decoupled motion, depending on the angle of the creases in its constituent KOSs. When the constituent KOSs are joined in a way that their creases have opposite sign slopes with respect to the horizontal connecting surface, the motion at the free end is decoupled. 
On the other hand, if the constituent KOSs are connected in a way that their creases have similar sign slopes, motion at the free end remains coupled, but with an extended range of operation. For brevity, we will refer to KOSPs with decoupled motion at the free end as \(d\)-KOSPs, while those with coupled motion will be denoted as \(c\)-KOSPs. The decoupling of the motion at the free end can be observed by inspecting the \(d\)-KOSP in Fig. 2 (a) (Green). When the bottom polygon is fixed while the upper end is subjected to a prescribed translational motion, \(u_{T}\), the top polygon does not rotate as the height of the stack is increased (note that the circular blue marker placed on the top polygon does not rotate under the axial loading). The underlying kinematics of the KOSP allows the translational motion of the free end to occur free of rotation by forcing the center polygon connecting both KOSs to undergo rotation and translation as the translation of the free end is taking place. More specifically, the kinematics is such that the module whose crease orientation matches the direction of the external rotation undergoes compression, while the other module undergoes expansion. On the other hand, when a prescribed translational motion, \(u_{T}\), is applied to the top polygon of the \(c\)-KOSP (Orange), the top end undergoes coupled rotational-translational motion with an extended range of operation compared to a single KOS. Similarly, as shown in Fig. 2 (b), when a prescribed rotational motion, \(\phi_{T}\), is applied at the top end of the \(d\)-KOSP, the connecting polygon undergoes coupled translational-rotational motion that maintains the total height of the KOSP constant. Videos demonstrating the different scenarios can be seen in supplementary video S1. Another issue with the torsional behavior of a single KOS is that it is uni-directional. This is because the KOS is much stiffer when the applied torque opposes the folding direction of the panels than when it is applied in the same direction. As such, a single KOS always has an asymmetric restoring torque around its equilibrium state, which is not a desirable attribute. On the other hand, the \(d\)-KOSP is bi-directional and can be designed to have a symmetric restoring torque around its equilibrium, which is a key advantage over the single KOS. It is therefore the goal of this paper to design and additively manufacture bi-directional tunable springs that have decoupled translational-rotational degrees of freedom at their free end. The restoring force and torque behavior of those springs will be analyzed both numerically and experimentally using functional 3D-printed springs. The number of equilibria, their stability, and bifurcations will also be analyzed as the precompression height of the \(d\)-KOSP is varied. The rest of the paper is organized as follows: Section 2 introduces a simplified truss model, which can be used to study the qualitative quasi-static behavior of the KOS, and uses it to analyze the equilibria of a single KOS. Section 3 uses a truss model to investigate the quasi-static response behavior of \(d\)-KOSPs and analyzes the KOSP's possible equilibria and their bifurcations as the stack is pre-compressed to different heights. Section 4 presents an experimental study of the quasi-static behavior of the proposed \(d\)-KOSPs and illustrates that the responses are in qualitative agreement with the numerical findings. Finally, Section 5 presents the key conclusions. 
## 2 Restoring behavior of a single KOS The restoring behavior (force and torque) of a KOS can vary greatly depending on the values of its geometric design parameters. In some cases, this can result in a single equilibrium configuration, while in others, there may be two equilibria. A qualitative understanding of the general behavior of KOSs can be obtained by using an axial truss model, in which each triangle in the KOS is represented by axially-deformable truss elements located at its edges [37, 47], as shown in Fig. 3 (a).3 Footnote 3: It is important to note that the truss model is only used in this paper as a qualitative guide for the choice of design parameters that result in different behaviors. More accurate, yet computationally expensive, models can be found in [57]. In the truss model, the relative position and orientation of the two end planes during deployment can be described by the lengths of the three edges of the triangle, \(a\), \(b\), and \(c\), in terms of the other design parameters as \[a=2R\sin\frac{\pi}{n},\qquad b=\sqrt{4R^{2}\sin^{2}\left(\frac{\phi-\frac{\pi}{n}}{2}\right)+u^{2}},\qquad c=\sqrt{4R^{2}\sin^{2}\left(\frac{\phi+\frac{\pi}{n}}{2}\right)+u^{2}}, \tag{1}\] where \(\phi\) and \(u\) are, respectively, the relative angle and vertical distance between the end planes under loading. Figure 2: Deformation of the top end of a pair of KOSPs. (a) Under uni-axial loading, and (b) under torsional loading. _(Supplementary video S1 demonstrates this operation.)_ Assuming that the base of each triangle remains undeformed during deployment, and that the panels do not buckle under compression or encounter self-avoidance at small values of \(u\), the total strain energy stored due to panel deformation can be approximated by \[\Pi=\frac{nEA}{2}\left[\frac{(b-b_{0})^{2}}{b_{0}}+\frac{(c-c_{0})^{2}}{c_{0}}\right], \tag{2}\] where \(EA\) is the axial rigidity of the truss elements. Here, \(E\) is the elastic modulus, and \(A\) is the cross-sectional area of the truss elements. Moving forward, all the results using the truss model are normalized with respect to the axial rigidity, or simply \(EA\) is set to unity. The equilibrium states \((u_{e},\phi_{e})\) of the KOS are determined by minimizing the strain energy with respect to \(u\) and \(\phi\). Specifically, \(\Pi_{u}|_{(u_{e},\phi_{e})}=\Pi_{\phi}|_{(u_{e},\phi_{e})}=0\) at any equilibrium state, where \(\Pi_{u}\) and \(\Pi_{\phi}\) represent \(\partial\Pi/\partial u\) and \(\partial\Pi/\partial\phi\), respectively. An equilibrium configuration is considered physically stable only if it corresponds to a minimum in the strain energy, which is satisfied when \(\left(\Pi_{uu}\Pi_{\phi\phi}-\Pi_{u\phi}^{2}\right)|_{(u_{e},\phi_{e})}>0\) and \(\Pi_{uu}|_{(u_{e},\phi_{e})}>0\). Figure 3 (b) illustrates the normalized potential energy function, \(\Pi\), for a KOS with the design parameters \(\phi_{0}=15^{o}\), \(u_{0}/R=1.65\), and \(n=6\). In the figure, the solid lines represent the condition \(\Pi_{\phi}=0\) and \(\Pi_{\phi\phi}>0\), i.e., the local minima lines, while the dotted lines represent the condition \(\Pi_{\phi}=0\) and \(\Pi_{\phi\phi}<0\), i.e., the local maxima lines. Similarly, the solid line with circular markers represents the condition \(\Pi_{u}=0\) and \(\Pi_{uu}>0\), and the line with square markers represents the condition \(\Pi_{u}=0\) and \(\Pi_{uu}<0\). These curves are important because they determine the route taken by the KOS during deployment. Figure 3: (a) Schematic representation of a truss model. (b) Normalized potential energy function \(\Pi\) using the truss model of a KOS with design parameters \(u_{0}/R=1.65\), \(\phi_{0}=15^{o}\), and \(n=6\). (c) Typical potential energy plots: monostable and bistable. (d) Design map for \(n=6\). 
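As a concrete numerical illustration of Eqs. (1)-(2) and this stability test, the short Python sketch below (ours; the paper's computations are done in Matlab, and more accurate models exist [57]) evaluates the normalized truss-model energy and probes for equilibria by direct minimization, classifying each one with a finite-difference Hessian. The parameter values follow Fig. 3 (b); the starting guesses and the step size are arbitrary assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def kos_energy(x, R=1.0, n=6, u0=1.65, phi0=np.radians(15.0), EA=1.0):
    """Normalized truss-model strain energy Pi(u, phi) of one KOS, Eqs. (1)-(2)."""
    u, phi = x
    def length(uu, pp, sign):   # fold length: b for sign = -1, c for sign = +1
        return np.sqrt(4*R**2*np.sin((pp + sign*np.pi/n)/2)**2 + uu**2)
    b0, c0 = length(u0, phi0, -1), length(u0, phi0, +1)
    b, c = length(u, phi, -1), length(u, phi, +1)
    return n*EA/2*((b - b0)**2/b0 + (c - c0)**2/c0)

def hessian(f, x, h=1e-4):
    """Central finite-difference Hessian of a scalar function of two variables."""
    H = np.zeros((2, 2))
    for i in range(2):
        for j in range(2):
            ei = np.zeros(2); ei[i] = h
            ej = np.zeros(2); ej[j] = h
            H[i, j] = (f(x+ei+ej) - f(x+ei-ej) - f(x-ei+ej) + f(x-ei-ej))/(4*h*h)
    return H

# Probe for equilibria from a few starting guesses; a configuration is stable
# when Pi_uu > 0 and Pi_uu*Pi_phiphi - Pi_uphi^2 > 0 (positive-definite Hessian).
for guess in [(1.6, np.radians(20.0)), (0.4, np.radians(120.0))]:
    res = minimize(kos_energy, np.array(guess), method="Nelder-Mead")
    H = hessian(kos_energy, res.x)
    stable = H[0, 0] > 0 and np.linalg.det(H) > 0
    print(f"u_e={res.x[0]:.3f}, phi_e={np.degrees(res.x[1]):.1f} deg, stable={stable}")
```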
When the KOS is subjected to uni-axial loading without any external torque, the KOS follows the curve \(\Pi_{\phi}=0\), whereas it follows the \(\Pi_{u}=0\) curve when the KOS is subjected to a torque under no axial loading. Figure 3 (c) shows the typical potential energy functions of the KOS plotted against an independent variable, \(u\) or \(\phi\). The circular markers at the bottom of the potential energy curves represent the stable equilibria. The quasi-static behavior of the KOSs is highly dependent on the design parameters. Even a small variation in these parameters can lead to significant changes in the behavior of the KOS. The KOS is capable of exhibiting various qualitative restoring force characteristics. These include linear, nonlinear, and quasi-zero stiffness, among others. Depending on the number of stable equilibria that exist, KOSs are typically classified as mono-stable (one stable equilibrium) or bi-stable (two stable equilibria); see Fig. 3 (c). Figure 3 (d) shows the design map that demarcates the design space \((u_{0}/R,\phi_{0})\) of the KOS into mono-stable and bi-stable regions for a KOS with \(n=6\). ## 3 Kresling Origami Pairs With the goal of expanding their range of applications, we focus on understanding the quasi-static behavior of a pair of KOSs connected in series as shown previously in Fig. 2. The springs can be connected in two ways: either with the slope of the creases having the same sign (\(c\)-KOSP) or with the slope of the creases having opposite signs (\(d\)-KOSP). When an external torque is applied to the top end of a \(c\)-KOSP while the other is fixed, the top end twists and compresses, resulting in coupled translational-rotational motion. On the other hand, when the same load is applied to a \(d\)-KOSP, the top end only undergoes rotational motion without any translation, effectively decoupling the translational from the rotational motions. To better understand the quasi-static behavior of the KOSP, we consider the truss model of a general, \(N\)-module KOS stack. It should be noted that the equations governing the mechanics of the truss model of the KOS, together with its assumptions, are still applicable to the constituent KOSs. Accordingly, the total strain energy stored in an \(N\)-module stack of KOSs can be written as \[\Pi_{T}=\sum_{i=1}^{N}\Pi_{i}=\sum_{i=1}^{N}\frac{n_{i}E_{i}A_{i}}{2}\left[\frac{(b_{i}-b_{i0})^{2}}{b_{i0}}+\frac{(c_{i}-c_{i0})^{2}}{c_{i0}}\right], \tag{3}\] where \[b_{i}=\sqrt{4R_{i}^{2}\sin^{2}\left(\frac{\phi_{i}-\frac{\pi}{n_{i}}}{2}\right)+u_{i}^{2}},\qquad c_{i}=\sqrt{4R_{i}^{2}\sin^{2}\left(\frac{\phi_{i}+\frac{\pi}{n_{i}}}{2}\right)+u_{i}^{2}}. \tag{4}\] Here, the subscript '\(i\)' refers to the different constituent KOSs in the stack. The variables \(\phi_{i}\) and \(u_{i}\) are, respectively, the relative angle and the vertical distance between the end planes of the \(i^{th}\) KOS module; \(u_{T}=\Sigma u_{i}\) and \(\phi_{T}=\Sigma\phi_{i}\) are, respectively, the total height of the stack and the net relative rotation between the stack's two ends; and finally, \(\Pi_{T}\) is the potential energy of the stack. In this model, clockwise rotations are considered to be positive and counterclockwise rotations negative. 
In response to any external loading along or about the longitudinal axis of the stack, the \(N\) constituent KOS modules of the stack deform, adjusting the \(\phi_{i}\)'s and \(u_{i}\)'s to counterbalance the external loading with the net restoring force/torque. The new arrangement of the KOS modules in response to the imposed external loads is such that it minimizes the total potential energy. The optimization problem can be posed mathematically in the following way:

\[\begin{split}\underset{u_{i},\phi_{i}}{\text{minimize}}& \Pi_{T}=\sum_{i=1}^{N}\Pi_{i}(u_{i},\phi_{i}),\\ \text{subject to}&\sum_{i=1}^{N}u_{i}=u_{T},\qquad\sum_ {i=1}^{N}\phi_{i}=\phi_{T},\qquad u_{i}^{min}\leq u_{i}\leq u_{i}^{max},\qquad \phi_{i}^{min}\leq\phi_{i}\leq\phi_{i}^{max},\end{split} \tag{5}\]

where \(u_{i}^{min}\), \(u_{i}^{max}\), \(\phi_{i}^{min}\), \(\phi_{i}^{max}\) are the minimum and maximum possible coordinates of the \(i^{th}\) constituent. We use Matlab optimization tools to solve this problem, in which, at each iteration for a new set of \((u_{T},\phi_{T})\), the program predicts the set of \(\phi_{i}\)'s and \(u_{i}\)'s that satisfies the constraints and evaluates the total potential energy, \(\Pi_{T}\), using Equation 3.

### Restoring behavior of \(d\)-KOSPs

KOSPs formed by stacking KOSs with opposite orientations of their individual creases, i.e. \(d\)-KOSPs, are of importance since, as described above, they offer bi-directional functionalities and decouple the motion at the free end. Thus, we dedicate this section to studying their quasi-static restoring torque behavior, equilibria, and the bifurcation of those equilibria as the pre-compressed height of the stack, \(u_{T}\), is varied. At each step of the analysis, the total height of the stack is changed, and the potential energy function, restoring torque and equilibria of the KOSP are calculated using the algorithm described in Section 3.

Figure 4: (a) Normalized restoring torque and normalized potential energy plots of a KOS module, \(u_{0}/R=1.1\), \(\phi_{0}=65^{o}\), and \(n=6\). (b) The bifurcation diagram of the twin \(d\)-KOSP. Here, the solid green lines represent the stable equilibria, the red square markers represent the unstable equilibria and the dotted lines represent the folding limit of each KOS. (c), (d), (e), (f), (g) and (h) Normalized restoring torque and normalized potential energy plots of the \(d\)-KOSP for precompressed heights of (c) \(u_{T}/R=2.2\), (d) \(u_{T}/R=1.97\), (e) \(u_{T}/R=1.5\), (f) \(u_{T}/R=1.1\), (g) \(u_{T}/R=0.85\), and (h) \(u_{T}/R=0.6\).

Figure 4 depicts such results for a twin \(d\)-KOSP consisting of two similar KOSs, each having the design parameters \(u_{0}/R=1.1\), \(\phi_{0}=65^{o}\), and \(n=6\), and the restoring torque behavior shown in Fig. 4 (a). As can be clearly seen, the restoring torque of the single KOS is asymmetric bi-stable with a uni-directional tendency. As evident in the bifurcation diagram shown in Fig. 4 (b), the \(d\)-KOSP has a single equilibrium point at \(\phi_{T}=0\) for pre-compressed heights \(1.98<u_{T}/R<2.5\) (solid green line). Thus, the potential energy function is mono-stable and symmetric, as shown in Fig. 4 (c) for \(u_{T}/R=2.2\). The restoring torque of the \(d\)-KOSP is nearly linear, which is ideal for applications where linearity and symmetry under loading are key for performance. At \(u_{T}/R=2\), the potential energy becomes almost flat for any prescribed rotation near the equilibrium point, Fig. 4 (d).
Thus, the stiffness becomes nearly zero around the equilibrium point, resulting in a quasi-zero-stiffness (QZS) behavior. Such spring characteristics are ideal for the design of broadband vibration absorbers and energy harvesters [58, 59].

Near \(u_{T}/R=1.95\), the only stable equilibrium point of the \(d\)-KOSP loses stability through a super-critical pitchfork bifurcation (Sup-crit. P) and gives way to two stable equilibria on either side of the original equilibrium. Thus, the KOSP becomes of the symmetric bi-stable type. This is evident in the shape of the potential energy function shown in Fig. 4 (e) for \(u_{T}/R=1.5\). It can be clearly seen that the potential energy function has two minima separated by a local maximum at \(\phi_{e}=0\), which are characteristics of a symmetric bi-stable potential. The associated restoring force exhibits a negative stiffness at \(\phi_{e}=0\) and positive stiffness for large values of \(\phi_{T}\). Such bi-stable springs are key to the design of bi-stable mechanical switches and energy harvesters.

As \(u_{T}\) is decreased further, the two stable equilibrium branches diverge and the potential wells get deeper, causing the magnitude of the negative local stiffness to increase. As such, it becomes more difficult to force the spring to move from one of its equilibria to the other. At precisely \(u_{T}/R=1.1\), one of the KOSs in the stack becomes fully compressed, while the other is at its undeformed state. This is usually referred to in the literature as a self-contact point or as panel self-locking. The result is that the potential energy and the stiffness increase sharply and suddenly at these points, as can be clearly seen in Fig. 4 (f). In essence, these points represent the limit of the \(d\)-KOSP operation. Any prescribed rotation of the \(d\)-KOSP beyond this point would only deform the panels, a process which requires high strain energy.

Decreasing \(u_{T}\) further below \(u_{T}/R=1.1\), the non-trivial equilibria continue to follow the orange dotted curve which marks the self-contact points of the KOSP. At \(u_{T}/R\approx 0.97\), the unstable equilibrium point represented by the square markers in Fig. 4 (b) regains stability through a sub-critical pitchfork bifurcation (Sub-crit. P), and the KOSP becomes tri-stable, as can be seen in Fig. 4 (g). The KOSP remains tri-stable for a very short range of \(u_{T}\), before the two non-zero stable solutions collide with the unstable solution and destroy each other in a fold bifurcation (Fold) at \(u_{T}/R\approx 0.74\). Beyond this point, the KOSP becomes mono-stable again, with the trivial position, \(\phi_{e}=0\), being the only equilibrium point, as can be seen in Fig. 4 (h).

In Fig. 5 (a,b), we generalize the bifurcation diagram shown in Fig. 4 (b) into bifurcation maps that demarcate the design space of \(u_{T}/R\) and \(u_{0}/R\) into different domains based on the number of stable equilibria that the twin \(d\)-KOSP possesses; i.e., which design parameters lead to mono-stable behavior, and which ones lead to a bi- or tri-stable behavior. The different colored regions represent the number of stable equilibria in that part of the design space, as labeled in the figure. Those maps are generated for modules with two different values of \(\phi_{0}\), namely \(\phi_{0}=45^{o}\) and \(\phi_{0}=65^{o}\).
For the most part, we can see that the mono-stable behavior is the easiest to realize, followed by the bi-stable behavior, then the tri-stable one, and that the region of design parameters leading to the tri-stable behavior shrinks when \(\phi_{0}\) is increased. Moreover, in both of the cases, tri-stability is very difficult to achieve when the KOS modules are mono-stable as compared to when the KOSP is designed using bi-stable KOS modules. Figures 5 (c) and (d) show similar maps for \(u_{0}/R=1.1\) and \(1.65\), respectively, with \(\phi_{0}\) being the bifurcation parameter. It can be clearly seen that larger values of \(u_{0}/R\) allow for larger regions in the design space to construct bi- and tri-stable KOSPs.

Figure 5: Influence of the KOS module design parameters on the number of stable equilibria of the twin \(d\)-KOSP, (a) \(\phi_{0}=45^{o}\), (b) \(\phi_{0}=65^{o}\), (c) \(u_{0}/R=1.1\) and (d) \(u_{0}/R=1.65\).

One interesting observation resulting from the numerical analysis is that the \(d\)-KOSP can be designed to become mono-stable, bi-stable or even tri-stable irrespective of the type of stability of its constituents. The symmetric response of the twin-module \(d\)-KOSP is a feature that is most often desired in designing springs, but it is not a constraint if, instead, an asymmetric restoring force response is desired. Asymmetry can be easily achieved by constructing the \(d\)-KOSP using two different KOSs. For instance, in Fig. 6 (b), we plot the bifurcation diagram for a \(d\)-KOSP constructed by combining the two different bi-stable KOSs whose restoring behavior is shown in Figs. 6 (a,c); namely, KOS1: \(u_{0}/R=1.65\), \(\phi_{0}=45^{o}\), and \(n=6\), and KOS2: \(u_{0}/R=1.875\), \(\phi_{0}=60^{o}\), and \(n=6\). A first glance reveals that the bifurcation diagram is more complex and is no longer symmetric around \(\phi_{e}=0\). At the uncompressed height, i.e. \(u_{T}/R=1.65+1.875=3.525\), there is a net offset in the equilibrium rotation angle of \(60^{o}-45^{o}=15^{o}\) relative to the other end of the KOSP. The potential energy function is mono-stable despite both constituents being bi-stable, and the restoring force is of the nonlinear hardening type, as shown in Fig. 6 (d). When the KOSP is pre-compressed, the force deforms the two springs differently since they have different stiffnesses. The softer spring, here KOS2, undergoes compression and rotation first under the applied load. In the process of compression, KOS2 gains stiffness up to the point \(u_{T}/R=3.2\), where it becomes stiffer than KOS1. At this point, KOS1 starts to deform and a new equilibrium point is born, causing the potential energy to become bi-stable and asymmetric with two equilibrium angles occurring at \(\phi_{e1}=-28^{o}\) and \(\phi_{e2}=62^{o}\); see Fig. 6 (e), obtained at \(u_{T}/R=3.152\). The bi-stable asymmetric behavior persists down to \(u_{T}/R\approx 2.5\), where the spring behavior becomes nearly of the asymmetric quasi-zero-stiffness type; see Fig. 6 (f). Subsequently, the behavior of the springs becomes very complex, as shown in Fig. 6 (g) and (h) for \(u_{T}/R=2.17\) and \(u_{T}/R=1.67\), respectively. The bifurcation diagram also reveals that the tri-stable behavior cannot be achieved using this combination of spring modules.

## 4 Experiments

### Fabrication

Numerical simulations have revealed that the modularity of the KOS can be used to construct functional KOSPs with unique and made-to-order restoring characteristics.
The operating range can be increased and the restoring force/torque further tuned by stacking a larger number of unit springs, \(N\geq 2\). To employ these desirable characteristics in a realistic environment, such springs must be durable, and their manufacturing process must be systematic and repeatable. Thus, relying on paper folding is obviously not the optimal approach. In a recent article [54], we used 3D printing to produce KOS modules, demonstrating exceptional functionality, repeatability, and high durability. In the proposed design, the basic triangles of each KOS were modified to allow for easy folding and stretching at the panel junctions while still retaining enough stiffness to conform to the Kresling origami pattern and withstand loading.

Figure 6: (a) Normalized restoring torque and normalized potential energy plots of KOS1: \(u_{0}/R=1.65\), \(\phi_{0}=45^{o}\), and \(n=6\). (b) Bifurcation diagram of the \(d\)-KOSP (solid green line represents stable solutions while square markers represent unstable solutions). (c) Normalized restoring torque and normalized potential energy plots of KOS2: \(u_{0}/R=1.875\), \(\phi_{0}=60^{o}\), and \(n=6\). (d), (e), (f), (g) and (h) Normalized restoring torque and normalized potential energy plots of the \(d\)-KOSP for precompressed heights of (d) \(u_{T}/R=3.525\), (e) \(u_{T}/R=3.152\), (f) \(u_{T}/R=2.50\), (g) \(u_{T}/R=2.17\), and (h) \(u_{T}/R=1.67\).

The fabrication process used the Stratasys J750 3D printer with the polyjet method, which utilized two different materials for each panel. The central rigid core of each panel was made of a rigid plastic polyjet material called _Vero_, while the outer frame was made of a flexible rubber-like polyjet material called _TangoBlackPlus_. The flexibility of the outer frame enables folding and stretching at the interfaces. Fig. 7 (a) shows an example of the fabricated KOS module. The new design further introduced additional geometric parameters, namely the width of the flexible material, \(w\), and the thickness of the panels, \(t\). These are important parameters that provide additional freedom in designing the KOS modules. Each KOS is reinforced using two end plates with circular holes that are concentric with the longitudinal axis of the KOS. These plates are added to increase the stability of the KOS and to prevent damage under unwarranted non-axial loads. The holes allow air to escape during deployment. Using the manufacturing approach proposed in our previous article [54], we construct KOSPs for experimental testing. Similar and/or different KOSs are used interchangeably to form various stack combinations and different crease orientations.

### Experimental Testing

The experimental portion of this study involves two phases: axial testing and torsional testing. The axial tests are performed to determine the restoring force behavior of the KOS under compressive and tensile loads. These tests are conducted using an Instron Dual Column 5960 universal testing machine. A controlled fixed-rate displacement of 0.2 mm/s is applied to the top end of the KOS while the bottom end is placed on a specially designed platform that can freely rotate about a common centroidal axis, as shown in Fig. 7 (a). The restoring force is measured using a load cell, and the rotation of the bottom end is tracked using digital image correlation (DIC) tools.
During the torsional tests, the KOS is subjected to a controlled rate of cyclical torque, including clockwise and anticlockwise rotations, using an Instron MicroTorsion MT1 machine, as depicted in Fig. 7 (b). One end of the KOS is clamped using a chuck that is connected to a motor, while the other end is placed on a sliding bearing that allows the longitudinal motion to be free while the rotary motion is restrained. For the torsional tests of the KOSPs, we require the total length of the KOSP, \(u_{T}\), to be fixed. Thus, we remove the sliding bearing and fix that end, preventing it from rotating or translating. A torque cell is used to measure the applied torque at the fixed end, and the relative longitudinal displacement is monitored using DIC tools. It is important to note that the results presented in this work are based on a controlled rotational rate of \(20^{o}/min\); other rotational rates ranging from \(10^{o}/min\) to \(100^{o}/min\) were also tested, but the effect of the rate on the response was found to be negligible. Furthermore, the entire test setup is placed in the horizontal plane to eliminate the influence of gravity on the quasi-static responses, which is crucial since the KOS is free to slide during axial testing.

Figure 7: (a) Experimental setup for uni-axial testing of the KOS. (b) Experimental setup for torsional testing of the KOS. (c) Experimental setup for torsional testing of the KOSP.

It is important to mention that five samples with the same design parameters are tested in both the axial and torsional testing of the KOS modules, and the averages of the responses under compression, tension, clockwise and anticlockwise rotation are recorded. The total potential energy is calculated by integrating the measured restoring force across the prescribed displacement in the case of uni-axial testing, and by integrating the measured torque over the prescribed rotation in the case of torsional testing.

### Experimental Results

We start by testing the quasi-static torsional behavior of a KOS with the geometric parameters \(u_{0}/R=1.875\), \(\phi_{0}=60^{o}\), \(R=15\) mm, \(n=6\), \(t=0.75\) mm and \(w=1.5\) mm, as depicted in Fig. 8 (a). Our goal is to first understand the behavior of the unit cell forming the KOSP. To achieve this objective, we tested the KOS samples using the Instron torsion testing machine, which was configured to maintain zero axial loading on the KOSs throughout the test. During the test, we prescribed the rotation angle, \(\phi\), at one end of the module and recorded both the torque and the instantaneous height of the structure, \(u\). Positive rotation caused compression in the KOS module, while negative rotation resulted in expansion. To prevent the module from suffering permanent deformation or damage, we determined the limits of rotation, namely the stretch limit and the compression limit. Here, the stretch limit, \(\phi_{s}\), refers to the point at which \(\partial u/\partial\phi\) becomes large in uni-axial testing, while the compression limit, \(\phi_{c}\), is the point before delamination begins to occur. For the considered KOS, \(\phi_{s}=48.5^{o}\) while \(\phi_{c}=147.5^{o}\), clearly demonstrating one of the key disadvantages of the single KOS design, which lies in the fact that it reaches its stretch limit much faster than its compression limit due to its inherent kinematics. Figure 8 (a) illustrates the restoring torque and the calculated potential energy function for this KOS.
It is evident that the restoring force is asymmetric, with the KOS exhibiting a single equilibrium point at the undeformed state \(\phi_{e}=60^{o}\), where the restoring torque is zero. As the angle is increased, the restoring torque monotonically increases up to \(\phi=71^{o}\), after which it starts to decrease, resulting in negative torsional stiffness up to \(\phi=89^{o}\). Thereafter, the KOS loses much of its load-carrying ability and the potential energy forms a plateau which extends up to \(\phi=110^{o}\). Note that the restoring torque approaches but never crosses zero; thus there is no other equilibrium point and the spring is mono-stable. On rotating further, the KOS begins to get stiffer and the triangular panels begin to interact and avoid each other near \(\phi=130^{o}\). On the other hand, upon expanding the KOS from its undeformed state, the stiffness rapidly increases and the KOS quickly reaches its stretch limit of \(\phi_{s}=48.5^{o}\).

Next, we investigate the restoring behavior of a twin \(d\)-KOSP constructed using a pair of the tested KOS, \(u_{T}/R=2\times 1.875=3.75\). Following the testing procedure described in Section 4.2, the restoring torque is measured as shown in Fig. 8 (b). It is clearly evident that, for this value of \(u_{T}/R\), the restoring torque of the spring is nearly symmetric and linear around \(\phi=0\), despite each KOS forming the stack being highly asymmetric and nonlinear. When the spring is precompressed to a height of \(u_{T}/R=3.36\), the spring becomes bi-stable with two equilibrium points (\(\phi_{e1}=-53^{o}\) and \(\phi_{e2}=52^{o}\)), as shown in Fig. 8 (c). The restoring force is bi-stable and nearly symmetric despite the constituents being highly asymmetric and mono-stable. Further decrease of the height to \(u_{T}/R=3.05\) decreases the depth of the potential wells, and the restoring force becomes nearly of the QZS type, as shown in Fig. 8 (d).

To generate the full bifurcation diagram for different values of \(u_{T}/R\) without having to repeat this experiment a large number of times, we use a numero-experimental interpolation approach. The approach employs an algorithm similar to the one described in the numerical analysis, Equation 5, to minimize the change in the total potential energy of the KOSP structure during its operation. However, here, instead of evaluating the potential energy at the iteration variables \((u_{T},\phi_{T})\), we interpolate the potential energy of the modules from the experimental data used in Fig. 8 (a). As with the simulation of truss KOSPs, we initially set the length of the stack, \(u_{T}\), to a certain value and then iteratively change the rotation angle, \(\phi_{T}\). The algorithm then makes an initial calculated guess (feasible values) of \(\phi_{1}\) and \(\phi_{2}\), such that they are bounded within the stretch and compression limits of the modules and also satisfy \(\phi_{1}+\phi_{2}=\phi_{T}\). Using the experimental data, the program evaluates \(u_{1}\) and \(u_{2}\) and verifies that \(u_{1}+u_{2}=u_{T}\). Once this is verified, the program interpolates \(\Pi_{1}\) and \(\Pi_{2}\) for \(\phi_{1}\) and \(\phi_{2}\) and evaluates the net potential energy \(\Pi_{T}\). The optimization tool then uses fmincon in Matlab to iteratively search for the minimum of \(\Pi_{T}\) by moving towards the feasible region of the objective function. The potential energy functions of the KOSP obtained using the algorithm are compared to their experimental counterparts in Fig. 9 (a) for different values of \(u_{T}/R\).
Here, the solid lines represent the actual experimental findings, while the dotted lines represent those attained using the algorithm described above. It can be clearly seen that there is a general qualitative agreement, which could be used to predict an experimental bifurcation diagram of the KOSP as a function of \(u_{T}/R\). This diagram is shown in Fig. 9 (b), demonstrating the ranges of \(u_{T}/R\) for which the KOSP has a mono- versus bi- or even tri-stable behavior. The surface plot shows the potential energy (strain energy), and the stable and unstable equilibria are represented by a green line and red colored square markers, respectively. The white region represents the values of \(u_{T}\) and \(\phi_{T}\) that lie outside the operation range of the KOSP, i.e. the stretch and compression limits of the two constituent modules. Generally, we can see that a \(d\)-KOSP constructed from an identical pair of KOSs possesses a symmetric potential function that allows for a similar response to both clockwise and counter-clockwise rotations, which is not possible with a single KOS. Additionally, \(d\)-KOSPs exhibit an increased range of operability in both rotation and longitudinal deployment. The stiffness of \(d\)-KOSPs can be tuned or programmed, including quasi-zero stiffness, providing great control over their behavior for specific applications.

Figure 8: (a) Experimental restoring torque and the associated potential energy plots under torsion tests of a single KOS module with design parameters: \(u_{0}/R=1.875\), \(\phi_{0}=60^{o}\), \(n=6\). (b,c,d) Restoring torque and potential energy plots of a twin \(d\)-KOSP with (b) \(u_{T}/R=3.75\), (c) \(u_{T}/R=3.36\), and (d) \(u_{T}/R=3.05\).

Figure 10 shows the quasi-static response behavior for a KOSP consisting of two different KOSs. KOS 1 is the mono-stable spring studied in Fig. 8, while KOS 2 is bi-stable with \(u_{0}/R=1.65\), \(\phi_{0}=45^{o}\), \(R=15\) mm and \(n=6\). The restoring torque and potential energy of KOS 2 are as shown in Fig. 10 (a). The two equilibria of KOS 2 occur at \(\phi_{e1}=45^{o}\) and \(\phi_{e2}=116^{o}\). The \(d\)-KOSP constructed using those springs is experimentally tested at different values of \(u_{T}/R\), and the restoring torque responses are recorded. Fig. 10 (b) shows those responses for the uncompressed height \(u_{T}/R=3.525\), where the potential energy function is mono-stable but asymmetric with a single equilibrium point occurring near \(\phi_{e}=15^{o}\). The restoring force is weakly nonlinear with a slight hardening behavior near the stretch and compression limits. When the KOSP is precompressed to a height of \(u_{T}/R=3.2\), the KOSP becomes bi-stable asymmetric, with the left potential well being much deeper than the right one. The right potential well becomes shallower as the KOSP is precompressed further, up to the point where the second minimum in the potential energy function disappears near \(u_{T}/R=2.7\), Fig. 10 (d). In this case, the restoring force becomes nearly QZS around \(\phi_{T}=25^{o}\). As the KOSP is precompressed further to \(u_{T}/R=2.5\), the spring becomes of the QZS type but with asymmetric characteristics, Fig. 10 (e). Figure 10 (f) depicts the interpolated bifurcation diagram, which reveals that the \(d\)-KOSP will always exhibit an asymmetric response, with the characteristics being of the mono-stable type for large and small precompression heights and bi-stable for intermediate ones.

Figure 9: (a) Potential energy function of the KOSP at different precompressed heights. Solid lines represent experimental results, dotted lines represent simulated results. (b) Bifurcation diagram of the twin module \(d\)-KOSP simulated using the experimental data from Fig. 8 (a). Solid green lines represent the stable equilibria, while the square markers represent the unstable ones. The contour map represents the potential energy of the stack measured in N-mm.

Figure 10: (a) Restoring torque and the potential energy plots under torsion tests (experimental) of a KOS module with design parameters KOS2: \(u_{0}/R=1.65\), \(\phi_{0}=45^{o}\), \(n=6\); (b,c,d,e) Restoring torque and potential energy plots (experimental) of the KOSP at precompressed heights (b) \(u_{T}/R=3.525\), (c) \(u_{T}/R=3.2\), (d) \(u_{T}/R=2.7\), (e) \(u_{T}/R=2.5\); (f) Bifurcation diagram of the \(d\)-KOSP. Solid green lines represent the stable equilibria, while the square markers represent the unstable ones. The contour map represents the potential energy of the stack measured in N-mm.
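The numero-experimental interpolation of Section 4.3 can be sketched compactly. The snippet below is a rough Python sketch under simplifying assumptions: each module's torsion test provides tabulated \(u_{i}(\phi_{i})\) and \(\Pi_{i}(\phi_{i})\), and the height constraint is solved directly by root-finding rather than with fmincon as in the paper. Sweeping \(\phi_{T}\) at fixed \(u_{T}\) then traces curves like those in Fig. 9 (a), and repeating over a grid of \(u_{T}\) values yields an interpolated bifurcation map. All names are illustrative.

```python
import numpy as np
from scipy.interpolate import interp1d
from scipy.optimize import brentq

def interpolants(phi, u, Pi):
    """Interpolants u(phi) and Pi(phi) for one module from its torsion-test data."""
    return interp1d(phi, u), interp1d(phi, Pi)

def stack_energy(phi_T, u_T, mod1, mod2, phi1_bracket):
    """Pi_T at (u_T, phi_T): find the split phi_1 + phi_2 = phi_T that also
    satisfies u_1(phi_1) + u_2(phi_2) = u_T, then add the module energies."""
    u1, Pi1 = mod1
    u2, Pi2 = mod2
    g = lambda p1: float(u1(p1) + u2(phi_T - p1)) - u_T  # height-constraint residual
    # the bracket must lie within the stretch/compression limits and straddle a root
    p1 = brentq(g, *phi1_bracket)
    return float(Pi1(p1) + Pi2(phi_T - p1))
```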
## 5 Conclusion

This paper focuses on the use of serially connected Kresling Origami Springs (KOS) to design bi-directional programmable springs that offer decoupled translational and rotational degrees of freedom. The behavior of these springs is investigated both numerically, using a truss model, and experimentally, using functional 3D-printed springs. The study reveals that, by varying the precompressed height of the KOSPs, interesting bifurcations of the static equilibria emerge, leading to mono-, bi-, tri-stable, and QZS restoring elements with either symmetric or asymmetric restoring behavior. This is unlike single KOSs, whose translational and rotational degrees of freedom are always coupled and which cannot be designed to have a tri-stable behavior or to possess a symmetric restoring force behavior. The availability of such springs opens up new avenues for developing restoring elements with programmable responses to external stimuli, which could lead to innovative applications in various fields, such as robotics and energy storage [47, 48]. The results of this study contribute to the expanding body of knowledge on the potential use of origami principles for engineering applications. The findings also provide new insights into the behavior of KOSPs and offer a foundation for future research in developing multifunctional materials using the Kresling origami pattern.

## Acknowledgements

We thank Core Technology Platforms at New York University Abu Dhabi (NYUAD) for providing their resources and support during fabrication and testing of the KOSs. Parts of this work were supported by the NYU-AD Center for Smart Engineering Materials, which is under the full support of Tamkeen under NYUAD RRC Grant No. CG011.

## Data Availability

The data that support the findings of this study are available from the corresponding author upon reasonable request.
2305.04064
Bayesian sample size determination for multi-site replication studies
An ongoing "reproducibility crisis" calls into question scientific discoveries across a variety of disciplines ranging from life to social sciences. Replication studies aim to investigate the validity of findings in published research, and try to assess whether the latter are statistically consistent with those in the replications. While the majority of replication projects are based on a single experiment, multiple independent replications of the same experiment conducted simultaneously at different sites are becoming more frequent. In connection with these types of projects, we deal with testing heterogeneity among sites; specifically, we focus on sample size determination suitable to deliver compelling evidence once the experimental data are gathered.
Konstantinos Bourazas, Guido Consonni, Laura Deldossi
2023-05-06T14:54:11Z
http://arxiv.org/abs/2305.04064v1
###### Abstract

An ongoing "reproducibility crisis" calls into question scientific discoveries across a variety of disciplines ranging from life to social sciences. Replication studies aim to investigate the validity of findings in published research, and try to assess whether the latter are statistically consistent with those in the replications. While the majority of replication projects are based on a single experiment, multiple independent replications of the same experiment conducted simultaneously at different sites are becoming more frequent. In connection with these types of projects, we deal with testing heterogeneity among sites; specifically, we focus on sample size determination suitable to deliver compelling evidence once the experimental data are gathered.

_Keywords_: analysis prior, Bayesian design, Bayes factor, design prior, heterogeneity.

**Bayesian sample size determination for multi-site replication studies**

Konstantinos Bourazas\({}^{1}\), email: [email protected]

Guido Consonni\({}^{2}\), email: [email protected]

Laura Deldossi\({}^{2}\), email: [email protected]

\({}^{1}\)Department of Mathematics and Statistics and KIOS Research and Innovation Center of Excellence - University of Cyprus, Nicosia, Cyprus

\({}^{2}\)Department of Statistical Sciences - Università Cattolica del Sacro Cuore, Milan, Italy.

## 1 Introduction

Over the last few decades it has emerged that a significant proportion of published results could not be reproduced (Ioannidis, 2005), so that a "reproducibility crisis" (Baker, 2016) in empirical science has ensued. This has been pointed out and investigated both in the social sciences, especially psychology and economics (see among others Hensel, 2021), as well as in the life sciences (Zwanenburg, 2019). There are multiple reasons for this, ranging from publication bias (Francis, 2012) to poor experimental designs (Pashler and Harris, 2012) and questionable statistical methodology (Wasserstein and Lazar, 2016). Simons (2014) stated that "reproducibility is the cornerstone of science", as it establishes the validity of published research. Over the years there have been numerous contributions to the analysis of replication studies; see for instance Bonett (2012), Zwaan et al. (2018), Hedges and Schauer (2019a), Hou et al. (2020). Attempts from the Bayesian perspective include Wagenmakers et al. (2015), Etz and Vandekerckhove (2016) and Marsman et al. (2017). Still within the Bayesian approach, Verhagen and Wagenmakers (2014) introduced the replication Bayes factor, which was implemented for fixed ANOVA designs by Harms (2019). Additionally, Pawel and Held (2020) investigated replication from a predictive point of view, while Held (2020) and Held et al. (2022) exploited reverse-Bayes ideas to assess replication success. Finally, Muradchanian et al. (2021) provided a comparative study of frequentist and Bayesian indicators of replication success.

Most replication studies (RS) refer to a single replication, but interest in conducting _multiple_ RS simultaneously is growing, together with new metrics as in Mathur and VanderWeele (2020). In this setting, the analysis often focuses on between-study variation, especially when the issue of heterogeneity across multiple experimental replications is of primary interest; see Klein (2014) and Hedges and Schauer (2019a, 2019b and 2021). In a more general framework, Gronau et al.
(2021) proposed a Bayesian model-averaged procedure to test both the presence of an effect size as well as that of the heterogeneity.

An important aspect to underline is that very seldom do investigations offer statistical arguments regarding the choice of their design, especially whether it was adequate to yield _ex ante_ compelling conclusions, e.g. in terms of power (hypothesis tests) or standard error (estimation). Sample size determination (SSD), both in terms of the number of sites to be included in the study as well as the number of subjects within each site, is an important issue for a successful replication design. Wong et al. (2021) and Bonett (2021) provided reviews regarding the design and analysis of RS, while Simon (1999) and Bayarri and Mayoral (2002a and 2002b) investigated the design of a replication study using a Bayesian approach. In general, one can identify two types of design for RS: _constrained_, where the original study is taken into consideration and the problem is typically framed as a comparison between the original and the subsequent replication study; or _unconstrained_, where the original study is excluded from analysis. Hedges and Schauer (2021) proposed an unconstrained multi-site design for SSD; see also Fedorov and Jones (2005) and Harden and Friede (2018) in the context of clinical trials.

In this paper we consider a Bayesian unconstrained design of multi-site replication experiments to test the presence of heterogeneity among sites, and provide a method for SSD using the Bayes factor (BF) as a measure of evidence (Kass and Raftery, 1995). One can regard the BF as the Bayesian analogue of the frequentist likelihood ratio test for hypothesis testing, where marginal likelihoods, as opposed to maximized likelihoods, are used. An important feature of the BF is that it can compare any pair of hypotheses (models), and is thus not restricted to nested models. Additionally, it provides evidence for each of the two competing hypotheses, unlike the frequentist approach, where non-significant results cannot be translated as support for the null hypothesis. For further insights into the BF and its use see Dienes (2014) and Hoijtink et al. (2019).

This paper is structured as follows. In Section 2 we present a hierarchical model which accounts for heterogeneity and identify the sub-model representing no variation; next we discuss the important distinction between _analysis_ and _design_ prior in Bayesian SSD, derive the Bayes factor for testing heterogeneity among sites, and show how to approximate its prior predictive distribution using suitable algorithms. In Section 3 we introduce three categories of evidence based on the Bayes factor and the corresponding prior probabilities; then we introduce our SSD criterion and produce optimal sample sizes under a few scenarios chosen to allow a comparison with an alternative method. Finally, Section 4 provides a short discussion, highlighting a few points that deserve further work. Technical details on the derivation of the Bayes factor are provided in the Appendix. Sensitivity analysis and further results on alternative simulation scenarios are available as online Supplementary material to this paper along with R-code.

## 2 Models, priors and Bayes factor

In this section we describe a hierarchical model suitable to describe heterogeneity, along with the analysis prior adopted to derive the BF for testing variation across sites.
In addition, we discuss the design prior required to obtain the (prior) predictive distribution of the BF under the competing hypotheses, and finally present two algorithms to simulate values of the BF from its predictive distribution.

### A hierarchical model to account for heterogeneity

Consider \(m\) independent sites and let \(t_{j}\) denote the effect size estimator for site \(j\), \(j=1,2,...,m\). Adopting a meta-analytic framework, we assume that \(t_{j}\) is approximately normally distributed, centred on the site-specific effect size \(\mu_{j}\), with variance equal to \(\sigma_{j}^{2}=\sigma^{2}/n_{j}\), where \(\sigma^{2}\) is the unit variance. When the sample sizes are moderately large, \(\sigma^{2}\) can be assumed known because it can be accurately estimated. To simplify the exposition for SSD we assume a fully _balanced design_ (see for instance Fedorov and Jones, 2005), wherein the same number of subjects is enrolled in each site, so that \(n_{j}=n,\ \forall j\). Independently for each \(j=1,\ldots,m\), we consider the hierarchical model

\[\begin{split}& t_{j}|\mu_{j}\sim N(\mu_{j},\sigma^{2}/n)\\ &\mu_{j}|\mu,\tau^{2}\sim N(\mu,\tau^{2}),\end{split} \tag{1}\]

where \(\mu\) is the overall mean effect size and \(\tau^{2}\) represents heterogeneity among sites; see Figure 1 for a visualisation based on a Directed Acyclic Graph (DAG). We will deal with priors for \(\mu\) and \(\tau^{2}\) in Subsection 2.2.

Figure 1: DAG of hierarchical model (1)

Integrating out \(\mu_{j}\) in (1) we obtain

\[\mathcal{M}_{1}:t_{j}|\mu,\tau^{2}\sim N(\mu,\tau^{2}+\sigma^{2}/n). \tag{2}\]

Setting \(\tau^{2}=0\) in (2) gives rise to the model of no heterogeneity

\[\mathcal{M}_{0}:t_{j}|\mu\sim N(\mu,\sigma^{2}/n). \tag{3}\]

We will compare models \(\mathcal{M}_{0}\) and \(\mathcal{M}_{1}\) in Subsection 2.3 using the Bayes factor (BF).

### Prior distributions

Consider first \(\tau^{2}\). Its plausible range of values is better appreciated in relation to the unit variance \(\sigma^{2}\). As a consequence, and reverting to the standard deviation scale, we work in terms of the _relative heterogeneity_ \(\gamma=\tau/\sigma\). Recall that an experimental design is a _prospective_ enterprise and is meant to achieve a desired level of inferential performance before the data come in. This translates to an adequate sample size that has to be determined. Unlike frequentist power analysis, which is _conditional_ on a fixed value of the parameter \(\gamma\) to be tested, the Bayesian approach requires a full prior on the parameter space. In this context it is common to distinguish between two types of priors: the analysis prior, which we label as \(h_{a}(\gamma)\), and the design prior, denoted by \(h_{d}(\gamma)\); see O'Hagan and Stevens (2001) and O'Hagan et al. (2005), who used them in the setting of clinical trials. The _analysis_ prior is used to make inference once the data come in and, in our case, it will be used to evaluate the Bayes factor. In principle, it should be weakly informative, so that it can be broadly acceptable, and yet proper, so that the BF can be unambiguously evaluated. On the other hand, the _design_ prior should be an informative prior, representing the position of the researcher about the size of the heterogeneity that is expected or is deemed interesting to detect.

We first consider the analysis prior \(h_{a}(\gamma)\). Based on Röver et al.
(2021), who provided a wide review of weakly informative priors for heterogeneity, we assume a Half-\(t(\nu_{\gamma},\ \sigma_{\gamma})\) distribution, where \(\nu_{\gamma}\) is the degrees of freedom and \(\sigma_{\gamma}\) is the scale parameter. The Half-\(t(\nu_{\gamma},\ \sigma_{\gamma})\) is the distribution of the absolute value of a Student-t variate centered at zero with degrees of freedom \(\nu_{\gamma}\) and scale \(\sigma_{\gamma}\). Its density is monotonically decreasing, with a heavy upper tail for small values of \(\nu_{\gamma}\). Regarding the choice of the hyper-parameters, we set \(\nu_{\gamma}=4\); see Röver et al. (2021). Moreover we set \(\sigma_{\gamma}=1/7\), so that the 95% quantile of the Half-t distribution is about 0.4, a value broadly in line with the suggestions of Hedges and Schauer (2021) for a variety of applied domains.

We now address the issue of the design prior \(h_{d}(\gamma)\), which we take to be a Folded-t distribution (Psarakis and Panaretos, 1990). The Folded-t is the distribution of the absolute value of a variate having a non-standardized t-distribution with location \(\mu_{\gamma}\), scale \(\sigma_{\gamma}\) and degrees of freedom \(\nu_{\gamma}\); when \(\mu_{\gamma}=0\) it reduces to the Half-t. Regarding the hyper-parameters, we let \(\nu_{\gamma}=4\) and set \(\mu_{\gamma}=0.2\); the latter choice is based on the evaluation of plausible values for the relative heterogeneity \(\gamma\) discussed in Hedges and Schauer (2021). Adding a moderate amount of uncertainty around the location, such as \(\pm 0.05\), we tune \(\sigma_{\gamma}=1/55\) to achieve a credible interval of 95% coverage for the region \([0.15,0.25]\). Figure 2 provides the density plot of the analysis and the design prior used in the current set-up. Sensitivity analysis is carried out in the Supplementary material for alternative configurations of the hyper-parameters.

Figure 2: The analysis prior \(h_{a}(\gamma)\) (dotted line) and the design prior \(h_{d}(\gamma)\) (solid line), along with their 95% credible intervals (highlighted regions).

Consider now the prior for \(\mu\), appearing both in model \(\mathcal{M}_{1}\) and \(\mathcal{M}_{0}\); see equations (2) and (3) respectively. The overall mean represents a nuisance parameter when testing heterogeneity, and we take it to be independent of \(\gamma\) _a priori_. We suggest using the Jeffreys prior \(\pi(\mu)\propto 1\). Despite being improper, it represents a suitable choice for the derivation of the Bayes factor in our case, because \(\mu\) is a parameter common to both models.
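To fix ideas, both priors are easy to simulate by taking absolute values of scaled or shifted Student-t variates. The sketch below is ours (the paper's own R-code is in the Supplementary material); it draws from the Half-t analysis prior and the Folded-t design prior and checks the quantile calibrations stated above.

```python
import numpy as np

rng = np.random.default_rng(1)

def rhalf_t(size, nu=4, scale=1/7):
    """Half-t(nu, scale): |X| with X ~ scale * t(nu)."""
    return np.abs(scale * rng.standard_t(nu, size))

def rfolded_t(size, mu=0.2, scale=1/55, nu=4):
    """Folded-t(mu, scale, nu): |X| with X ~ mu + scale * t(nu)."""
    return np.abs(mu + scale * rng.standard_t(nu, size))

gamma_a = rhalf_t(10_000)    # draws from the analysis prior h_a
gamma_d = rfolded_t(10_000)  # draws from the design prior h_d
print(np.quantile(gamma_a, 0.95))            # approx. 0.4, as calibrated above
print(np.quantile(gamma_d, [0.025, 0.975]))  # approx. [0.15, 0.25]
```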
### The Bayes factor and its predictive distribution

We compare models \(\mathcal{M}_{0}\) and \(\mathcal{M}_{1}\) through the Bayes factor \(BF_{01}\), given by the ratio between the marginal data distributions under \(\mathcal{M}_{0}\) and \(\mathcal{M}_{1}\). The measure \(BF_{01}\) quantifies the support for the null over the alternative model; for instance, the value \(BF_{01}=3\) states that \(\mathcal{M}_{0}\) is three times more likely than \(\mathcal{M}_{1}\). Jeffreys (1961) proposed a heuristic classification scheme to interpret the evidence provided by the BF, grouping values into a few categories; for subsequent elaborations see Kass and Raftery (1995) and Schonbrodt and Wagenmakers (2018).

Based on (3) and (2) one obtains the marginal data distributions under model \(\mathcal{M}_{0}\), respectively \(\mathcal{M}_{1}\),

\[m_{0}(\mathbf{t_{r}})=\int\limits_{-\infty}^{+\infty}f(\mathbf{t_{r}}|\mu,\mathcal{M}_{0} )\pi(\mu)d\mu=\left(\frac{1}{m}\right)^{1/2}\cdot\left(\frac{2\pi\sigma^{2}}{n} \right)^{(1-m)/2}\cdot\exp\left\{-\frac{Q}{2}\right\}, \tag{4}\]

\[\begin{split}m_{1}(\mathbf{t_{r}})&=\int\limits_{0}^{+\infty}\int\limits_{-\infty}^{+\infty}f(\mathbf{t _{r}}|\mu,\gamma,\mathcal{M}_{1})h_{a}(\gamma)\pi(\mu)d\mu d\gamma\\ &=\left(\frac{1}{m}\right)^{1/2}\cdot\left(2\pi\sigma^{2}\right)^{ (1-m)/2}\int\limits_{0}^{+\infty}\left(\frac{1}{n}+\gamma^{2}\right)^{(1-m)/2 }\exp\left\{-\frac{Q}{2}\cdot\left(1+n\gamma^{2}\right)^{-1}\right\}h_{a}( \gamma)d\gamma,\end{split} \tag{5}\]

where \(\mathbf{t_{r}}=(t_{1},\ldots,t_{m})\), \(\bar{t}\) is the sample mean of the \(t_{j}\)'s and \(Q=n\cdot\sum_{j=1}^{m}\left(t_{j}-\bar{t}\right)^{2}/\sigma^{2}\) is a quantity whose distribution does not depend on \(\sigma^{2}\). It appears that \(m_{0}(\mathbf{t_{r}})\) is available in closed form. On the other hand, \(m_{1}(\mathbf{t_{r}})\) is not, and can be approximated using Monte Carlo simulation. The resulting BF is

\[BF_{01}(\mathbf{t_{r}})=\frac{m_{0}(\mathbf{t_{r}})}{m_{1}(\mathbf{t_{r}})}= \frac{n^{(m-1)/2}\cdot\exp\left\{-\frac{Q}{2}\right\}}{\int\limits_{0}^{+ \infty}\left(\frac{1}{n}+\gamma^{2}\right)^{(1-m)/2}\cdot\exp\left\{-\frac{Q} {2}\cdot\left(1+n\gamma^{2}\right)^{-1}\right\}h_{a}(\gamma)d\gamma}, \tag{6}\]

which depends on the data only through \(Q\).

We emphasise that, at the design stage, the observations \(\mathbf{t}_{r}\) are not yet available. As a consequence, planning for unambiguous results in terms of the BF requires the _prior predictive_ distribution of \(BF_{01}(\mathbf{t_{r}})\), which in turn depends on the (prior) predictive distribution of \(Q\). Two well-known facts are: i) under \(\mathcal{M}_{0}\), \(Q\sim\chi_{m-1}^{2}\), where \(\chi_{p}^{2}\) denotes a chi-squared distribution with \(p\) degrees of freedom; ii) under \(\mathcal{M}_{1}\), and conditionally on \(\gamma\), \((1+n\gamma^{2})^{-1}\cdot Q\sim\chi_{m-1}^{2}\) (Hedges and Pigott, 2001). To obtain the unconditional distribution of \(Q\) under \(\mathcal{M}_{1}\), a further mixing with respect to \(\gamma\sim h_{d}(\gamma)\) is required. Finally, to obtain a realization of \(BF_{01}(\mathbf{t_{r}})\) from its prior predictive distribution, items i) and ii) above must be coupled with the evaluation of the integral appearing in the denominator of (6). The above computational program will be carried out using a Monte Carlo approximation. Algorithm 1 describes the procedure when the assumed model is \(\mathcal{M}_{0}\), while Algorithm 2 refers to \(\mathcal{M}_{1}\).

```
Input:  number of sites m, number of subjects per site n, random sample of size S
        generated from the analysis prior (gamma_a,1, ..., gamma_a,S), number of iterations T
Output: vector BF_01 of size T from the predictive distribution of BF_01 under M_0
for t = 1, ..., T do
    Generate q ~ chi^2_(m-1)
    g_0 <- n^((m-1)/2) * exp{-q/2}
    g_1 <- (1/S) * sum_{j=1..S} (1/n + gamma_a,j^2)^((1-m)/2) * exp{-(q/2) * (1 + n*gamma_a,j^2)^(-1)}
    BF_(01,t) <- g_0 / g_1
end for
```
**Algorithm 1** Simulating from the prior predictive distribution of \(BF_{01}\) under \(\mathcal{M}_{0}\)
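A compact Python sketch of both algorithms is given below. It evaluates \(BF_{01}\) through Eq. (6), replacing the denominator integral by a Monte Carlo average over analysis-prior draws, and generates \(Q\) under \(\mathcal{M}_{1}\) by mixing over design-prior draws, as per fact ii). It is ours, not the authors' R-code, and all names are illustrative; the prior draws are generated as in the previous sketch.

```python
import numpy as np

rng = np.random.default_rng(2)
gamma_a = np.abs(rng.standard_t(4, 10_000) / 7)         # analysis-prior draws (as above)
gamma_d = np.abs(0.2 + rng.standard_t(4, 10_000) / 55)  # design-prior draws (as above)

def bf01_from_q(q, n, m, gamma_a):
    """Eq. (6): BF_01 as a function of Q, the denominator integral being
    approximated by a Monte Carlo average over the analysis prior."""
    num = n ** ((m - 1) / 2) * np.exp(-q / 2)
    den = np.mean((1 / n + gamma_a**2) ** ((1 - m) / 2)
                  * np.exp(-q / 2 / (1 + n * gamma_a**2)))
    return num / den

def simulate_bf(n, m, gamma_a, T, gamma_d=None):
    """Algorithm 1 (gamma_d=None): Q ~ chi^2_{m-1} under M0.
    Algorithm 2: Q = (1 + n*gamma^2) * chi^2_{m-1}, gamma from the design prior."""
    chi2 = rng.chisquare(m - 1, T)
    if gamma_d is None:
        q = chi2
    else:
        q = (1 + n * rng.choice(gamma_d, T) ** 2) * chi2
    return np.array([bf01_from_q(qt, n, m, gamma_a) for qt in q])

bf_m0 = simulate_bf(80, 8, gamma_a, T=5_000)                   # Algorithm 1
bf_m1 = simulate_bf(80, 8, gamma_a, T=5_000, gamma_d=gamma_d)  # Algorithm 2
```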
Figure 3 reports the prior predictive distribution of the BF (in logarithmic scale) under model \(\mathcal{M}_{0}\) (top panels) and \(\mathcal{M}_{1}\) (bottom panels) for a few pairs \((n,m)\) whose product \(n\cdot m\) is kept constant. Specifically, in the two left panels we set \(n=80\), with \(m\in\{4,\ 8,\ 12\}\), while in the right panels we fixed \(m=8\) with \(n\in\{40,\ 80,\ 120\}\). Reading row-wise (i.e. for a fixed true model), each distribution on the left has a corresponding distribution on the right with the same \(n\cdot m\). In this way one can better appreciate the effect of reallocating a given total number of subjects between the number of sites \(m\) and the number of subjects per site \(n\). For visualization purposes, the distributions for the pair \((n,m)=(80,8)\) are highlighted (in blue under \(\mathcal{M}_{0}\) and in red under \(\mathcal{M}_{1}\)), as they are common to the two settings.

Figure 3 reveals the strong imbalance in the learning rate of the BF distribution under each of the two models. It is apparent that under \(\mathcal{M}_{0}\) the maximal \(\log(BF_{01})\), representing evidence in _favour_ of the true model, never attains the value 10, while under \(\mathcal{M}_{1}\) the corresponding evidence can be as large as \(10^{6}\) or beyond (recall that \(BF_{10}=1/BF_{01}\), where \(BF_{10}\) measures evidence in favour of \(\mathcal{M}_{1}\)). This phenomenon has been investigated from a theoretical perspective in Dawid (2011), where it is essentially shown that, for nested models as in our setup, the BF grows with the square root of the sample size under the null model, while its growth is exponential under the larger encompassing model; see also Johnson and Rossell (2010). This result, however, remains relatively neglected in the literature, a few notable exceptions being Schonbrodt and Wagenmakers (2018) and Ly and Wagenmakers (2022).

Figure 3: Prior predictive distribution of BF under \(\mathcal{M}_{0}\) (top row) and \(\mathcal{M}_{1}\) (bottom row) for selected pairs of \((n,m)\) with \(n\in\{40,\ 80,\ 120\}\) and \(m\in\{4,\ 8,\ 12\}\).

## 3 Sample size determination

In this section we consider probabilities of correct, misleading and undetermined evidence when the true model is either \(\mathcal{M}_{0}\) or \(\mathcal{M}_{1}\). Based on these probabilities, we provide a design framework to determine configurations of the pair \((n,m)\) capable of delivering compelling evidence when testing heterogeneity.
### Bayes factor thresholds and classification of model evidence

For given positive thresholds \(k_{0}\) and \(k_{1}\), if \(BF_{01}\) is greater than \(k_{0}\) then the data suggest evidence in favor of \(\mathcal{M}_{0}\) (at level \(k_{0}\)), while if it is less than \(1/k_{1}\), then the data suggest evidence in favor of \(\mathcal{M}_{1}\) (at level \(k_{1}\)). Finally, if the BF lies in the interval \((1/k_{1},k_{0})\), evidence is undetermined. If \(k_{0}\) is set at a high value such as 10 or higher, then \(BF_{01}>k_{0}\) can be regarded as _strong_ evidence in favor of \(\mathcal{M}_{0}\) (Kass and Raftery, 1995; Schonbrodt and Wagenmakers, 2018) or, in the words of De Santis (2004), _decisive_ evidence for \(\mathcal{M}_{0}\). Similar considerations apply to \(k_{1}\) as far as evidence for \(\mathcal{M}_{1}\) is concerned. Smaller values of \(k_{0}\) and \(k_{1}\), such as 3 or 5, which are sometimes used (Weiss, 1997), would instead only suggest _moderate_ evidence in either direction. Notice that distinct thresholds \(k_{0}\) and \(k_{1}\) are allowed, usually with \(k_{0}<k_{1}\), because learning the true model is slower under \(\mathcal{M}_{0}\) than under \(\mathcal{M}_{1}\); see Figures 3 and 4, and our comments at the end of Subsection 2.3.

Rather than fixing \(k_{0}\) and \(k_{1}\) upfront as evidence thresholds, one can specify them _indirectly_ through design-based considerations such as the probability of Type I error \(\alpha\) and power \((1-\beta)\), both interpreted from the Bayesian perspective. In this way the BF acts as a mere test statistic, so that its intrinsic meaning, along with the substantive interpretation of the evidence cut-offs \(k_{0}\) and \(k_{1}\), is forfeited. On the other hand, the resulting cut-off values satisfy more conventional design goals and thus might be more broadly acceptable to practitioners.

Given thresholds \(k_{0}\) and \(k_{1}\), and assuming that either \(\mathcal{M}_{0}\) or \(\mathcal{M}_{1}\) is in turn the true data-generating model, we define the following events (omitting for brevity the dependence on the chosen thresholds):

* _Correct_ evidence (\(C\)): evidence is in favour of the correct model
* _Misleading_ evidence (\(M\)): evidence is in favour of the incorrect model
* _Undetermined_ (\(U\)) evidence: neither \(C\) nor \(M\) hold.

The _conditional_ (prior predictive) probability of each of the above events is naturally evaluated under the assumption that either model in turn holds true. This can be approximated, separately under \(\mathcal{M}_{0}\) and \(\mathcal{M}_{1}\), applying Algorithms 1 and 2, respectively. One can also evaluate the _unconditional_, or _overall_, probability of the events \(\{C,M,U\}\) by averaging the corresponding conditional probabilities across the two models, using prior model probabilities \(\pi_{0}=p(\mathcal{M}_{0})\) and \(\pi_{1}=p(\mathcal{M}_{1})=1-\pi_{0}\); see Table 1 for a summary.

Figure 4 reports the distribution of \(BF_{01}\) under \(\mathcal{M}_{0}\) (left panel) and \(\mathcal{M}_{1}\) (right panel), highlighting the corresponding probabilities of Correct, Misleading and Undetermined evidence for \((n,m)=(80,8)\) and \(k_{0}=k_{1}=3\). It is apparent that the conditional probability of Correct evidence is appreciably higher under \(\mathcal{M}_{1}\) (78%) than under \(\mathcal{M}_{0}\) (4%); correspondingly, that of Undetermined evidence is higher under \(\mathcal{M}_{0}\) (94%) than under \(\mathcal{M}_{1}\) (21%).
This reinforces the fact that learning under \(\mathcal{M}_{0}\) proves to be harder.

\begin{table}
\begin{tabular}{l l l l} & _Correct evidence_ & _Misleading evidence_ & _Undetermined evidence_ \\ \hline \hline \(\mathcal{M}_{0}\) & \(p_{0}^{C}=p(BF_{01}>k_{0}|\mathcal{M}_{0})\) & \(p_{0}^{M}=p(BF_{01}<1/k_{1}|\mathcal{M}_{0})\) & \(p_{0}^{U}=p(1/k_{1}<BF_{01}<k_{0}|\mathcal{M}_{0})\) \\ \(\mathcal{M}_{1}\) & \(p_{1}^{C}=p(BF_{01}<1/k_{1}|\mathcal{M}_{1})\) & \(p_{1}^{M}=p(BF_{01}>k_{0}|\mathcal{M}_{1})\) & \(p_{1}^{U}=p(1/k_{1}<BF_{01}<k_{0}|\mathcal{M}_{1})\) \\ _overall_ & \(p^{C}=\pi_{0}\cdot p_{0}^{C}+\pi_{1}\cdot p_{1}^{C}\) & \(p^{M}=\pi_{0}\cdot p_{0}^{M}+\pi_{1}\cdot p_{1}^{M}\) & \(p^{U}=\pi_{0}\cdot p_{0}^{U}+\pi_{1}\cdot p_{1}^{U}\) \\ \end{tabular}
\end{table}
Table 1: The probabilities of Correct, Misleading and Undetermined evidence.

Figure 4: The probabilities of Correct, Misleading and Undetermined evidence under \(\mathcal{M}_{0}\) and \(\mathcal{M}_{1}\) for \((n,m)=(80,8)\).

### Conditional approach to sample size determination

#### 3.2.1 The conditional criterion for sample size determination

In the _conditional_ approach to SSD the usual requirement is to guarantee a desired level of power (typically 0.8 or higher) at a given value of the parameter of interest. In the Bayesian framework, the fixed value is replaced by an entire distribution, namely the design prior, leading to an _unconditional_ power (yet depending on the prior itself). Using the BF as a measure of evidence, Weiss (1997) considered SSD for hypothesis testing based on the Type I error rate as well as conditional and unconditional power (without distinguishing between analysis and design priors, a distinction which had not yet been developed at the time). Alternative criteria to select a sample size in order to achieve separation of two models with a reasonable _a priori_ guarantee were presented in Wang and Gelfand (2002).

In the setup of multi-site replications, which is the focus of this work, detection of the presence of heterogeneity (model \(\mathcal{M}_{1}\)) is of primary importance. Accordingly, we single out \(p_{1}^{C}\), that is, the probability of correctly identifying \(\mathcal{M}_{1}\), as most relevant. This leads to the following design criterion for the optimal selection of the number of subjects \(n^{*}\) for a given number of sites \(m\):

\[n^{*}=\min\{n\in\mathbb{N}:p_{1}^{C}\geq(1-\beta)\;\;and\;\;p_{0}^{M}=\alpha\}, \tag{7}\]

where \(\mathbb{N}\) is the set of natural numbers, \((1-\beta)\) is the unconditional power of successfully detecting heterogeneity, while \(\alpha\) is the Type I error rate.

In practice, to obtain \(n^{*}\) for a fixed value of \(m\) one can proceed as follows. Fix \(m\), which is omitted for simplicity from our notation, set \(n=n_{0}\) and let \(1/k_{1}^{\alpha}(n_{0})\) be equal to the \(\alpha\)-quantile of the distribution of \(BF_{01}(n_{0})\) under \(\mathcal{M}_{0}\), so that \(p_{0}^{M}(n_{0})=p(BF_{01}(n_{0})<1/k_{1}^{\alpha}(n_{0})|\mathcal{M}_{0})=\alpha\). Next evaluate \(p(BF_{01}(n_{0})<1/k_{1}^{\alpha}(n_{0})|\mathcal{M}_{1})\); if this value is less than \((1-\beta)\), increase \(n\) to \(n_{1}>n_{0}\). It appears from Figure 3 that the distribution of \(BF_{01}(n)\) under \(\mathcal{M}_{0}\) shifts to the right as \(n\) increases (for fixed \(m\)). Informally, we can then conclude that \(k_{1}^{\alpha}(n_{1})<k_{1}^{\alpha}(n_{0})\) in order to satisfy the constraint \(p_{0}^{M}(n_{1})=p(BF_{01}(n_{1})<1/k_{1}^{\alpha}(n_{1})|\mathcal{M}_{0})=\alpha\).
As a consequence, \(p(BF_{01}(n_{1})<1/k_{1}^{\alpha}(n_{1})|\mathcal{M}_{1})>p(BF_{01}(n_{0})<1/k _{1}^{\alpha}(n_{0})|\mathcal{M}_{1})\), because the distribution of \(BF_{01}(n)\) under \(\mathcal{M}_{1}\) shifts to the left as \(n\) increases. If \(p(BF_{01}(n_{0})<1/k_{1}^{\alpha}(n_{0})|\mathcal{M}_{1})>(1-\beta)\), simply choose \(n_{1}<n_{0}\). The optimal solution \(n^{*}\) can then be obtained _via_ the _Regula Falsi_ (False Position) method (Burden et al., 2015), suitably modified to account for the discreteness of \(n\). R-code to evaluate \(n^{*}\) in (7) is available in the Supplementary material.

#### 3.2.2 Results for the conditional approach

In this subsection we implement the _conditional_ approach under four scenarios, corresponding to the combinations of \(p_{1}^{C}\in\{0.80,0.90\}\) and \(p_{0}^{M}\in\{0.01,0.05\}\), each analysed for fifteen possible values of the number of sites \(m\in\{3,4,...,17\}\). Since interest centres on variability across sites, we excluded the value \(m=2\), which would also lead to much higher values of \(n\); see also Figure 5. Priors for \(\mu\) and \(\gamma\) were chosen as described in Section 2.2; additionally, we set \(S=10,000\) and \(T=50,000\) in Algorithms 1 and 2. Results are presented in Table 2, where the thresholds \(1/k_{1}\) are reported alongside the pairs \((n^{*},m)\). The same results can be visually inspected in Figure 5. Clearly, for each \(m\), a stronger requirement on the probability of correct identification of heterogeneity \(p_{1}^{C}\) produces a higher value \(n^{*}\), and the same happens when \(p_{0}^{M}\) is lowered. For _given_ \(p_{0}^{M}\), the thresholds \(1/k_{1}\) appear robust to the choice of \(p_{1}^{C}\) and also across configurations \((n^{*},m)\), reflecting low sensitivity in the lower tail of the distribution of \(BF_{01}\) in the range of values under investigation. Notice that a five-fold reduction of \(p_{0}^{M}\) from \(0.05\) to \(0.01\) leads to an approximate three-fold reduction in \(1/k_{1}\) (roughly from \(0.6\) to \(0.2\)).
\begin{table}
\begin{tabular}{c c|c c|c c|c c} \hline \multicolumn{4}{c|}{\(p_{1}^{C}=0.8\)} & \multicolumn{4}{c}{\(p_{1}^{C}=0.9\)} \\ \hline \multicolumn{2}{c|}{\(p_{0}^{M}=0.01\)} & \multicolumn{2}{c|}{\(p_{0}^{M}=0.05\)} & \multicolumn{2}{c|}{\(p_{0}^{M}=0.01\)} & \multicolumn{2}{c}{\(p_{0}^{M}=0.05\)} \\ \hline \((n^{*},m)\) & \(1/k_{1}\) & \((n^{*},m)\) & \(1/k_{1}\) & \((n^{*},m)\) & \(1/k_{1}\) & \((n^{*},m)\) & \(1/k_{1}\) \\ \hline \((518,3)\) & \(0.220\) & \((328,3)\) & \(0.639\) & \((1138,3)\) & \(0.253\) & \((730,3)\) & \(0.756\) \\ \((268,4)\) & \(0.214\) & \((178,4)\) & \(0.603\) & \((478,4)\) & \(0.234\) & \((323,4)\) & \(0.675\) \\ \((184,5)\) & \(0.215\) & \((126,5)\) & \(0.591\) & \((302,5)\) & \(0.230\) & \((211,5)\) & \(0.649\) \\ \((143,6)\) & \(0.215\) & \((99,6)\) & \(0.584\) & \((221,6)\) & \(0.227\) & \((157,6)\) & \(0.634\) \\ \((117,7)\) & \(0.213\) & \((82,7)\) & \(0.577\) & \((176,7)\) & \(0.223\) & \((127,7)\) & \(0.623\) \\ \((100,8)\) & \(0.215\) & \((71,8)\) & \(0.575\) & \((147,8)\) & \(0.224\) & \((107,8)\) & \(0.616\) \\ \((88,9)\) & \(0.212\) & \((62,9)\) & \(0.582\) & \((127,9)\) & \(0.220\) & \((92,9)\) & \(0.622\) \\ \((79,10)\) & \(0.214\) & \((56,10)\) & \(0.582\) & \((113,10)\) & \(0.222\) & \((82,10)\) & \(0.619\) \\ \((72,11)\) & \(0.213\) & \((51,11)\) & \(0.580\) & \((101,11)\) & \(0.219\) & \((74,11)\) & \(0.617\) \\ \((67,12)\) & \(0.210\) & \((48,12)\) & \(0.582\) & \((93,12)\) & \(0.216\) & \((68,12)\) & \(0.613\) \\ \((62,13)\) & \(0.213\) & \((45,13)\) & \(0.581\) & \((85,13)\) & \(0.219\) & \((63,13)\) & \(0.610\) \\ \((58,14)\) & \(0.215\) & \((42,14)\) & \(0.579\) & \((79,14)\) & \(0.220\) & \((59,14)\) & \(0.613\) \\ \((55,15)\) & \(0.215\) & \((40,15)\) & \(0.581\) & \((74,15)\) & \(0.220\) & \((55,15)\) & \(0.609\) \\ \((52,16)\) & \(0.217\) & \((38,16)\) & \(0.580\) & \((69,16)\) & \(0.221\) & \((52,16)\) & \(0.609\) \\ \((49,17)\) & \(0.217\) & \((36,17)\) & \(0.581\) & \((66,17)\) & \(0.222\) & \((49,17)\) & \(0.607\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: The collection of pairs \((n^{*},m)\) for \(p_{1}^{C}\in\{0.8,0.9\}\) and \(p_{0}^{M}\in\{0.01,0.05\}\), along with the corresponding thresholds \(1/k_{1}\) in the conditional approach.

On the scale of evidence in favour of \(\mathcal{M}_{1}\), i.e. \(BF_{01}<1/k_{1}\), based on Schonbrodt and Wagenmakers (2018), this translates to an upgrade from "anecdotal" (\(1/3<BF_{01}<1\)) to "moderate" (\(1/10<BF_{01}<1/3\)) evidence. This implies that the higher price inherent in a higher \(n^{*}\) might be worth paying, not only to achieve a smaller Type I error rate but also to achieve more convincing evidence to detect heterogeneity when it is actually present.

Another feature worth mentioning is the interplay between \(m\) and \(n\). Granted that they are inversely related, it emerges that the decrease in \(n\) is particularly steep only for small values of \(m\), and becomes relatively modest thereafter; see Figure 5.

The optimal choice \(n^{*}\) is also sensitive to the _design_ prior. If the latter moves toward zero (i.e. closer to the null model), higher values of \(n^{*}\) are required. Specifically, keeping \(\nu_{\gamma}\) and \(\sigma_{\gamma}\) constant, and recalling that \(\mu_{\gamma}=0.2\), we need on average a sample size four times larger if we move to \(\mu_{\gamma}=0.1\), whereas the sample size can be halved if we set \(\mu_{\gamma}=0.3\). The simulated results are reported in detail in Supplementary material A.
It is instructive to compare our results with those presented in Hedges and Schauer (2021, Table 2). Notice that in their paper a cost function is introduced so that their results depend on \(c_{2}/c_{1}\), the ratio of per-laboratory cost to per-subject cost. They consider a grid of five values for \(c_{2}/c_{1}\) and, for each of them, determine the optimal sample size \((n_{O},m_{O})\) for selected values of the relative variance heterogeneity (\(\tau^{2}/\omega\) in their notation) which corresponds to our \(\gamma^{2}\). The above is replicated at two levels of (conditional) power, namely \(0.8\) and \(0.9\). To compare their results with ours, we first identified a value \(\gamma_{0}^{2}=0.04\) of the relative variance which approximately "matches" our prior expectation \(E(\gamma)=0.2\) in the design prior. Next we computed the collection of optimal sample sizes \((n_{Ok},m_{Ok})\) for each \(c_{2}/c_{1}=r_{k}\) with \(k=1,\ldots,5\). For given \(m_{Ok}\), a comparison of our optimal \(n_{k}^{*}\) with \(n_{Ok}\) shows that the former is at most only \(10\%\) higher than the latter; remarkably, this represents a small increase in view of the fact that our analysis fully incorporates uncertainty on the parameter \(\gamma\) through an entire distribution on \(\gamma\). By letting the variance of the design prior decrease to zero, we recover sample size results similar to those which hold under the conditional power approach.

Figure 5: Pairs \((m,n^{*})\) (solid circles) for \(p^{C}_{1}\in\{0.8,0.9\}\) and \(p^{M}_{0}\in\{0.01,0.05\}\), along with corresponding thresholds \(1/k_{1}\) (empty diamonds) in the conditional approach.

### Unconditional approach to sample size determination

#### 3.3.1 The unconditional criterion for sample size determination

In the _unconditional_ approach to SSD, we consider the overall probabilities \(p^{C}\) and \(p^{M}\) defined in Table 1. In this way we modify the design criterion (7) leading to \(n^{*}\) by replacing the conditional probabilities \(p^{C}_{1}\) and \(p^{M}_{0}\) with their respective overall probabilities \(p^{C}\) and \(p^{M}\). Using the notation of Subsection 3.2.1, given \((1-\beta)\), \(\alpha\) and the number of sites \(m\), the optimal sample size \(n^{*}\) is defined following (7) as \[n^{*}=\min\{n\in\mathbb{N}:p^{C}\geq(1-\beta)\;\;and\;\;p^{M}=\alpha\}, \tag{8}\] where \(\mathbb{N}\) is the set of natural numbers, and for simplicity we set \(p_{0}^{M}=p_{1}^{M}=\alpha\). Note that now, differently from the conditional approach, prior model probabilities are required. For the results in Table 3 we additionally fixed \(\pi_{0}=\pi_{1}=0.5\). Applying calculations similar to those used for the conditional approach, we obtain \(1/k_{1}\), and moreover we derive \(k_{0}\) as the \((1-\alpha)\)-quantile of the distribution of \(BF_{01}\) conditionally on \(\mathcal{M}_{1}\), under the constraint \(p_{1}^{M}=\alpha\). Once again the effective calculation of \(n^{*}\) is carried out _via_ the _Regula Falsi_ method, as in the conditional approach.
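To make the ingredients of criterion (8) concrete, the following fragment (again an illustrative Python sketch with hypothetical names, not the authors' R implementation) calibrates the thresholds so that \(p_{0}^{M}=p_{1}^{M}=\alpha\) and then evaluates \(p^{C}\) and \(p^{M}\) from prior predictive draws of \(BF_{01}\) simulated under the two models; an outer search over \(n\), as in the conditional case, would then return \(n^{*}\).

```python
import numpy as np

def evidence_probs(bf01_m0, bf01_m1, alpha=0.05, pi0=0.5, pi1=0.5):
    """Overall probabilities of correct and misleading evidence for a
    given n, from Monte Carlo draws of BF01 under M0 and under M1."""
    bf01_m0, bf01_m1 = np.asarray(bf01_m0), np.asarray(bf01_m1)
    inv_k1 = np.quantile(bf01_m0, alpha)    # so that p(BF01 < 1/k1 | M0) = alpha
    k0 = np.quantile(bf01_m1, 1.0 - alpha)  # so that p(BF01 > k0  | M1) = alpha
    p_correct = pi0 * np.mean(bf01_m0 > k0) + pi1 * np.mean(bf01_m1 < inv_k1)
    p_misleading = pi0 * np.mean(bf01_m0 < inv_k1) + pi1 * np.mean(bf01_m1 > k0)
    return p_correct, p_misleading, k0, inv_k1
```

Values of \(BF_{01}\) falling between \(1/k_{1}\) and \(k_{0}\) count as undetermined evidence, which is why \(p^{C}\) can lie well below \(1-p^{M}\) for small \(n\).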
#### 3.3.2 Results for the unconditional approach

We implement the _unconditional_ approach under the combinations of \(p^{C}\in\{0.80,\,0.90\}\) and \(p^{M}\in\{0.01,0.05\}\), while keeping unchanged the other settings reported in Subsection 3.2.2. The results are tabulated in Table 3 and graphically represented in Figure 6. Similarly to the conditional approach, the required sample size depends on the predetermined probabilities \(p^{C}\) and \(p^{M}\), while \(m\) and \(n\) are again inversely related. Regarding the scale of evidence implied by the thresholds \(k_{0}\) and \(1/k_{1}\), it lies in the range "anecdotal" to "moderate" for both models. Due to its stricter nature, the unconditional approach requires larger sample sizes to reach the desired probability of correct evidence compared to the conditional approach for each \(m\), although the differences become smaller as \(m\) increases. It seems that the higher sample size required for the unconditional approach is worth paying, because we not only bound the probability of misleading evidence under each of the two models, but we also have a lower bound on the probability of overall correct evidence. The latter however indirectly provides bounds also on the probability of correct evidence under each of the two models, especially when \((1-\beta)\) is high and the two model probabilities are not at the extreme of the range \((0,1)\).

\begin{table} \begin{tabular}{c c c|c c c|c c c|c c c} \hline \multicolumn{6}{c|}{\(p^{C}=0.8\)} & \multicolumn{6}{c}{\(p^{C}=0.9\)} \\ \hline \multicolumn{3}{c|}{\(p^{M}=0.01\)} & \multicolumn{3}{c|}{\(p^{M}=0.05\)} & \multicolumn{3}{c|}{\(p^{M}=0.01\)} & \multicolumn{3}{c}{\(p^{M}=0.05\)} \\ \hline \((n^{*},m)\) & \(1/k_{1}\) & \(k_{0}\) & \((n^{*},m)\) & \(1/k_{1}\) & \(k_{0}\) & \((n^{*},m)\) & \(1/k_{1}\) & \(k_{0}\) & \((n^{*},m)\) & \(1/k_{1}\) & \(k_{0}\) \\ \hline \((2689,3)\) & \(0.321\) & \(4.090\) & \((611,3)\) & \(0.724\) & \(2.049\) & \((4596,3)\) & \(0.386\) & \(3.301\) & \((1018,3)\) & \(0.827\) & \(1.518\) \\ \((735,4)\) & \(0.258\) & \(2.891\) & \((266,4)\) & \(0.647\) & \(1.761\) & \((1130,4)\) & \(0.292\) & \(2.257\) & \((406,4)\) & \(0.714\) & \(1.289\) \\ \((404,5)\) & \(0.244\) & \(2.525\) & \((171,5)\) & \(0.622\) & \(1.632\) & \((590,5)\) & \(0.269\) & \(1.930\) & \((250,5)\) & \(0.675\) & \(1.190\) \\ \((265,6)\) & \(0.234\) & \(2.273\) & \((127,6)\) & \(0.609\) & \(1.542\) & \((374,6)\) & \(0.254\) & \(1.726\) & \((180,6)\) & \(0.654\) & \(1.121\) \\ \((197,7)\) & \(0.227\) & \(2.128\) & \((102,7)\) & \(0.597\) & \(1.506\) & \((274,7)\) & \(0.244\) & \(1.603\) & \((142,7)\) & \(0.637\) & \(1.103\) \\ \((158,8)\) & \(0.227\) & \(2.015\) & \((85,8)\) & \(0.590\) & \(1.463\) & \((214,8)\) & \(0.241\) & \(1.537\) & \((117,8)\) & \(0.627\) & \(1.066\) \\ \((133,9)\) & \(0.221\) & \(1.972\) & \((74,9)\) & \(0.598\) & \(1.419\) & \((179,9)\) & \(0.234\) & \(1.504\) & \((100,9)\) & \(0.632\) & \(1.055\) \\ \((117,10)\) & \(0.223\) & \(1.918\) & \((66,10)\) & \(0.597\) & \(1.411\) & \((155,10)\) & \(0.234\) & \(1.445\) & \((89,10)\) & \(0.629\) & \(1.048\) \\ \((103,11)\) & \(0.220\) & \(1.886\) & \((59,11)\) & \(0.592\) & \(1.413\) & \((136,11)\) & \(0.231\) & \(1.421\) & \((80,11)\) & \(0.626\) & \(1.005\) \\ \((91,12)\) & \(0.216\) & \(1.819\) & \((55,12)\) & \(0.593\) & \(1.364\) & \((119,12)\) & \(0.225\) & \(1.370\) & \((73,12)\) & \(0.622\) & \(1.019\) \\ \((83,13)\) & \(0.219\) & \(1.762\) & \((51,13)\) & \(0.591\) & \(1.361\) & \((108,13)\) & \(0.228\) & \(1.351\) & \((67,13)\) & \(0.618\) & \(1.023\) \\ \((78,14)\) & \(0.220\) & \(1.764\) & \((47,14)\) & \(0.588\) & \(1.381\) & \((101,14)\) & \(0.229\) & \(1.337\) & \((63,14)\) & \(0.621\) & \(0.972\) \\ \((72,15)\) & \(0.219\) & \(1.731\) & \((44,15)\) & \(0.587\) & \(1.364\) & \((92,15)\) & \(0.227\) & \(1.338\) & \((58,15)\) & \(0.615\) & \(1.000\) \\ \((68,16)\) & \(0.221\) & \(1.718\) & \((42,16)\) & \(0.586\) & \(1.360\) & \((87,16)\) & \(0.229\) & \(1.293\) & \((55,16)\) & \(0.613\) & \(1.007\) \\ \((64,17)\) & \(0.222\) & \(1.691\) & \((40,17)\) & \(0.587\) & \(1.358\) & \((82,17)\) & \(0.230\) & \(1.275\) & \((53,17)\) & \(0.618\) & \(0.960\) \\ \hline \hline \end{tabular} \end{table} Table 3: The collection of pairs \((n^{*},m)\) for \(p^{C}\in\{0.80,\,0.90\}\) and \(p^{M}\in\{0.01,0.05\}\), along with the decision thresholds \(1/k_{1}\) and \(k_{0}\) in the unconditional approach.

Sensitivity of SSD to changes in the _design_ prior appears more pronounced than in the _conditional_ approach. Specifically, relative to the benchmark \(\mu_{\gamma}=0.2\), setting \(\mu_{\gamma}=0.1\) we need on average a sample size six times larger, whereas setting \(\mu_{\gamma}=0.3\) the sample size is more than halved (to approximately 40%). Detailed results are available in Supplementary material B.

Figure 6: Pairs \((m,n^{*})\) (solid circles) for \(p^{C}\in\{0.8,0.9\}\) and \(p^{M}\in\{0.01,0.05\}\), along with corresponding thresholds \(1/k_{1}\) and \(k_{0}\) (empty diamonds and empty squares respectively) in the unconditional approach.

## Discussion

In this work we have dealt with multiple replication studies, focusing on the _variation_ (heterogeneity) of the effect sizes across sites. Specifically, we considered the comparison of two models: one without heterogeneity (\(\mathcal{M}_{0}\)) and another one incorporating heterogeneity (\(\mathcal{M}_{1}\)). Within this setting, we developed a Bayesian procedure for sample size determination (SSD) capable of delivering compelling evidence. For the two models under consideration, evidence was defined in terms of the Bayes Factor (BF), which was derived using a suitably defined _analysis_ prior. Our design criterion was specified through a conditional, as well as an unconditional, approach. In the former the goal is to correctly obtain evidence for \(\mathcal{M}_{1}\) (presence of heterogeneity) with high probability, while assuring that the Type I error rate based on the BF is kept low. In the unconditional approach instead, the aim is to achieve high probability of correct evidence _overall_ (i.e. averaged across the two models), while keeping the probability of misleading evidence low, again overall. The evaluation of our criterion relies on the prior predictive distribution of the BF which was derived based on the elicitation of a _design_ prior (separate from the analysis prior). The Bayesian methodology presented in this paper represents a flexible alternative to frequentist based designs for SSD in replication studies. A major feature of our approach is the incorporation of uncertainty through prior distributions, both at the analysis and the design stage, which significantly extends the standard practice of conditioning on a fixed value of the parameter as in conventional power analysis. More generally, we can reap the advantages inherent in the use of the BF for evidence assessment as described in Wagenmakers et al. (2016), and in particular the possibility of evaluating the posterior probability both for the presence and the absence of heterogeneity. We did not include cost considerations in our design in order to simplify the exposition and focus on the most relevant aspects of our methodology. They could, however, be incorporated in our framework in a rather straightforward way.
Expressing the total cost \(C\) in terms of the per-subject cost \(c_{1}\) and the per-laboratory cost \(c_{2}\), one can simply select, among all pairs \((n^{*},m)\) satisfying the design criterion (7) or (8) (see Table 2 or Table 3), that specific pair \((n^{\prime},m^{\prime})\) which minimises the total cost \(C\). An important issue in Bayesian inference, and hence design, is sensitivity of the results to prior specifications. We performed a sensitivity analysis with respect to the _design_ prior for relative heterogeneity. As expected, the required sample size was inversely related with the location of the relative heterogeneity parameter; details and further results are available in the Supplementary material. Despite careful planning, once the actual experiment is performed, it may happen that the outcome will be able to deliver only undetermined evidence (see Table 1), so that neither hypothesis is supported. A natural option at this stage is to plan a follow-up design until compelling evidence in either direction, that is in favour of \(\mathcal{M}_{0}\) or \(\mathcal{M}_{1}\), is reached. This is in the spirit of the Sequential Bayes Factor described in Schonbrodt et al. (2017), which can be implemented either in the open-ended, or maximal sample size, mode.

## Software

R-code to reproduce the simulated results in the paper along with R-functions to implement our approach using settings different from those employed in our work are available at [https://github.com/bourazaskonstantinos/Bayesian-SSD](https://github.com/bourazaskonstantinos/Bayesian-SSD).

## Acknowledgments

This research was partially supported by UCSC (D1 research grants).

## Disclosure Statement

The authors report there are no competing interests to declare.
2303.09816
Footprint of a topological phase transition on the density of states
For a generalized Su-Schrieffer-Heeger model the energy zero is always critical and hyperbolic in the sense that all reduced transfer matrices commute and have their spectrum off the unit circle. Disorder driven topological phase transitions in this model are characterized by a vanishing Lyapunov exponent at the critical energy. It is shown that the integrated density of states away from a transition has a pseudogap with an explicitly computable Hölder exponent, while it has a characteristic divergence (Dyson spike) at the transition points. The proof is based on renewal theory for the Prüfer phase dynamics and the optional stopping theorem for martingales of suitably constructed comparison processes.
Joris De Moor, Christian Sadel, Hermann Schulz-Baldes
2023-03-17T07:52:55Z
http://arxiv.org/abs/2303.09816v2
# Footprint of a topological phase transition on the density of states

###### Abstract

For a generalized Su-Schrieffer-Heeger model the energy zero is always critical and hyperbolic in the sense that all reduced transfer matrices commute and have their spectrum off the unit circle. Disorder driven topological phase transitions in this model are characterized by a vanishing Lyapunov exponent at the critical energy. It is shown that the integrated density of states away from a transition has a pseudogap with an explicitly computable Holder exponent, while it has a characteristic divergence (Dyson spike) at the transition points. The proof is based on renewal theory for the Prufer phase dynamics and the optional stopping theorem for martingales of suitably constructed comparison processes.

\({}^{1}\)Friedrich-Alexander-Universitat Erlangen-Nurnberg, Department Mathematik, Cauerstr. 11, D-91058 Erlangen, Germany \({}^{2}\)Pontificia Universidad Catolica de Chile, Facultad de Matematicas, Av. Vicuna Mackenna 4860, Santiago 7820436, Chile

## 1 Context and main result

The SSH model (Su-Schrieffer-Heeger [22]) is the prototype of a chiral topological insulator in dimension one. Here a slightly generalized and disordered or _dirty_ version of it will be considered. In such systems, one can associate a noncommutative winding number as a topological invariant to the Fermi projection, provided that the Fermi level lies in a spectral region of Anderson localization. If one modifies the parameters of the system (such as the strength of the disorder in the hopping and on-site masses, see below), the topological invariant may change and the transition points make up the so-called topological phase boundary. It is known (Section 6.6 in [17] and Section 5.5 in [19]) that there is no dynamical Anderson localization for models on the phase boundary. In the disordered SSH model one can determine the phase boundary as those points at which the (smallest non-negative) Lyapunov exponent at energy \(E_{c}=0\) vanishes [14]. Away from these points, one can prove Anderson localization throughout the whole spectrum [21]. The novel contribution of this work is that the integrated density of states (IDOS) has a pseudo-gap at zero energy for parameters away from the phase boundary, while the density of states (DOS) has a characteristic divergence at the phase boundary. To formulate the main result, let us write out the generalized dirty SSH Hamiltonian \(H\) to be considered here. Over each site of the lattice \(\mathbb{Z}\), the system has a quantum cavity with \(2L\) orbitals so that the total Hilbert space is \(\ell^{2}(\mathbb{Z},\mathbb{C}^{2L})\). On each site acts a chiral symmetry operator \(J=\operatorname{diag}(\mathbf{1}_{L},-\mathbf{1}_{L})\) which naturally extends to a symmetry on \(\ell^{2}(\mathbb{Z},\mathbb{C}^{2L})\). Within the cavity over site \(n\), the Hamiltonian is off-diagonal in the grading of \(J\) with entry given by a random invertible matrix \(M_{n}\). Furthermore, all sites are supposed to be connected by rank one operators \(B\) with random couplings \(t_{n}\). Hence the action of \(H\) on \(\psi=(\psi_{n})_{n\in\mathbb{Z}}\in\ell^{2}(\mathbb{Z},\mathbb{C}^{2L})\) is given by \[(H\psi)_{n}\;=\;-\,t_{n+1}\begin{pmatrix}0&B\\ 0&0\end{pmatrix}\psi_{n+1}\,+\,\begin{pmatrix}0&M_{n}\\ M_{n}^{*}&0\end{pmatrix}\psi_{n}\,-\,\overline{t_{n}}\begin{pmatrix}0&0\\ B^{*}&0\end{pmatrix}\psi_{n-1}\,.
\tag{1}\] Here, \(t_{n}\in\mathbb{C}\setminus\{0\}\) with complex conjugate \(\overline{t_{n}}\), and \(t_{n}\), \(M_{n}\) are random variables of the form \(t_{n}=e^{\imath\phi_{n}}(1+\lambda\omega_{n})\) and \(M_{n}=\frac{1}{2}(m\,\mathbf{1}_{L}+\mu\omega_{n}^{\prime})\) where \(\phi_{n}\in[0,2\pi)\), \(\omega_{n}\in[-\frac{1}{2},\frac{1}{2}]\), \(\omega_{n}^{\prime}\in\mathbb{C}^{L\times L}\). Then \(\sigma_{n}=(\omega_{n},\omega_{n}^{\prime})\) are random i.i.d. variables according to some compactly supported distribution. For sake of concreteness, let us suppose that \(B=e_{1}e_{L}^{*}\) where \(\{e_{1},\ldots,e_{L}\}\) is an orthonormal basis of \(\mathbb{C}^{L}\). Moreover, \(\lambda\), \(\mu\) are coupling constants. Finally \(m\) is a fixed mass parameter that assures that \(M_{n}\) is invertible with a uniform lower bound on \(M_{n}^{*}M_{n}\) (for \(\mu\) sufficiently small) and it allows one to drive the system into a topological phase. Note that \(H\) also has the chiral symmetry \(JHJ=-H\) so that the spectrum is symmetric around the center of band \(E_{c}=0\). For reasons explained below, this energy will also be called critical. Just as for any random Schrodinger operator, the generalized SSH model has a well-defined integrated density of states (IDOS) defined as the non-decreasing function \(E\in\mathbb{R}\mapsto\mathcal{N}(E)\) given by \[\mathcal{N}(E)\;:=\;\lim_{N\to\infty}\frac{1}{N}\;\frac{1}{2L}\;\#\{\text{ eigenvalues of }H_{N}\,\leq\,E\}\,,\] where \(H_{N}\) is the restriction of \(H\) to \(\ell^{2}(\{1,\ldots,N\},\mathbb{C}^{2L})\). The limit is known to exist almost surely [16]. Furthermore, such a one-dimensional random model has a (smallest non-negative) Lyapunov exponent \(\gamma(E)\geq 0\) for every energy \(E\in\mathbb{R}\). Further down this will be introduced more carefully and it will also be shown that at the critical energy \(\gamma(0)=|\mathbb{E}(\log(\kappa))|\) where \(\kappa_{\sigma}\) is a positive random variable defined by \[\kappa_{\sigma}\;=\;\frac{1}{|e_{1}^{*}M_{\sigma}^{-1}e_{L}t_{\sigma}|}\quad \text{ where}\quad t_{\sigma}=1+\lambda\omega\,,\quad M_{\sigma}=\frac{1}{2}(m \,\mathbf{1}_{L}+\mu\,\omega^{\prime})\,,\] with \(\sigma=(\omega,\omega^{\prime})\in\mathbb{R}\times\mathbb{C}^{L\times L}\) as in the model above. By definition (as in [14]), the parameters \(\lambda,\mu\) at which the Lyapunov exponent at the center of band vanishes make up the phase boundary \(\mathcal{P}\) of the SSH model, namely \[\mathcal{P}\;=\;\left\{(\lambda,\mu)\in\mathbb{R}^{2}\;:\;\gamma(0)=|\mathbb{ E}(\log(\kappa))|=0\right\}.\] The main result of this work now states that one can read off from the IDOS whether a model lies on \(\mathcal{P}\) or not. **Theorem 1**: _For \((\lambda,\mu)\not\in{\cal P}\) lying off the phase boundary, i.e. \(\mathbb{E}(\log(\kappa))\neq 0\), the IDOS of the dirty SSH has a pseudo-gap at \(0\) in the sense that_ \[\lim_{E\to 0}\,\frac{\log\big{|}\,{\cal N}(E)\,-\,{\cal N}(0)\,\big{|}}{\log(E)} \;=\;\nu\,, \tag{2}\] _where \(\nu>0\) is determined as the unique positive solution of \(\mathbb{E}(\kappa^{\nu})=1\) if \(\mathbb{E}(\log(\kappa))<0\), and the solution of \(\mathbb{E}(\kappa^{-\nu})=1\) otherwise. On the other hand, for \((\lambda,\mu)\in{\cal P}\) on the phase boundary, i.e.
\(\mathbb{E}(\log(\kappa))=0\), with some constant \(C\), the DOS has a characteristic divergence at \(0\) specified by_ \[\Big{|}\,{\cal N}(E)\,-\,{\cal N}(0)\,-\,\tfrac{1}{4L}\,\mathbb{E}\big{(}\big{(} \log(\kappa)\big{)}^{2}\big{)}\,\big{(}\log(E)\big{)}^{-2}\,\Big{|}\;\leq\;C \;|\log(E)|^{-3}\,. \tag{3}\] Let us compare Theorem 1 with the literature on random hopping models which, as will be explained in Section 2, is essentially the particular case \(L=1\) of the generalized SSH model. For the random hopping model, the upper bound \(|{\cal N}(E)-{\cal N}(0)|\leq C_{\delta}|E|^{\nu-\delta}\) was proved in [2] for all \(\delta>0\). Hence (2) provides also the corresponding lower bound. For the random hopping model and points on \({\cal P}\), the characteristic divergence (3) is referred to as Dyson's spike, due to his work [6] showing this for a particular distribution of the random hopping terms. Apart from Dyson's work, there are several non-rigorous works on both regimes covered by Theorem 1. Section V.E in the review [8] contains relevant references. The behavior of the integrated density of states as in (3) was more recently proved rigorously to hold under even weaker assumptions by Kotowski and Virag [12] (no independence is assumed in their work, merely a sufficiently rapid correlation decay). However, no explicit error bound of order \(|\log(E)|^{-3}\) was provided, merely a bound of order \(o(E)\log(E)^{-2}\). Here we place all of these results in the joint context of topological phases and provide considerable technical improvements in the proofs. In order to justify this last statement, let us provide a more detailed technical comparison with the work of Kotowski and Virag [12]. First of all, both works analyze the perturbation of the rotation number of the induced dynamics on projective space \(\mathbb{P}(\mathbb{R}^{2})\cong\mathbb{S}^{1}\) driven by the transfer matrices. The unperturbed dynamics at \(E=0\) leaves two critical points and semicircles in between invariant. The energy-dependent perturbation adds some rotation around the critical points (all in the same direction) and one needs to analyze the number of passages by the critical points into the next semi-circle. In the long-time limit one then obtains the rotation number which is equivalent to the density of states. In [12] the free dynamics (which does not rotate) is subtracted by conjugations which involve products of the \(\kappa\)'s and these products can push the \({\cal O}(E)\) rotation induced by the perturbation. Now the process is compared to some family of different dynamics (with an additional parameter \(\delta\)) that are partially slower/faster (under certain conditions, and for a number of steps \(n<\delta(\tfrac{1}{E})^{\delta/4}\) not too large). Then the number of crossings to the 'next' semi-circle is shown to be approximately equal to the number of times where the log-transformed free dynamics \(\sum_{l=1}^{n}\log(\kappa_{l})\) makes jumps of order \(|\log(E)|\). Playing with all the parameters one can get to some scaling limit for \(E=e^{-\sqrt{n}}\to 0\) and \(n\to\infty\). Finally, using probabilistic techniques (_cf._ [12, Theorem 3.10]) the authors obtain bounds for the rotation number for \(E\) small, meaning they can keep \(E\) fixed and let \(n\to\infty\), namely a statement similar to (3), but only with an error \(o(E)\). In contradistinction, in this work the dynamics is analyzed directly, without conjugation of the free dynamics.
It is then compared to suitable slower and faster dynamics which allow one to estimate the crossings at fixed \(E\). In essence, the slower dynamics drops the \({\cal O}(E)\) rotation except for the region close to the critical points, and the faster dynamics essentially replaces the \({\cal O}(E)\) perturbations by a \(o(E)\) drift going forward. Formulating the rotation number in terms of an expectation of a certain stopping time, one can use the optional stopping theorem to get the claimed estimates. The present approach is more direct and a lot less technical than the one of [12]. Moreover, the constructions work immediately for both cases treated in Theorem 1; only the constructed martingales and the consequent usage of the optional stopping theorem are of a different nature, leading to the different behavior at \(E=0\). It is hard to see how to modify the techniques of [12] for the case \(\mathbb{E}(\log(\kappa))\neq 0\) (which does not mean that it cannot be done). To conclude this introductory section, let us mention that we are currently investigating several interesting open questions on the generalized SSH model. First of all, one would like to have a controlled perturbation theory for the Lyapunov exponent in the vicinity of the critical energy of models on the transition (as in [16, 11, 3]). This hinges on a good understanding of the Furstenberg measure. As illustrated numerically in Figure 1 below, the Lyapunov exponent (or inverse localization length) has a similar singular behavior as the IDOS, as predicted by theoretical physicists (see [8]). Second of all, we expect that all these states at energies with large localization length (as exhibited in Theorem 1) lead to a quantitative lower bound on the quantum dynamics (going beyond the statements of [17, 19] showing that models at the topological phase boundaries cannot be dynamically Anderson localized). The mechanism behind this quantitative delocalization phenomenon is similar to that in the random dimer model [4], the random polymer model [11] or the random Kronig-Penney model [3], but a proof is much more subtle due to the presence of the singularities of the DOS and the Lyapunov exponent. Finally, another question concerns the fate of the (likely enhanced) area law in these models [15]. Let us note that the nature of the level statistics near the critical energy for models on the transition was already determined in [12].

## 2 Transfer matrices and critical energies

The proof of Theorem 1 uses the transfer matrix formalism for the study of quasi-one-dimensional Jacobi operators. Clearly, the SSH Hamiltonian (1) is such a block Jacobi matrix with \(2L\times 2L\) block entries on every site. However, the off-diagonal entries are not invertible so that one cannot define the \(4L\times 4L\) transfer matrices in the usual form (which involves working with the inverse of the off-diagonal terms). One rather has to pass to the so-called reduced transfer matrices [5, 18, 20]. In the present situation, the matrices \(B\) are of rank 1 and therefore the reduced transfer matrices will be of size \(2\times 2\) satisfying \[T^{*}\,I\,T\;=\;I\,,\qquad I\;:=\;\begin{pmatrix}0&-1\\ 1&0\end{pmatrix}\,. \tag{4}\] For their construction, let us denote by \(\{e_{1},\ldots,e_{L}\}\) a basis for \(\mathbb{C}^{L}\), chosen such that \(B=e_{1}e_{L}^{*}\).
Then the ranges of the lower and upper entries in the block Jacobi matrices both have a one dimensional span \({\cal H}^{+}={\rm span}\{e_{1}\}\) and \({\cal H}^{-}={\rm span}\{e_{L}\}\) in \({\mathbb{C}}^{L}\), respectively. These two spaces are orthogonal as required in [20]. The relevant part of the resolvent of the diagonal part is \[\begin{pmatrix}e_{1}\\ e_{L}\end{pmatrix}^{*}\left(E\,{\bf 1}\,-\,\begin{pmatrix}0&M_{n}\\ M_{n}^{*}&0\end{pmatrix}\right)^{-1}\begin{pmatrix}e_{1}\\ e_{L}\end{pmatrix}\;=\;\begin{pmatrix}G_{n}^{E,-,-}&G_{n}^{E,-,+}\\ G_{n}^{E,+,-}&G_{n}^{E,+,+}\end{pmatrix}\,,\] by definition of the 4 scalar entries on the r.h.s. Therefore the reduced transfer matrices, given in (15) or (17) of [20], are \[T_{n}^{E}\;=\;-\,\begin{pmatrix}(G_{n}^{E,-,+})^{-1}&(G_{n}^{E,-,+})^{-1}G_{n}^ {E,-,-}\\ -G_{n}^{E,+,+}(G_{n}^{E,-,+})^{-1}&G_{n}^{E,+,-}-G_{n}^{E,+,+}(G_{n}^{E,-,+})^ {-1}G_{n}^{E,-,-}\end{pmatrix}\begin{pmatrix}\frac{1}{t_{n}}&0\\ 0&\overline{t_{n}}\end{pmatrix}\,. \tag{5}\] Let us note that this SSH model also fits into the scheme of one-channel operators and the transfer matrices above correspond exactly to (1.10) in [18]. The invertibility of \(M_{n}\) assures that there is an open interval around \(E_{c}=0\) in which \(\begin{pmatrix}0&M_{n}\\ M_{n}^{*}&0\end{pmatrix}\) has no eigenvalue and therefore all three hypotheses stated in [20] are satisfied and one can conclude that the reduced transfer matrices are analytic in \(E\) in this interval. As already stressed above, they also satisfy (4). This implies that their determinant is of unit modulus. In order to attain a determinant equal to 1, one can use an energy dependent gauge transformation \((W^{E}\psi)_{n}=e^{\imath\varphi_{n}}\psi_{n}\) with phases \(\varphi_{n}\in[0,2\pi)\) to be chosen next. In fact, the Hamiltonian \(W^{E}H(W^{E})^{*}\) is of the same block diagonal form as \(H\); the diagonal entries \(M_{n}\) are unchanged, but the off-diagonal entries are obtained by replacing \(t_{n}\) by \(t_{n}e^{\imath\,\delta\varphi_{n}}\) with \(\delta\varphi_{n}=\varphi_{n}-\varphi_{n-1}\). According to (5) this changes \(\det(T_{n}^{E})\) to \(e^{-2\imath\,\delta\varphi_{n}}\det(T_{n}^{E})\). Therefore for every energy \(E\in{\mathbb{R}}\) the angles \(\varphi_{n}\) can be chosen iteratively in \(n\) such that all transfer matrices at \(E\) have unit determinant. Hence in the following, we will always assume \(\det(T_{n}^{E})=1\) without further mention of the gauge transformation (the necessary phases are simply absorbed in redefined \(t_{n}\)). This implies that the whole reduced transfer matrix \(T_{n}^{E}\) is real, as one readily deduces from the relation \(T^{*}=IT^{-1}I^{*}\) following from (4). Alternatively, one can simply assume that the \(t_{n}\) and matrices \(M_{n}\) are all real, then also the reduced transfer matrices in (5) are real with unit determinant (or equal to \(-1\), which can again be absorbed by a gauge transformation). When focusing on the behavior of the reduced transfer matrices at \(E_{c}=0\), one needs the relations \[G_{n}^{0,-,-}\;=\;0\,,\qquad G_{n}^{0,+,-}\;=\;-e_{1}^{*}(M_{n})^{-1}e_{L}\,, \qquad G_{n}^{0,-,+}\;=\;(G_{n}^{0,+,-})^{*}\,,\qquad G_{n}^{0,+,+}\;=\;0\,.\] They imply \[T_{n}^{0}\;=\;-\,\begin{pmatrix}\kappa_{n}&0\\ 0&\frac{1}{\kappa_{n}}\end{pmatrix}\,,\qquad\kappa_{n}\;=\;\frac{1}{G_{n}^{0, -,+}\,t_{n}}\,.
\tag{6}\] Hence \(E_{c}=0\) is indeed a hyperbolic critical energy in the sense [2] that all transfer matrices commute and some of them are hyperbolic (if any of the distributions is non-trivial so that \(|\kappa_{n}|\) is not identically equal to 1). Based on this, one can further expand the reduced transfer matrices in \(E\), which leads to the situation (7) studied in the remainder of the paper. Let us briefly specify how to obtain the model studied in [14] as well as the random hopping model from [6, 12, 2]. One chooses \(L=1\), \(B=1\) and then the matrices \(M_{n}\) are scalars \(m_{n}\). If one denotes \(\lambda=W_{1}\), \(\mu=W_{2}\), and \(\omega_{n}\) and \(\omega_{n}^{\prime}\) have a uniform distribution on \([-\frac{1}{2},\frac{1}{2}]\), one obtains after conjugation with a suitable Cayley transform exactly the random Hamiltonian of [14]. For these particular distributions, the Lyapunov exponent at \(E_{c}=0\) can be calculated explicitly [14], but the results of this paper do not depend on these particular choices. Let us also spell out the reduced transfer matrices in this case. One finds that \(G_{n}^{E,+,-}=\frac{m_{n}}{E^{2}-m_{n}^{2}}\) and \(G_{n}^{E,+,+}=G_{n}^{E,-,-}=\frac{E}{E^{2}-m_{n}^{2}}\). Then one readily checks \[T_{n}^{E}\;=\;\left(\begin{matrix}\frac{m_{n}}{t_{n}}&0\\ 0&\frac{t_{n}}{m_{n}}\end{matrix}\right)\,+\,E\left(\begin{matrix}0&-\,\frac{ t_{n}}{m_{n}}\\ \frac{1}{m_{n}\,t_{n}}&0\end{matrix}\right)\,-\,E^{2}\left(\begin{matrix}\frac{ 1}{m_{n}\,t_{n}}&0\\ 0&0\end{matrix}\right)\,.\] This is actually also connected to Dyson's random hopping model studied in [6, 12]. More precisely, set \(\hat{t}_{2n}=t_{n}\) and \(\hat{t}_{2n+1}=m_{n}\) and suppose that they are identically distributed, then one can check \[T_{n}^{E}\;=\;\left(\begin{matrix}-\,E\,\frac{1}{\hat{t}_{2n+1}}&-\hat{t}_{2n+1}\\ \frac{1}{\hat{t}_{2n+1}}&0\end{matrix}\right)\left(\begin{matrix}-\,E\,\frac{1}{\hat{t}_{2n}}&-\hat{t}_{2n}\\ \frac{1}{\hat{t}_{2n}}&0\end{matrix}\right)\,,\] which is indeed the two-step transfer matrix of a random hopping Hamiltonian on \(\ell^{2}(\mathbb{Z})\) given by \[(H\psi)_{n}\;=\;-\,\hat{t}_{n+1}\psi_{n+1}\,-\,\hat{t}_{n}\psi_{n-1}\,,\qquad \psi=(\psi_{n})_{n\in\mathbb{Z}}\in\ell^{2}(\mathbb{Z})\,.\] Hence both the dirty SSH Hamiltonian from [14] and the random hopping model are particular cases of the generalized SSH Hamiltonian (1).

## 3 Prufer phase formalism near critical energy

In the theory of products of random \(2\times 2\) matrices, the associated Lyapunov exponent can be accessed via the random action of the matrices on projective space, which in turn is bijectively mapped to a unit circle making up the so-called Prufer phases. By a Cayley transform, they are mapped to real numbers which are then called Dyson-Schmidt variables. In both cases, the action is implemented by a Mobius transformation. This way of approaching the Lyapunov exponent is particularly efficient for perturbative expansions [16, 13]. Furthermore, if the random matrices are the transfer matrices from a given one-dimensional random operator and the Prufer phases are suitably lifted to \(\mathbb{R}\), then oscillation theory also allows one to extract the DOS from the Prufer phases and again this is a good way to tackle perturbative problems. Traditionally, perturbation theory is done in a coupling constant of the randomness, corresponding to a weak coupling limit of the randomness (_e.g._ in the one-dimensional Anderson model).
However, there are other situations where the perturbative parameter is the energy distance to some critical energy, and is hence intrinsic to the model rather than an external parameter. The first example of this type is the random dimer model [4] and its generalization, the random polymer model [11]. In these models there exists a so-called critical energy at which all (random) transfer matrices commute and, moreover, the spectrum of all these matrices lies on the unit circle so that they can be simultaneously diagonalized into (random) rotations. Due to the latter fact, the critical energy of this type is called elliptic. On the other hand, in a random Kronig-Penney model there can be a critical energy at which the transfer matrices are all similar to a Jordan block [3], so that the critical energy is then called parabolic. Finally, it was pointed out in [2] that the random hopping model and the SSH model have a hyperbolic critical energy with transfer matrices having their spectra off the unit circle. Sections 5 and 6 treat the case in which the Lyapunov exponent is nonvanishing at the critical energy, which is then called unbalanced. Section 7 then concerns the so-called balanced case with a vanishing Lyapunov exponent. In order to cover other possible applications of hyperbolic critical energies and to stress structural aspects, let us consider the same set-up as in [2]. Suppose \((\Sigma,{\bf p})\) is a compact probability space and \(\sigma\in\Sigma\mapsto T^{E_{c}+\epsilon}_{\sigma}\in\mathrm{SL}(2,\mathbb{R})\) a family of transfer matrices over polymer blocks of length \(L_{\sigma}\in\mathbb{N}\) which is of the form \[T^{E_{c}+\epsilon}_{\sigma}\;=\;\pm\left[{\bf 1}\,+\,a_{\sigma}\epsilon\begin{pmatrix} 0&-1\\ 1&0\end{pmatrix}\,+\,b_{\sigma}\epsilon\begin{pmatrix}0&1\\ 1&0\end{pmatrix}\,+\,c_{\sigma}\epsilon\begin{pmatrix}1&0\\ 0&-1\end{pmatrix}\,+\,{\cal O}(\epsilon^{2})\right]D_{\kappa_{\sigma}}\,. \tag{7}\] Here \(a_{\sigma},b_{\sigma},c_{\sigma}\) are real numbers, \(\kappa_{\sigma}>0\) and furthermore \[D_{\kappa}\;=\;\begin{pmatrix}\kappa&0\\ 0&\frac{1}{\kappa}\end{pmatrix}\,.\] The hyperbolic critical energy will be called unbalanced if \(\mathbb{E}(\log(\kappa))\neq 0\) and balanced if \(\mathbb{E}(\log(\kappa))=0\). In the former situation, we will always focus on the case \(\mathbb{E}(\log(\kappa))<0\), as otherwise one can simply conjugate by the matrix \(\begin{pmatrix}0&-1\\ 1&0\end{pmatrix}\). The particular form (7) covers the reduced transfer matrices \(T^{E}_{n}\) of the generalized SSH model given in (5) due to (6) and the analyticity at \(E_{c}=0\). It is also readily possible to deduce the random real coefficients for these models. In more general situations (such as random polymer models) one may use so-called modified transfer matrices to attain (7), see [11, 2] for details. Let us note that one can show (see Proposition 3 in [2]) that the inequalities \(a_{\sigma}\geq 0\) and \(a_{\sigma}^{2}\geq b_{\sigma}^{2}+c_{\sigma}^{2}\) hold for all \(\sigma\in\Sigma\). In all arguments below, it is possible to absorb the contribution of \(c_{\sigma}\) in the diagonal term by replacing \(\kappa_{\sigma}\) by \(\kappa_{\sigma}(1+\epsilon c_{\sigma})\). In order to somewhat simplify notations, we will suppose \(c_{\sigma}=0\) for all \(\sigma\in\Sigma\).
It will be useful to rewrite (7) as \[T^{E_{c}+\epsilon}_{\sigma}\;=\;R^{\epsilon}_{\sigma}\,D_{\kappa_{\sigma}}\,, \tag{8}\] with the notations \[R^{\epsilon}_{\sigma}\;=\;{\bf 1}\,+\,a_{\sigma}\epsilon\begin{pmatrix}0&-1 \\ 1&0\end{pmatrix}\,+\,b_{\sigma}\epsilon\begin{pmatrix}0&1\\ 1&0\end{pmatrix}\,+\,\epsilon^{2}\,A^{\epsilon}_{\sigma}\,,\qquad A^{ \epsilon}_{\sigma}\;=\;\begin{pmatrix}\alpha^{\epsilon}_{\sigma}&\beta^{ \epsilon}_{\sigma}\\ \gamma^{\epsilon}_{\sigma}&\delta^{\epsilon}_{\sigma}\end{pmatrix}\,.\] The overall sign in (7) is neglected as it merely leads to a shift by \(\pi\) in the Prufer phase dynamics below that is irrelevant for the Prufer phases relative to the critical energy. In the following, let us consider a random polymer Hamiltonian with hyperbolic critical energy so that the \(n\)th (possibly modified) transfer matrices are of the form (8) with coefficients drawn from the probability space \((\Sigma,{\bf p})\) (in which the \(\kappa\) coefficients are taken to be independently and identically distributed). Hence \(\omega=(\sigma_{n})_{n\in\mathbb{Z}}\) is a configuration from \(\Omega=\Sigma^{\mathbb{Z}}\). The expectations w.r.t. the probability measure \(\mathbb{P}\) on \(\Omega\) will be denoted by \(\mathbb{E}\). Associated are random coefficients and matrices \(a_{\sigma_{n}}\), \(b_{\sigma_{n}}\), \(T^{\epsilon}_{\sigma_{n}}\), \(\kappa_{\sigma_{n}}\), _etc._, which for sake of notational convenience will simply be denoted by \(a_{n}\), \(b_{n}\), \(T_{n}^{\epsilon}\), \(\kappa_{n}\), _etc._, unless there is some danger of misunderstanding. Associated to each configuration is a random sequence of Prufer phases \(\theta_{n}^{\epsilon}\in\mathbb{R}\) at \(\epsilon\) (and relative to the critical energy \(E_{c}\)) which can be introduced by \[e_{\theta_{n}^{\epsilon}}\ =\ \frac{T_{n}^{\epsilon}\,e_{\theta_{n-1}^{\epsilon}} }{\|T_{n}^{\epsilon}\,e_{\theta_{n-1}^{\epsilon}}\|}\,,\qquad e_{\theta}\ :=\ \begin{pmatrix}\cos(\theta)\\ \sin(\theta)\end{pmatrix}\,,\] a given (and irrelevant) initial condition \(\theta_{0}^{\epsilon}\) and the lifting condition \(\theta_{n+1}^{\epsilon}-\theta_{n}^{\epsilon}\in(-\frac{\pi}{2},\frac{3\pi}{ 2})\) fixing the branch. Note that this definition is induced by a group action of \(\mathrm{SL}(2,\mathbb{R})\) on \(\mathbb{R}\) and hence \((\theta_{n}^{\epsilon})_{n\in\mathbb{Z}}\) is a Markov process on \(\mathbb{R}\). As explained in detail in [11] and [2], the IDOS of the random polymer model is then given by \[\mathcal{N}(E_{c}+\epsilon)\ =\ \mathcal{N}(E_{c})\,+\,\frac{1}{\pi}\,\frac{1}{ \mathbb{E}(L_{\sigma})}\ \lim_{N\to\infty}\,\frac{1}{N}\,\mathbb{E}(\theta_{N}^{ \epsilon})\,.\] For the generalized SSH model this also holds by combining the arguments of [2] with the oscillation theory as described in [20]. The r.h.s. is the so-called rotation number, here relative to the critical energy. It is helpful to write it as a Birkhoff sum \[\mathcal{N}(E_{c}+\epsilon)\,-\,\mathcal{N}(E_{c})\ =\ \frac{1}{\pi}\,\frac{1}{ \mathbb{E}(L_{\sigma})}\ \lim_{N\to\infty}\,\frac{1}{N}\,\sum_{n=1}^{N}\mathbb{E}(\theta_{n}^{\epsilon} \,-\,\theta_{n-1}^{\epsilon})\,, \tag{9}\] because by the above each summand then lies in the interval \((-\frac{\pi}{2},\frac{3\pi}{2})\) and is called a phase shift. 
Before going into an intuitive description of the random dynamics of Prufer phases, let us furthermore recall from [11, 2] that the Lyapunov exponent can be expressed as a Birkhoff sum of the Prufer phases as well: \[\gamma(E_{c}+\epsilon)\ =\ \lim_{N\to\infty}\,\frac{1}{N}\,\sum_{n=1}^{N} \mathbb{E}\big{(}\log(\|T_{n+1}^{\epsilon}e_{\theta_{n}^{\epsilon}}\|)\big{)}\,. \tag{10}\] (One may include a factor \(\frac{1}{\mathbb{E}(L_{\sigma})}\) here.) The two formulas (9) and (10) allow one to numerically compute the IDOS and the Lyapunov exponent for the random hopping model with great precision. As an example, both formulas are implemented in the balanced case of the random hopping model in Figure 1. In particular, this illustrates (3) and shows that the Lyapunov exponent has a similar behavior, as argued in the physics literature (see again Section V.E in [8]).
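To indicate how such a computation can be set up, here is a self-contained sketch (our own toy illustration, not the authors' code: the uniform hopping distribution, the orbit length and all names are choices made for this example) which iterates the lifted Prufer recursion for the balanced random hopping model with the one-step factors from Section 2.

```python
import numpy as np

rng = np.random.default_rng(0)

def prufer_step(T, theta):
    """One step of the lifted Prufer recursion: e_theta is mapped by T and
    the new angle is lifted so that theta_new - theta lies in [-pi/2, 3*pi/2)
    (the boundary convention is immaterial for the Birkhoff limits).
    Also returns log ||T e_theta||, the summand of (10)."""
    v = T @ np.array([np.cos(theta), np.sin(theta)])
    raw = np.arctan2(v[1], v[0])
    k = np.ceil((theta - 0.5 * np.pi - raw) / (2.0 * np.pi))
    return raw + 2.0 * np.pi * k, np.log(np.linalg.norm(v))

def birkhoff_sums(eps, N=200_000):
    """Relative rotation number and Lyapunov exponent at energy E_c + eps
    for the hopping model with i.i.d. hoppings uniform on [0.5, 1.5]
    (identically distributed hoppings, hence the balanced case)."""
    theta, lyap = 0.0, 0.0
    for _ in range(N):
        t = rng.uniform(0.5, 1.5)
        T = np.array([[-eps / t, -t], [1.0 / t, 0.0]])
        theta, g = prufer_step(T, theta)
        lyap += g
    # At eps = 0 the phase advances by exactly pi/2 per step, so subtracting
    # 1/2 isolates the relative IDOS as in Figure 1 (up to the model-dependent
    # normalisation factor 1/E(L_sigma) appearing in (9)).
    return theta / (np.pi * N) - 0.5, lyap / N

for eps in (1e-1, 1e-2, 1e-3):
    print(eps, *birkhoff_sums(eps))
```

For small \(\epsilon\) both printed quantities decay only at a logarithmic rate, which is the slow behaviour visible in the log-log plot of Figure 1.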
Figure 1: Numerical plot of the IDOS \({\cal N}(\epsilon)-{\cal N}(0)={\cal N}(\epsilon)-\frac{1}{2}\) relative to the center of band \(E_{c}=0\) and the Lyapunov exponent \(\gamma(\epsilon)\) for the balanced random hopping model, both in a log-log plot. All points on these curves are computed via the Birkhoff sums (9) and (10) over orbits of length \(N=10^{7}\).

For the convenience of the reader, let us briefly recall from [2] the intuitive description of the Prufer phase dynamics for \(\epsilon\geq 0\). According to (8) and the group action property, it is useful to split the dynamics into two steps, first one induced by \(D_{\kappa_{\sigma}}\) and the second by \(R_{\sigma}^{\epsilon}\). Thus let us set for half-integers \(n^{\prime}=n-\frac{1}{2}\) \[e_{\theta_{n}^{\epsilon}}\ =\ \frac{R_{n}^{\epsilon}\,e_{\theta_{n^{\prime}}^{ \epsilon}}}{\|R_{n}^{\epsilon}\,e_{\theta_{n^{\prime}}^{\epsilon}}\|}\,, \qquad e_{\theta_{n^{\prime}}^{\epsilon}}\ =\ \frac{D_{n}\,e_{\theta_{n-1}^{ \epsilon}}}{\|D_{n}\,e_{\theta_{n-1}^{\epsilon}}\|}\,, \tag{11}\] where \(D_{n}=D_{\kappa_{n}}\). The first step of the random dynamics induced by \(D_{n}\) has fixed points at \(\frac{\pi}{2}\,\mathbb{Z}\) and leaves each interval \((k\frac{\pi}{2},(k+1)\frac{\pi}{2})\) invariant. It will be explained and used below that in a logarithmic representation of the associated Dyson-Schmidt variables this dynamics becomes a random walk, with a supplementary drift in the unbalanced case. The second step induced by \(R_{n}^{\epsilon}\) is a right shift (or clockwise rotation on the projected circle) by random angles of order \(\epsilon\) because \(a_{n}-|b_{n}|\geq C_{1}>0\) a.s. and \[\theta_{n}^{\epsilon}\;=\;\theta_{n^{\prime}}^{\epsilon}\:+\:\epsilon\big{(}a_ {n}+b_{n}\cos(2\theta_{n^{\prime}}^{\epsilon})\big{)}\,+\,{\cal O}(\epsilon^{ 2})\,. \tag{12}\] Hence the combined dynamics passes through the fixed points \(\frac{\pi}{2}\,\mathbb{Z}\) only in the increasing direction. This is illustrated in Figure 2. Note that, in particular, the random dynamics passes in an alternating manner through a fixed point from \(\pi\mathbb{Z}\) and one from \(\pi(\frac{1}{2}+\mathbb{Z})\). Furthermore, one readily deduces a crucial order preserving property of the random dynamics, namely if one considers two further sequences \(\widehat{\theta}_{n}^{\epsilon}\) and \(\widetilde{\theta}_{n}^{\epsilon}\) constructed as in (11) with the same realization \(\omega\), then \[\widehat{\theta}_{0}^{\epsilon}\;<\;\theta_{0}^{\epsilon}\;<\;\widetilde{ \theta}_{0}^{\epsilon}\quad\Longrightarrow\quad\widehat{\theta}_{n}^{ \epsilon}\;<\;\theta_{n}^{\epsilon}\;<\;\widetilde{\theta}_{n}^{\epsilon}\,, \tag{13}\] for all \(n\in\frac{1}{2}\,\mathbb{Z}\). Based on this, it will be shown how to bound the dynamics above and below by two constructed processes. Then the rotation number in (9) can, via the elementary renewal theorem, be estimated by the inverse of their expected passage times through the intervals \((k\frac{\pi}{2},(k+1)\frac{\pi}{2})\).

## 4 Dyson-Schmidt variables and renewal processes

The Dyson-Schmidt variable \(x_{n}^{\epsilon}\in\mathbb{R}\) for \(n\in\frac{1}{2}\mathbb{Z}\) associated to the Prufer phases is defined by \[x_{n}^{\epsilon}\;:=\;-\,\cot(\theta_{n}^{\epsilon})\,.\] This establishes an orientation preserving bijection of every interval \([k\pi,(k+1)\pi)\) with \(k\in\mathbb{Z}\) to \(\overline{\mathbb{R}}=\mathbb{R}\cup\{\infty\}\), with the central point \((k+\frac{1}{2})\pi\) being mapped to \(0\). Then (11) becomes for \(n\in\mathbb{Z}\) and \(n^{\prime}=n-\frac{1}{2}\) \[x_{n}^{\epsilon}\;:=\;-(R_{n}^{\epsilon}\cdot(-x_{n^{\prime}}^{\epsilon}))\;= \;Q_{n}^{\epsilon}\cdot x_{n^{\prime}}^{\epsilon}\,,\qquad x_{n^{\prime}}^{ \epsilon}\;:=\;D_{n}\cdot x_{n-1}^{\epsilon}\,, \tag{14}\] where the dot \(\cdot\) denotes the standard Mobius action and \[Q_{n}^{\epsilon}\;:=\;\begin{pmatrix}1&0\\ 0&-1\end{pmatrix}R_{n}^{\epsilon}\begin{pmatrix}1&0\\ 0&-1\end{pmatrix}\,,\] namely \(Q_{n}^{\epsilon}\) is obtained from \(R_{n}^{\epsilon}\) by flipping the signs on the off-diagonals. Due to the explicit forms of \(Q_{n}^{\epsilon}\) and \(D_{n}\), the action here becomes \[x_{n}^{\epsilon}\;=\;\frac{(1+\epsilon^{2}\alpha_{n}^{\epsilon})x_{n^{ \prime}}^{\epsilon}+(a_{n}{\!-\!b_{n}-\epsilon}\beta_{n}^{\epsilon})\epsilon} {1+\epsilon^{2}\delta_{n}^{\epsilon}-(a_{n}{\!+\!b_{n}+\epsilon}\gamma_{n}^{ \epsilon})\epsilon x_{n^{\prime}}^{\epsilon}}\,,\qquad x_{n^{\prime}}^{ \epsilon}\;=\;\kappa_{n}^{2}\,x_{n-1}^{\epsilon}\,. \tag{15}\] Note that the interval \([k\pi,(k+1)\pi)\) of Prufer variables contains two fixed points \(k\pi\) and \((k+\frac{1}{2})\pi\) of the dynamics generated by \(D_{\kappa}\), so that one copy \(\overline{\mathbb{R}}\) of the Dyson-Schmidt variable also contains two such fixed points \(0\) and \(\infty\). For the moment, only the half-axis \([0,\infty)\subset\overline{\mathbb{R}}\) will be considered. Below, it will be justified that this is indeed sufficient to show the results in Theorem 1. On this interval, it will be useful to take the logarithm. For this purpose, let us now state the main technical assumptions throughout the remainder of the paper. **Hypothesis:**_The family \((\log(\kappa_{n}))_{n\geq 0}\) of random variables is supposed to be independent and identically distributed with a non-trivial distribution in the sense that \(\mathbb{P}\big{(}\{\log(\kappa_{0})>0\}\big{)}>0\). This distribution is also assumed to have compact support, that is,_ \[C_{0}\;:=\;\operatorname{ess\,sup}\;|\log(\kappa_{0})|\,\in\,(0,+\infty)\,,\] _where the essential supremum is taken over (the suppressed index) \(\sigma\in\Sigma\) w.r.t. the given distribution thereon.
Furthermore the following constants are supposed to be positive and finite:_ \[C_{1}\;:=\;\operatorname{ess\,inf}\,(a_{\sigma}-|b_{\sigma}|)\;,\quad C_{2}\; :=\;\operatorname{ess\,sup}\big{(}a_{\sigma}+|b_{\sigma}|\big{)}\,,\quad C_{ 3}\;:=\;\sup_{|\epsilon|\leq 1}\operatorname{ess\,sup}\|A_{\sigma}^{ \epsilon}\|\,.\] This implies, in particular, that one has at least for \(\epsilon=0\) \[\log(x_{n+1}^{0})\;=\;\log(x_{n}^{0})\;+\;2\,\log(\kappa_{n})\;=\;\log(x_{n} ^{0})\;+\;2\,C_{0}\,\chi_{n}\,, \tag{16}\] where \(\chi_{n}:=\frac{1}{C_{0}}\log(\kappa_{n})\) is a random variable satisfying \(-1\leq\chi_{n}\leq 1\) almost surely. In the unbalanced case, \(\mathbb{E}(\chi_{n})<0\) while in the balanced \(\mathbb{E}(\chi_{n})=0\). In both cases, \(\log(x_{n}^{0})\) is a random walk on \(\mathbb{R}\). It will be shown in the next two sections that in this logarithmic representation the random walk roughly has to go from \(\log(\epsilon)\) to \(-\log(\epsilon)\). The central limit theorem indicates that it takes of the order of \(\log(\epsilon)^{2}\) time steps to cross this distance of order \(|\log(\epsilon)|\) in the balanced case, and it turns out that in the unbalanced case an order of \(\epsilon^{-\nu}\) time steps are needed. This provides an intuitive understanding for the behavior in Theorem 1. Controlling the \(\epsilon\)-dependent perturbations is quite delicate and the main technical endeavor of this work. The outcome will be bounds for the r.h.s. of (9).
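This heuristic is easily checked numerically. The following toy simulation (our own illustration: the uniform distribution for \(\log(\kappa)\) and the reflecting lower edge, a crude stand-in for the \({\cal O}(\epsilon)\) rotation near the fixed points, are choices made here) shows that in the balanced case the expected number of steps to climb from \(\sim\epsilon\) to \(\sim 1/\epsilon\) matches \(\log(\epsilon)^{2}/\mathbb{E}(\log(\kappa_{0})^{2})\), the leading order appearing in Proposition 3 below.

```python
import numpy as np

rng = np.random.default_rng(1)
C = 0.5  # log(kappa) uniform on [-C, C]: E(log kappa) = 0, E(log(kappa)^2) = C**2 / 3

def mean_passage_time(eps, trials=2000):
    """Mean first-passage time of the walk u -> u + 2*log(kappa) from
    log(eps) up to -log(eps), with a reflecting lower edge standing in
    for the O(eps) rotation that pushes the true dynamics upwards."""
    lo, hi = np.log(eps), -np.log(eps)
    total = 0
    for _ in range(trials):
        u = lo
        while u < hi:
            u = max(u + 2.0 * rng.uniform(-C, C), lo)  # reflect at the bottom
            total += 1
    return total / trials

for eps in (1e-2, 1e-3):
    predicted = np.log(eps) ** 2 / (C ** 2 / 3.0)  # log(eps)^2 / E(log(kappa)^2)
    print(eps, mean_passage_time(eps), predicted)
```

Replacing the sampler by one with \(\mathbb{E}(\log(\kappa))<0\) makes the climb a rare event, and the mean passage time then grows polynomially in \(1/\epsilon\), in line with the rate \(\epsilon^{-\nu}\) identified in Proposition 2 below.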
In order to control the rotation number, it is useful to look at the order statistics of the following set of random variables \[\left\{N\in\mathbb{N}\,:\,x_{N-1}^{\epsilon}<0\leq x_{N}^{\epsilon}\text{ or }x_{N}^{\epsilon}<0\leq x_{N-1}^{\epsilon}\right\}, \tag{17}\] which will be denoted by the random increasing times \(N_{(1)}^{\epsilon}<N_{(2)}^{\epsilon}<N_{(3)}^{\epsilon}<...\). These are the random passage times over the two points \(0\) and \(\infty\) (which are fixed points of the action induced by \(D_{n}^{0}\), so without \(\epsilon\)-perturbation). Recall that the two conditions in (17) are realized in an alternating manner. For sake of concreteness, let us fix the initial condition \(x_{0}^{\epsilon}\in(-\infty,0)\cup\{\infty\}\) such that for all \(k\) one has \(x_{N_{(2k-1)}^{\epsilon}}^{\epsilon}<0\leq x_{N_{(2k-1)}^{\epsilon}-1}^{\epsilon}\) and \(x_{N_{(2k)}^{\epsilon}-1}^{\epsilon}<0\leq x_{N_{(2k)}^{\epsilon}}^{\epsilon}\). The (random) differences \(N_{(k+1)}^{\epsilon}-N_{(k)}^{\epsilon}\) are the durations of the passages of \(x\) through the intervals \([0,+\infty)\) and \(\overline{\mathbb{R}}\backslash[0,+\infty)\). Clearly, these quantities depend on the precise value of the initial condition \(x_{0}^{\epsilon}\) and therefore they are not identically distributed (nor independent). To circumvent this difficulty, two families of random dynamical processes \(\widehat{x}_{k}^{\epsilon}=(\widehat{x}_{k,n}^{\epsilon})_{n\geq 0}\) and \(\widetilde{x}_{k}^{\epsilon}=(\widetilde{x}_{k,n}^{\epsilon})_{n\geq 0}\) on \([0,\infty)\) will be constructed for all \(k\in\mathbb{N}\) in the two following sections, providing lower and upper bounds on the original process respectively. It will be imposed that \((\widehat{x}_{2k-1}^{\epsilon})_{k\in\mathbb{N}}\), \((\widehat{x}_{2k}^{\epsilon})_{k\in\mathbb{N}}\), \((\widetilde{x}_{2k-1}^{\epsilon})_{k\in\mathbb{N}}\) and \((\widetilde{x}_{2k}^{\epsilon})_{k\in\mathbb{N}}\) are all families of nonnegative i.i.d. random variables. Note that these processes will not exactly correspond to the notations \(\widehat{\theta}_{n}^{\epsilon}\) and \(\widetilde{\theta}_{n}^{\epsilon}\) in (13). They will obey almost surely that for all \(k\in\mathbb{N}\) and all \(n\in\{0,1,\ldots,N_{(k+1)}^{\epsilon}-N_{(k)}^{\epsilon}-1\}\) \[\widehat{x}_{k,n}^{\epsilon}\,\leq\,x_{N_{(k)}^{\epsilon}+n}^{\epsilon}\, \leq\,\widetilde{x}_{k,n}^{\epsilon}\qquad\text{or}\qquad\widehat{x}_{k,n}^ {\epsilon}\,\leq\,-\big{(}x_{N_{(k)}^{\epsilon}+n}^{\epsilon}\big{)}^{-1}\, \leq\,\widetilde{x}_{k,n}^{\epsilon} \tag{18}\] and furthermore, for the next step \[\widetilde{x}_{k,N_{(k+1)}^{\epsilon}-N_{(k)}^{\epsilon}}^{\epsilon}\,=\, \infty\,. \tag{19}\] Note that the left condition in (18) always applies during passages through \((0,\infty)\) and the right condition for passages through \((-\infty,0)\). Moreover, the constructed comparison processes are only constrained on the first \(N_{(k+1)}^{\epsilon}-N_{(k)}^{\epsilon}\) times. Associated to these two families of processes, there are now two families of random passage times \[\widehat{T}_{k}^{\epsilon}\,:=\,\inf\{n\in\mathbb{N}_{0}\,:\,\widehat{x}_{k, n}^{\epsilon}=\infty\}\,,\qquad\widetilde{T}_{k}^{\epsilon}\,:=\,\inf\{n\in \mathbb{N}_{0}\,:\,\widetilde{x}_{k,n}^{\epsilon}=\infty\}\,. \tag{20}\] Then (18) and (19) imply that a.s. \(\widetilde{T}_{k}^{\epsilon}\leq N_{(k+1)}^{\epsilon}-N_{(k)}^{\epsilon}\leq \widehat{T}_{k}^{\epsilon}\). Furthermore, by construction the families \((\widehat{T}_{2k-1}^{\epsilon})_{k\in\mathbb{N}}\), \((\widehat{T}_{2k}^{\epsilon})_{k\in\mathbb{N}}\), \((\widetilde{T}_{2k-1}^{\epsilon})_{k\in\mathbb{N}}\) and \((\widetilde{T}_{2k}^{\epsilon})_{k\in\mathbb{N}}\) are i.i.d. random variables. As then \((\widehat{T}_{2k-1}^{\epsilon}+\widehat{T}_{2k}^{\epsilon})_{k\in\mathbb{N}}\) and \((\widetilde{T}_{2k-1}^{\epsilon}+\widetilde{T}_{2k}^{\epsilon})_{k\in\mathbb{N}}\) are interarrival times [10], both families specify a renewal process, given by \[\widehat{P}_{N}^{\epsilon}\,:=\,\max\left\{K\in\mathbb{N}\,:\sum_{k=1}^{K}( \widehat{T}_{2k-1}^{\epsilon}+\widehat{T}_{2k}^{\epsilon})\leq N\right\},\,\, \,\,\widetilde{P}_{N}^{\epsilon}\,:=\,\max\left\{K\in\mathbb{N}\,:\sum_{k=1}^{ K}(\widetilde{T}_{2k-1}^{\epsilon}+\widetilde{T}_{2k}^{\epsilon})\leq N\right\}.\] These random variables can be interpreted as the number of times the slower or faster process has passed through \(\overline{\mathbb{R}}\) up to the time \(N\). Each such passage corresponds to a passage of the Prufer variables through \([k\pi,(k+1)\pi)\). Thus it follows that \[\widehat{P}_{N}^{\epsilon}-1\;\leq\;\frac{\theta_{N}^{\epsilon}}{\pi}\;\leq\; \widetilde{P}_{N}^{\epsilon}+1\] a.s. for \(N\in\mathbb{N}_{0}\) and \(\theta_{0}^{\epsilon}\in\left[-\frac{\pi}{2},\frac{\pi}{2}\right)\). Finally, the elementary renewal theorem [10] yields \[\frac{1}{\mathbb{E}(\widehat{T}_{1}^{\epsilon})+\mathbb{E}(\widehat{T}_{2}^{ \epsilon})}\;=\;\lim_{N\to\infty}\frac{\widehat{P}_{N}^{\epsilon}}{N}\;\leq\; \lim_{N\to\infty}\frac{1}{N}\frac{\mathbb{E}(\theta_{N}^{\epsilon})}{\pi}\; \leq\;\lim_{N\to\infty}\frac{\widetilde{P}_{N}^{\epsilon}}{N}\;=\;\frac{1}{ \mathbb{E}(\widetilde{T}_{1}^{\epsilon})+\mathbb{E}(\widetilde{T}_{2}^{ \epsilon})}\,. \tag{21}\] These bounds hold for both the unbalanced and the balanced case. Let us now first address the unbalanced case. The opposite directions of the drifts in Figure 2 clearly show that \(\mathbb{E}(\widehat{T}_{1}^{\epsilon})\geq\mathbb{E}(\widehat{T}_{2}^{ \epsilon})\).
Thus the inverse of \(2\,\mathbb{E}(\widehat{T}_{1}^{\epsilon})\) provides a lower bound on the l.h.s. of (21), while the r.h.s. can simply be estimated by the inverse of \(\mathbb{E}(\widetilde{T}_{1}^{\epsilon})\). These rough estimates are actually superfluous, since the contributions of \(\mathbb{E}(\widehat{T}_{1}^{\epsilon})\) and \(\mathbb{E}(\widetilde{T}_{1}^{\epsilon})\) turn out to dominate those of \(\mathbb{E}(\widehat{T}_{2}^{\epsilon})\) and \(\mathbb{E}(\widetilde{T}_{2}^{\epsilon})\) for \(\epsilon\) going to \(0\). For this reason, passages through \((-\infty,0)\) need not be taken into account, so restricting the following to the case \(k=1\) is sufficient (this corresponds to the first passage with positive \(x\) variables). The proof of the following result will be provided in the next two sections. Combining it with (21) directly implies the claim (2) in Theorem 1. **Proposition 2**: _Given a family of random matrices of the form (8), suppose that the above Hypothesis holds and \(\mathbb{E}(\log(\kappa_{0}))<0\). Then there exists a unique positive solution \(\nu>0\) of \(\mathbb{E}(\kappa_{0}^{\nu})=1\). Moreover, there exist constants \(C_{-},C_{+}\in(0,\infty)\) such that for all \(\widetilde{\nu}<\nu\) there exists some \(\epsilon_{0}\) such that for all \(\epsilon\in(0,\epsilon_{0})\)_ \[\frac{1}{\mathbb{E}(\widehat{T}_{1}^{\epsilon})}\;\geq\;C_{-}\epsilon^{\nu} \left(1+\mathcal{O}(\epsilon^{\nu}|\log(\epsilon)|)\right),\qquad\frac{1}{ \mathbb{E}(\widetilde{T}_{1}^{\epsilon})}\;\leq\;C_{+}\epsilon^{\widetilde{ \nu}}\left(1+\mathcal{O}(\epsilon^{\widetilde{\nu}}|\log(\epsilon)|)\right).\] In the balanced case, \(\mathbb{E}(\widehat{T}_{k}^{\epsilon})=\mathbb{E}(\widetilde{T}_{k}^{ \epsilon})\) to lowest order. This value is independent of \(k\) and can be computed, as the next result shows. Together with (21) this shows (3) in Theorem 1. **Proposition 3**: _Given a family of random matrices of the form (8), suppose that the above Hypothesis holds and \(\mathbb{E}(\log(\kappa_{0}))=0\). Then for all \(k\) it holds that_ \[\frac{1}{\mathbb{E}(\widehat{T}_{k}^{\epsilon})}\;=\;\frac{\mathbb{E}(\log( \kappa_{0})^{2})}{\log(\epsilon)^{2}}\left(1+\mathcal{O}(|\log(\epsilon)|^{-1} )\right),\qquad\frac{1}{\mathbb{E}(\widetilde{T}_{k}^{\epsilon})}\;=\;\frac{ \mathbb{E}(\log(\kappa_{0})^{2})}{\log(\epsilon)^{2}}\left(1+\mathcal{O}(| \log(\epsilon)|^{-1})\right).\]

## 5 Lower bound on the rotation number

The first task of this section is to construct the slower comparison process satisfying the first inequality of (18) for \(k=1\). To improve readability from this point on, we will suppress the indices \(\epsilon\), \(\sigma\), \(n\), etc., as long as no confusion can arise. Let us start by providing some basic properties of the dynamics, such as the following observation.
**Lemma 4**: _For each realization and \(\epsilon\) small enough, \(x\in[0,\infty)\) and \(Q\cdot x\geq 0\) imply \(Q\cdot x\geq x\)._ **Proof.** The main Hypothesis implies for \(x\in[0,1]\) the estimates \[Q\cdot x\ =\ \tfrac{(1+\epsilon^{2}\alpha)x+(a-b-\epsilon\beta)\epsilon}{1+ \epsilon^{2}\delta-(a+b+\epsilon\gamma)\epsilon x}\,\geq\,x\,\tfrac{1+ \epsilon^{2}\alpha+(a-b-\epsilon\beta)\frac{\epsilon}{x}}{1+\epsilon^{2} \delta}\,\geq\,x\,\tfrac{1+C_{1}\,\epsilon-2\,C_{3}\,\epsilon^{2}}{1+C_{3}\, \epsilon^{2}}\,,\] while for \(x\in(1,\infty)\) \[Q\cdot x\,\geq\,x\,\tfrac{1+\epsilon^{2}\alpha}{1+\epsilon^{2}\delta-(a+b+ \epsilon\gamma)\epsilon x}\,\geq\,x\,\tfrac{1+\epsilon^{2}\alpha}{1+\epsilon ^{2}\delta-(a+b+\epsilon\gamma)\epsilon}\,\geq\,x\,\tfrac{1-C_{3}\,\epsilon^ {2}}{1-C_{1}\,\epsilon+2\,C_{3}\,\epsilon^{2}}\,.\] In both cases the statement directly follows. (Note that the lemma can also be deduced from (12).) \(\square\) The following lemma states more properties of the given action, and relies on the quantities \[\widehat{x}_{-}\ :=\ \tfrac{C_{1}\,\epsilon}{2}\,,\qquad\widehat{x}_{c}\ :=\ \tfrac{C_{1}\,\epsilon}{2}(e^{-2C_{0}}+1)\,,\qquad \widehat{x}_{+}\ :=\ \tfrac{2\,e^{2C_{0}}}{C_{1}\,\epsilon}\,.\] These points and the lemma itself are graphically illustrated in the left part of Figure 3.

Figure 3: _The arrows on the left part illustrate properties of the original Dyson-Schmidt dynamics on \((0,\infty)\) as stated in_ Lemma 5_. The right part illustrates the notations after the logarithmic transformation \(\widehat{f}\) to \(\mathbb{R}\)._

**Lemma 5**: _For each realization, one has_ \[x\,\in\,[0,\infty) \implies Q\cdot(D\cdot x)\,\notin\,[0,\widehat{x}_{-})\,, \tag{22}\] \[x\,\in\,[\widehat{x}_{-},\infty) \implies Q\cdot(D\cdot x)\,\notin\,[0,\widehat{x}_{c})\,,\] (23) \[Q\cdot(D\cdot x)\,\in\,[0,\infty) \implies x\,\notin\,[\widehat{x}_{+},\infty)\,. \tag{24}\] **Proof.** For (22), first note that \(x\in[0,\infty)\) implies \(D\cdot x\in[0,\infty)\). Combining (12) and the order-preserving property (13), nonnegative \(Q\cdot(D\cdot x)\) obey, due to (15) and the Hypothesis, \[Q\cdot(D\cdot x)\ \geq\ Q\cdot 0\ =\ \tfrac{(a-b-\epsilon\beta)\epsilon}{1+ \epsilon^{2}\delta}\,\geq\,\tfrac{(C_{1}-C_{3}\epsilon)\epsilon}{1+C_{3} \epsilon^{2}}\ \geq\ \tfrac{C_{1}\,\epsilon}{2}\ =\ \widehat{x}_{-}\,.\] For the proof of (23) let us use that, if \(x\in[\widehat{x}_{-},\infty)\), then clearly \(D\cdot x\geq e^{-2C_{0}}\widehat{x}_{-}\). Similarly to the proof of (22), nonnegative \(Q\cdot(D\cdot x)\) then obey \[Q\cdot(D\cdot x)\ \geq\ Q\cdot\tfrac{e^{-2C_{0}}C_{1}\epsilon}{2}\ \geq\ \tfrac{(1-C_{3}\epsilon^{2})\tfrac{e^{-2C_{0}}C_{1}\epsilon}{2}+(C_{1}-C_{3} \epsilon)\epsilon}{1+C_{3}\epsilon^{2}-(C_{1}-C_{3}\epsilon)\tfrac{e^{-2C_{0} }C_{1}\epsilon^{2}}{2}}\ \geq\ (e^{-2C_{0}}+1)\tfrac{C_{1}\epsilon}{2}\ =\ \widehat{x}_{c}\,.\] Finally let us verify (24) by contraposition. If \(x\in[\widehat{x}_{+},\infty)\), then clearly \(D\cdot x\geq e^{-2C_{0}}\widehat{x}_{+}=\frac{2}{C_{1}\epsilon}\). Then the order-preserving property (13) implies that \(Q\cdot(D\cdot x)\notin[0,\infty)\), since as in the proof of (22) it holds that \[0\;>\;Q\cdot\infty\;\geq\;Q\cdot(D\cdot x)\;\geq\;Q\cdot\tfrac{2}{C_{1} \epsilon}\;\geq\;\tfrac{(1-C_{3}\epsilon^{2})\tfrac{2}{C_{1}\epsilon}+(C_{1}- C_{3}\epsilon)\epsilon}{1+C_{3}\epsilon^{2}-(C_{1}-C_{3}\epsilon)\tfrac{2}{C_{1} }}\,,\] which is also negative. \(\square\)
Now a new process \(\widehat{x}=(\widehat{x}_{n})_{n\geq 0}\) is constructed by setting \(\widehat{x}_{0}=0\), \(\widehat{x}_{1}=\widehat{x}_{-}\) and for \(n\geq 1\) \[\widehat{x}_{n+1}\;=\;\begin{cases}\widehat{x}_{c}\,,&\text{ if }\widehat{x}_{n}\leq\widehat{x}_{-}\,,\\ D_{n}\cdot\widehat{x}_{n}\,,&\text{ if }\widehat{x}_{n}\in(\widehat{x}_{-},\widehat{x}_{+})\,,\\ \infty\,,&\text{ else, so if }\widehat{x}_{n}\geq\widehat{x}_{+}\,.\end{cases}\] Comparing with (14), the main case \(\widehat{x}_{n+1}=D_{n}\cdot\widehat{x}_{n}\) of this process merely omits the action of \(Q_{n}\) for \(n\geq 2\). Let us now argue why this process satisfies the first inequality in (18). Indeed, omitting the action of \(Q_{n}\) slows the process down because of the order-preserving property (13) and Lemma 4. Carefully analyzing the first case in the definition of \(\widehat{x}_{n+1}\) in combination with (22) and (23) shows that a.s. \(\widehat{x}_{n}\leq x_{N_{(1)}+n}\) for all \(n\in\{0,1,\ldots,N_{(2)}-1-N_{(1)}\}\), that is, as long as \(x_{N_{(1)}+n}\in[0,\infty)\). Moreover, by (24) it is impossible that \(\widehat{x}_{n}=\infty\) for \(n\in\{3,4,\ldots,N_{(2)}-1-N_{(1)}\}\), as this would imply \(x_{N_{(1)}+n-1}<\widehat{x}_{+}\leq\widehat{x}_{n-1}\). Conversely, since a.s. \(\widehat{x}_{\widehat{T}_{1}}=\infty\), indeed \(N_{(2)}-N_{(1)}\leq\widehat{T}_{1}\). Now let us come to the second task, namely analyzing the \(\epsilon\)-dependence of \(\big{(}\mathbb{E}(\widehat{T}_{1})\big{)}^{-1}\) and thereby proving the first statement of Proposition 2. Similarly to (16), it will be advantageous to pass to a shifted logarithm of the Dyson-Schmidt variables, via the map \(\widehat{f}:(0,\infty)\to\mathbb{R}\) given by \[\widehat{f}(x)\;:=\;\frac{1}{2C_{0}}\,\log\Big{(}\frac{x}{\widehat{x}_{c}}\Big{)}\,.\] By construction, \(\widehat{f}(\widehat{x}_{c})=0\). Furthermore, for \(n\) such that \(\widehat{x}_{n+2}<\infty\), let us introduce \[\widehat{y}_{n}\;:=\;\widehat{f}(\widehat{x}_{n+2})\,,\qquad\widehat{y}_{-}\;:=\;\widehat{f}(\widehat{x}_{-})\,,\qquad\widehat{y}_{+}\;:=\;\widehat{f}(\widehat{x}_{+})\,,\] and the stopping time \[\widehat{T}_{-,+}\;:=\;\inf\big{\{}n\in\mathbb{N}\,:\,\widehat{y}_{n}\notin(\widehat{y}_{-},\widehat{y}_{+})\big{\}}\,.\] Again these quantities are illustrated in Figure 3. As long as \(n\leq\widehat{T}_{-,+}\), it holds that \[\widehat{y}_{n}\;=\;\tfrac{1}{2C_{0}}\,\log\Big{(}\frac{\widehat{x}_{n+2}}{\widehat{x}_{c}}\Big{)}\;=\;\tfrac{1}{2C_{0}}\log\Big{(}\frac{D^{n}\cdot\widehat{x}_{2}}{\widehat{x}_{c}}\Big{)}\;=\;\tfrac{1}{2C_{0}}\log\Big{(}\prod_{j=N_{(1)}+1}^{N_{(1)}+n}\kappa_{j}^{2}\Big{)}\;=\;\sum_{j=N_{(1)}+1}^{N_{(1)}+n}\chi_{j}\,, \tag{25}\] namely \(\widehat{y}_{n}\) is a random walk with a drift in the negative direction starting at \(\widehat{y}_{0}=0\). The following two lemmata recollect properties about these newly introduced quantities. **Lemma 6**: \(\widehat{y}_{-}\in(-\infty,0)\) _is independent of \(\epsilon\) and \(\lim_{\epsilon\to 0}\frac{\widehat{y}_{+}}{-\log(\epsilon)}=\frac{1}{C_{0}}\)._ **Proof.** The explicit expressions \[\widehat{y}_{-}\,=\,-\tfrac{1}{2\,C_{0}}\,\log(1+e^{-2\,C_{0}})\;,\qquad\widehat{y}_{+}\,=\,\tfrac{1}{2\,C_{0}}\big{(}2\log\big{(}\tfrac{2}{C_{1}\,\epsilon}\big{)}-\log(1+e^{-2\,C_{0}})\big{)}+1\,,\] immediately imply the claims.
\(\square\) **Lemma 7**: \(\mathbb{E}(\widehat{T}_{-,+})<+\infty\)_._ **Proof.** Since the cumulative distribution function of \(\chi\) is right-continuous and \(\mathbb{P}(\{\chi>0\})>0\), there exists some \(\ell\in(0,1]\) such that \(\widehat{p}:=\mathbb{P}(\{\chi\geq\ell\})\) satisfies \(\widehat{p}>0\). Denoting \(\widehat{E}:=\lceil\frac{\widehat{y}_{+}-\widehat{y}_{-}}{\ell}\rceil\) and introducing the random variable \[\widehat{N}\;:=\;\min\{n\in\mathbb{N}\,:\,\chi_{(n-1)\widehat{E}+1}\geq\ell\;,\;\chi_{(n-1)\widehat{E}+2}\geq\ell\;,\ldots,\;\chi_{n\widehat{E}}\geq\ell\}\,,\] the latter then is geometrically distributed with success probability \(\widehat{p}^{\widehat{E}}\). In particular, one has \(\mathbb{E}(\widehat{N})<\infty\). Moreover, \(\widehat{T}_{-,+}<\widehat{E}\,\widehat{N}\) a.s. by construction, so \(\mathbb{E}(\widehat{T}_{-,+})<\widehat{E}\,\mathbb{E}(\widehat{N})<\infty\). \(\square\) In order to connect the two stopping times \(\widehat{T}_{-,+}\) and \(\widehat{T}_{1}\), one further random variable will be introduced. Suppose that \(\widehat{T}_{-,+}=m\) for some \(m\in\mathbb{N}\) and that \(\widehat{y}_{\widehat{T}_{-,+}}\leq\widehat{y}_{-}\); then let us introduce the stopping time reinitialized at \(m+1\), as in (20), by \[\widehat{T}_{1}^{(m)}\;:=\;\inf\big{\{}n\in\mathbb{N}\,:\,\widehat{x}_{m+1+n}=\infty\big{\}}\,.\] It then clearly follows that \(\widehat{T}_{1}=m+1+\widehat{T}_{1}^{(m)}\), provided that \(\widehat{T}_{-,+}=m\) and \(\widehat{y}_{m}\leq\widehat{y}_{-}\). Now the Markov property allows one to compute the conditional expectations \[\mathbb{E}\big{(}\widehat{T}_{1}^{(m)}\,\big{|}\,\widehat{y}_{m}\leq\widehat{y}_{-}\,,\;\;\widehat{T}_{-,+}=m\big{)}\;=\;\mathbb{E}(\widehat{T}_{1})\,,\qquad\mathbb{E}\big{(}\widehat{T}_{1}\,\big{|}\,\widehat{y}_{\widehat{T}_{-,+}}\geq\widehat{y}_{+}\big{)}\;=\;\mathbb{E}\big{(}\widehat{T}_{-,+}+3\,\big{|}\,\widehat{y}_{\widehat{T}_{-,+}}\geq\widehat{y}_{+}\big{)}\,,\] where the \(3\) stems from an index shift by \(2\) when the process is started and one additional step at the end.
As by construction \(\mathbb{P}\big{(}\{\widehat{y}_{\widehat{T}_{-,+}}\in(\widehat{y}_{-},\widehat{y}_{+})\}\big{)}=0\) and as by Lemma 7 one has \(\widehat{T}_{-,+}<\infty\) a.s., it follows that \[\mathbb{E}(\widehat{T}_{1})\;=\;\mathbb{E}\big{(}\widehat{T}_{1}\,\big{|}\,\widehat{y}_{\widehat{T}_{-,+}}\geq\widehat{y}_{+}\big{)}\,\mathbb{P}\big{(}\big{\{}\widehat{y}_{\widehat{T}_{-,+}}\geq\widehat{y}_{+}\big{\}}\big{)}\;+\;\sum_{m=0}^{\infty}\mathbb{E}\big{(}\widehat{T}_{1}\,\big{|}\,\widehat{y}_{m}\leq\widehat{y}_{-}\,,\widehat{T}_{-,+}=m\big{)}\,\mathbb{P}\big{(}\big{\{}\widehat{y}_{m}\leq\widehat{y}_{-}\,,\widehat{T}_{-,+}=m\big{\}}\big{)}\] \[\qquad\;=\;\mathbb{P}\big{(}\big{\{}\widehat{y}_{\widehat{T}_{-,+}}\geq\widehat{y}_{+}\big{\}}\big{)}\,\mathbb{E}\big{(}\widehat{T}_{-,+}+3\,\big{|}\,\widehat{y}_{\widehat{T}_{-,+}}\geq\widehat{y}_{+}\big{)}\;+\;\sum_{m=0}^{\infty}\mathbb{P}\big{(}\big{\{}\widehat{y}_{m}\leq\widehat{y}_{-}\,,\widehat{T}_{-,+}=m\big{\}}\big{)}\,\mathbb{E}\big{(}m+1+\widehat{T}_{1}^{(m)}\,\big{|}\,\widehat{y}_{m}\leq\widehat{y}_{-}\,,\widehat{T}_{-,+}=m\big{)}\] \[\qquad\;=\;\mathbb{P}\big{(}\big{\{}\widehat{y}_{\widehat{T}_{-,+}}\geq\widehat{y}_{+}\big{\}}\big{)}\,\mathbb{E}\big{(}\widehat{T}_{-,+}+3\,\big{|}\,\widehat{y}_{\widehat{T}_{-,+}}\geq\widehat{y}_{+}\big{)}\;+\;\sum_{m=0}^{\infty}\mathbb{P}\big{(}\big{\{}\widehat{y}_{m}\leq\widehat{y}_{-}\,,\widehat{T}_{-,+}=m\big{\}}\big{)}\,\big{(}m+1+\mathbb{E}(\widehat{T}_{1})\big{)}\] \[\qquad\;=\;\mathbb{E}(\widehat{T}_{-,+})\,+\,3\,\mathbb{P}\big{(}\big{\{}\widehat{y}_{\widehat{T}_{-,+}}\geq\widehat{y}_{+}\big{\}}\big{)}\,+\,\Big{(}1-\mathbb{P}\big{(}\big{\{}\widehat{y}_{\widehat{T}_{-,+}}\geq\widehat{y}_{+}\big{\}}\big{)}\Big{)}\,\Big{(}1+\mathbb{E}(\widehat{T}_{1})\Big{)}\,,\] which is equivalent to \[\big{(}\mathbb{E}(\widehat{T}_{1})\big{)}^{-1}\ =\ \left[\frac{\mathbb{E}(\widehat{T}_{-,+})+1}{\mathbb{P}\big{(}\big{\{}\widehat{y}_{\widehat{T}_{-,+}}\geq\widehat{y}_{+}\big{\}}\big{)}}\,+\,2\right]^{-1}\,. \tag{26}\] It now remains to compute the probability and the expectation on the r.h.s. of (26). This will essentially follow from the optional stopping theorem. It is convenient to define the quantities \[\widehat{y}^{\prime}_{-}\ :=\ \mathbb{E}\big{(}\widehat{y}_{\widehat{T}_{-,+}}\big{|}\,\widehat{y}_{\widehat{T}_{-,+}}\leq\widehat{y}_{-}\big{)}\,,\qquad\widehat{y}^{\prime\prime}_{-}\ :=\ \tfrac{1}{C_{0}\nu}\log\Big{(}\mathbb{E}\big{(}e^{C_{0}\nu\widehat{y}_{\widehat{T}_{-,+}}}\big{|}\,\widehat{y}_{\widehat{T}_{-,+}}\leq\widehat{y}_{-}\big{)}\Big{)}\,,\] \[\widehat{y}^{\prime}_{+}\ :=\ \mathbb{E}\big{(}\widehat{y}_{\widehat{T}_{-,+}}\big{|}\,\widehat{y}_{\widehat{T}_{-,+}}\geq\widehat{y}_{+}\big{)}\,,\qquad\widehat{y}^{\prime\prime}_{+}\ :=\ \tfrac{1}{C_{0}\nu}\log\Big{(}\mathbb{E}\big{(}e^{C_{0}\nu\widehat{y}_{\widehat{T}_{-,+}}}\big{|}\,\widehat{y}_{\widehat{T}_{-,+}}\geq\widehat{y}_{+}\big{)}\Big{)}\,,\] in which \(\widehat{y}^{\prime}_{-},\widehat{y}^{\prime\prime}_{-}\in[\widehat{y}_{-}-1,\widehat{y}_{-}]\) and \(\widehat{y}^{\prime}_{+},\widehat{y}^{\prime\prime}_{+}\in[\widehat{y}_{+},\widehat{y}_{+}+1]\). Now (25) implies that \(\widehat{y}_{n}-n\mathbb{E}(\chi)\) is a martingale.
As \(|\chi|\leq 1\) a.s., its increments are a.s. bounded; more precisely \(|\widehat{y}_{n+1}-(n+1)\mathbb{E}(\chi)-\widehat{y}_{n}+n\mathbb{E}(\chi)|\leq 2\). Then with \(\mathbb{E}(\widehat{T}_{-,+})<\infty\) from Lemma 7, one can use the optional stopping theorem to find \(0=\mathbb{E}(\widehat{y}_{0}-0\cdot\mathbb{E}(\chi))=\mathbb{E}(\widehat{y}_{\widehat{T}_{-,+}}-\widehat{T}_{-,+}\cdot\mathbb{E}(\chi))\), or \[\mathbb{E}(\widehat{T}_{-,+})\ =\ \frac{\mathbb{E}\big{(}\widehat{y}_{\widehat{T}_{-,+}}\big{)}}{\mathbb{E}(\chi)}\ =\ \frac{\widehat{y}^{\prime}_{-}\big{(}1-\mathbb{P}\big{(}\{\widehat{y}_{\widehat{T}_{-,+}}\geq\widehat{y}_{+}\}\big{)}\big{)}\,+\,\widehat{y}^{\prime}_{+}\mathbb{P}\big{(}\{\widehat{y}_{\widehat{T}_{-,+}}\geq\widehat{y}_{+}\}\big{)}}{\mathbb{E}(\chi)}\,. \tag{27}\] Now an expression for \(\mathbb{P}\big{(}\{\widehat{y}_{\widehat{T}_{-,+}}\geq\widehat{y}_{+}\}\big{)}\) is needed. A standard technique (see _e.g._[7]) is based on the following lemma. **Lemma 8**: _There is a unique solution \(\nu\in(0,\infty)\) for the equation \(\mathbb{E}(\kappa_{0}^{\nu})=1\), implying that \(e^{C_{0}\nu\widehat{y}_{n}}\) is a martingale._ **Proof.** The main Hypothesis states that \(\mathbb{P}(\{\chi>0\})>0\), hence \(\lim_{\rho\to\infty}\mathbb{E}(e^{C_{0}\rho\chi})=\infty\). Consider the map \(\rho\in\mathbb{R}\mapsto\mathbb{E}(e^{C_{0}\rho\chi})\in(0,\infty)\), which is differentiable at \(\rho=0\) with derivative \[\partial_{\rho}\left.\mathbb{E}(e^{C_{0}\rho\chi})\right|_{\rho=0}\ =\ \mathbb{E}(\log(\kappa_{0}))\ <\ 0\,.\] This continuous map takes the value \(1\) at \(\rho=0\) and, due to the negative derivative there, first drops below \(1\) before diverging; hence the intermediate value theorem yields a solution \(\rho=\nu\in(0,\infty)\) of \(\mathbb{E}(e^{C_{0}\rho\chi})=1\). As the map is strictly convex, this solution is unique. \(\square\) As \(\widehat{y}_{n}\in[\widehat{y}_{-}-1,\widehat{y}_{+}+1]\) and \(|\chi|\leq 1\) a.s., the increments of the martingale \(e^{C_{0}\nu\widehat{y}_{n}}\) of Lemma 8 are uniformly bounded by \(|e^{C_{0}\nu\widehat{y}_{n}}-e^{C_{0}\nu\widehat{y}_{n+1}}|=e^{C_{0}\nu\widehat{y}_{n}}|e^{C_{0}\nu\chi_{n}}-1|\leq e^{C_{0}\nu(\widehat{y}_{+}+1)}|e^{C_{0}\nu}-1|\).
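For a concrete law of \(\kappa_{0}\) the exponent \(\nu\) of Lemma 8 is easily approximated numerically. The sketch below assumes, purely for illustration, that \(\log(\kappa_{0})\) is uniform on \([-0.5,0.3]\) (so \(\mathbb{E}(\log(\kappa_{0}))<0\) while \(\mathbb{P}(\{\chi>0\})>0\)) and solves \(\mathbb{E}(\kappa_{0}^{\nu})=1\) by bisection, exploiting the shape of the map established in the proof.

```python
import numpy as np

def moment(nu, lo=-0.5, hi=0.3):
    """E(kappa^nu) for log(kappa) ~ Uniform[lo, hi]: (e^{nu*hi}-e^{nu*lo})/(nu*(hi-lo))."""
    if nu == 0:
        return 1.0
    return (np.exp(nu * hi) - np.exp(nu * lo)) / (nu * (hi - lo))

# E(kappa^nu) equals 1 at nu = 0, dips below 1 (negative drift), then diverges;
# bisect on an interval where moment(nu) - 1 changes sign.
lo_nu, hi_nu = 1e-6, 50.0
assert moment(lo_nu) < 1.0 < moment(hi_nu)
for _ in range(100):
    mid = 0.5 * (lo_nu + hi_nu)
    lo_nu, hi_nu = (mid, hi_nu) if moment(mid) < 1.0 else (lo_nu, mid)
print("nu ~", 0.5 * (lo_nu + hi_nu))
```

The located root then feeds into the exponential martingale \(e^{C_{0}\nu\widehat{y}_{n}}\) of Lemma 8.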
Using \(\mathbb{E}(\widehat{T}_{-,+})<\infty\) from Lemma 7 to apply the optional stopping theorem to the martingale \(e^{C_{0}\nu\widehat{y}_{n}}\) yields \[1\,=\,e^{C_{0}\nu\cdot 0}\,=\,e^{C_{0}\nu\widehat{y}_{0}}\,=\,\mathbb{E}(e^{C_{0}\nu\widehat{y}_{\widehat{T}_{-,+}}})\ =\ e^{C_{0}\nu\widehat{y}^{\prime\prime}_{-}}\big{(}1-\mathbb{P}(\{\widehat{y}_{\widehat{T}_{-,+}}\geq\widehat{y}_{+}\})\big{)}\,+e^{C_{0}\nu\widehat{y}^{\prime\prime}_{+}}\mathbb{P}(\{\widehat{y}_{\widehat{T}_{-,+}}\geq\widehat{y}_{+}\})\,.\] Inserting (27) into (26) and combining this with the foregoing finally gives \[\big{(}\mathbb{E}(\widehat{T}_{1})\big{)}^{-1}\ =\ \left[\frac{\widehat{y}^{\prime}_{-}+\mathbb{E}(\chi)}{\mathbb{E}(\chi)\mathbb{P}\big{(}\big{\{}\widehat{y}_{\widehat{T}_{-,+}}\geq\widehat{y}_{+}\big{\}}\big{)}}\,+\,\frac{\widehat{y}^{\prime}_{+}-\widehat{y}^{\prime}_{-}}{\mathbb{E}(\chi)}\,+\,2\right]^{-1}\] \[\ =\ \left[\left(1+\frac{C_{0}\,\widehat{y}^{\prime}_{-}}{\mathbb{E}(\log(\kappa_{0}))}\right)\,\frac{e^{C_{0}\nu\widehat{y}^{\prime\prime}_{+}}-e^{C_{0}\nu\widehat{y}^{\prime\prime}_{-}}}{1-e^{C_{0}\nu\widehat{y}^{\prime\prime}_{-}}}\,+\,\frac{C_{0}\,(\widehat{y}^{\prime}_{+}-\widehat{y}^{\prime}_{-})}{\mathbb{E}(\log(\kappa_{0}))}\,+\,2\right]^{-1}\,,\] which together with the two statements of Lemma 6 implies that \[C_{-}\;:=\;\left(1+\frac{C_{0}\left(\widehat{y}_{-}-1\right)}{\mathbb{E}(\log(\kappa_{0}))}\right)^{-1}\frac{1-e^{C_{0}\nu(\widehat{y}_{-}-1)}}{e^{C_{0}\nu}}\] satisfies \[C_{-}\;\leq\;\lim_{\epsilon\to 0}\left(1+\frac{C_{0}\,\widehat{y}_{-}}{\mathbb{E}(\log(\kappa_{0}))}\right)^{-1}\,\frac{1-e^{C_{0}\nu\widehat{y}_{-}^{\prime\prime}}}{\epsilon^{\nu}e^{C_{0}\nu\widehat{y}_{+}^{\prime\prime}}}=\lim_{\epsilon\to 0}\frac{\left(\mathbb{E}(\widehat{T}_{1})\right)^{-1}}{\epsilon^{\nu}}\,,\] namely the first statement of Proposition 2. ## 6 Upper bound on the rotation number This section is structured just as the previous one, namely first a faster comparison process satisfying (18) and (19) is constructed and then the \(\epsilon\)-dependence of its expected stopping time is analyzed in order to prove the second statement of Proposition 2. It will be useful to introduce a suitable positive-valued function \(\lambda\) of \(\epsilon\), satisfying the defining properties \[\lim_{\epsilon\to 0}\;\lambda\;=\;0\,,\qquad\lim_{\epsilon\to 0}\;\frac{\log(\lambda)}{\log(\epsilon)}\;=\;0\,,\qquad\lim_{\epsilon\to 0}\;\frac{\epsilon}{\lambda}\;=\;0\,, \tag{28}\] where the last property actually follows from the first two. For conciseness an additional notation is introduced: \[\Lambda\;:=\;e^{2C_{0}\lambda}\,.\] Note that \(\Lambda\) also depends on \(\epsilon\). Similarly to Section 5, three reference points \(0<\widetilde{x}_{-}<\widetilde{x}_{c}<\widetilde{x}_{+}<\infty\) will be needed w.r.t. which the dynamics has uniform properties schematically described in Figure 4. While similar to Figure 3, note that the original dynamics is now bounded above by these points, see Lemma 10 below. For its proof, let us start out with a counterpart to Lemma 4.
**Lemma 9**: _There exist \(\widetilde{x}_{-}\) and \(\widetilde{x}_{+}\) depending on \(C_{0}\), \(C_{2}\), \(C_{3}\) and \(\epsilon\) such that_ \[x\,\in\,[\widetilde{x}_{-},\widetilde{x}_{+}]\qquad\Longrightarrow\qquad Q\cdot(D\cdot x)\,\leq\,\Lambda(D\cdot x)\,,\] _as well as_ \[\lim_{\epsilon\to 0}\;\frac{\lambda\,\widetilde{x}_{-}}{\epsilon}\;=\;\frac{C_{2}\,e^{2C_{0}}}{2C_{0}}\,,\qquad\lim_{\epsilon\to 0}\;\frac{\epsilon\,\widetilde{x}_{+}}{\lambda}\;=\;\frac{2C_{0}}{C_{2}\,e^{2C_{0}}}\,. \tag{29}\] **Proof.** For \(x\in[0,+\infty)\) one can estimate \[Q\cdot(D\cdot x)\;=\;\frac{(1+\epsilon^{2}\alpha)(D\cdot x)+(a-b-\epsilon\beta)\epsilon}{1+\epsilon^{2}\delta-(a+b+\epsilon\gamma)\epsilon(D\cdot x)}\;\leq\;\frac{(1+C_{3}\epsilon^{2})(D\cdot x)+(C_{2}+C_{3}\epsilon)\epsilon}{1-C_{3}\epsilon^{2}-(C_{2}+C_{3}\epsilon)\epsilon(D\cdot x)}\,.\] The latter is smaller than or equal to \(\Lambda(D\cdot x)\) if and only if \[\Lambda(C_{2}+C_{3}\epsilon)\epsilon(D\cdot x)^{2}-\big{(}\Lambda-1-C_{3}\epsilon^{2}(\Lambda+1)\big{)}(D\cdot x)+(C_{2}+C_{3}\epsilon)\epsilon\;\leq\;0\,.\] Let us equate this to \(A(D\cdot x)^{2}-B(D\cdot x)+C\), namely set \[A\;:=\;\Lambda(C_{2}+C_{3}\epsilon)\epsilon\,,\qquad B\;:=\;\Lambda-1-C_{3}\epsilon^{2}(\Lambda+1)\,,\qquad C\;:=\;(C_{2}+C_{3}\epsilon)\epsilon\,.\] Note that for \(\epsilon\to 0\), \(\frac{A}{C_{2}\epsilon}\), \(\frac{B}{2C_{0}\lambda}\) and \(\frac{C}{C_{2}\epsilon}\) all converge to \(1\) by (28), hence \(A,B,C\in(0,\infty)\) for \(\epsilon\) small. Now let us search for real solutions of the quadratic equation \(A(D\cdot x)^{2}-B(D\cdot x)+C=0\). They clearly exist whenever \(B^{2}-4AC\geq 0\), which follows from the foregoing limits and (28). Then denote the two real zeroes of the quadratic equation by \(x_{-}\leq x_{+}\), so that \(A(D\cdot x)^{2}-B(D\cdot x)+C\) equals \(A\left[(D\cdot x)-x_{+}\right]\left[(D\cdot x)-x_{-}\right]\). The fact that \(\sqrt{1-r}\geq 1-\frac{r}{2}-\frac{r^{2}}{2}\geq 1-r\) for all \(r\in[0,1]\) implies after some algebra that \(x_{-}\leq\frac{(B^{2}+4AC)C}{B^{3}}\) and \(x_{+}\geq\frac{B^{2}-2AC}{AB}\). Hence let us set \[\widetilde{x}_{-}\;:=\;e^{2C_{0}}\,\frac{(B^{2}+4AC)C}{B^{3}}\;=\;\frac{\left[\left(\Lambda-1-C_{3}\epsilon^{2}(\Lambda+1)\right)^{2}+4\Lambda(C_{2}+C_{3}\epsilon)^{2}\epsilon^{2}\right]\!\left(C_{2}+C_{3}\epsilon\right)e^{2C_{0}}\,\epsilon}{\left(\Lambda-1-C_{3}\epsilon^{2}\left[\Lambda+1\right]\right)^{3}}\,,\] \[\widetilde{x}_{+}\;:=\;e^{-2C_{0}}\,\frac{B^{2}-2AC}{AB}\;=\;\frac{\left(\Lambda-1-C_{3}\epsilon^{2}(\Lambda+1)\right)^{2}\,-\,2\,\Lambda\left(C_{2}+C_{3}\epsilon\right)^{2}\epsilon^{2}}{\left(\Lambda-1-C_{3}\epsilon^{2}(\Lambda+1)\right)\Lambda\left(C_{2}+C_{3}\epsilon\right)e^{2C_{0}}\,\epsilon}\,.\] For \(x\in[\widetilde{x}_{-},\widetilde{x}_{+}]\), one has due to \(e^{-2C_{0}}x\leq(D\cdot x)\leq e^{2C_{0}}x\) that \((D\cdot x)\in[e^{-2C_{0}}\widetilde{x}_{-},e^{2C_{0}}\widetilde{x}_{+}]\subset[x_{-},x_{+}]\), which by the above implies the first statement. The limits (29) follow again by the given limit behavior of \(A\), \(B\) and \(C\) as well as from (28). \(\square\) Let us now complete the left part of Figure 4 by setting \[\widetilde{x}_{c}\;:=\;e^{2C_{0}}\,\Lambda\,\widetilde{x}_{-}\,.\] The next statement corresponds to Lemma 5.
**Lemma 10**: _For each realization, one has_ \[x\,\notin\,[0,\infty) \Longrightarrow Q\cdot(D\cdot x)\,\notin\,[\widetilde{x}_{-},\infty)\,, \tag{30}\] \[x\,\notin\,[\widetilde{x}_{-},\infty) \Longrightarrow Q\cdot(D\cdot x)\,\notin\,[\widetilde{x}_{c},\infty)\,,\] (31) \[Q\cdot(D\cdot x)\,\notin\,[0,\infty) \Longrightarrow x\,\notin\,[0,\widetilde{x}_{+})\,. \tag{32}\] Figure 4: _The arrows on the left part illustrate properties of the original Dyson-Schmidt dynamics on \((0,\infty)\) as stated in_ Lemma 10_. The right part illustrates the notations after the logarithmic transformation \(\widetilde{f}\) to \(\mathbb{R}\)._ **Proof.** For (30), let \(x\notin[0,\infty)\); then \(D\cdot x\notin[0,\infty)\) as well. By combining the order-preserving property (13) with (15) and the Hypothesis, one has for nonnegative \(Q\cdot(D\cdot x)\) that \[Q\cdot(D\cdot x)\;<\;Q\cdot 0\;=\;\tfrac{(a-b-\epsilon\beta)\epsilon}{1+\epsilon^{2}\delta}\;\leq\;\tfrac{(C_{2}+C_{3}\epsilon)\epsilon}{1-C_{3}\epsilon^{2}}\;\leq\;C_{2}\,e^{2C_{0}}\epsilon\,.\] By (28) and (29) it indeed follows that \(C_{2}e^{2C_{0}}\epsilon<\widetilde{x}_{-}\) for \(\epsilon\) small enough. For the proof of (31), combining its hypothesis, the order-preserving property (13), Lemma 9 and the fact that \(D\cdot x^{\prime}\leq e^{2C_{0}}x^{\prime}\) for nonnegative \(x^{\prime}\), yields \[Q\cdot(D\cdot x)\;<\;Q\cdot(D\cdot\widetilde{x}_{-})\;\leq\;\Lambda\,e^{2C_{0}}\,\widetilde{x}_{-}\;=\;\widetilde{x}_{c}\,.\] Finally let us verify (32) by contraposition. If \(x\in[0,\widetilde{x}_{+})\), then the order-preserving property (13) and Lemma 9 imply \[0\;\leq\;Q\cdot 0\;\leq\;Q\cdot(D\cdot x)\;\leq\;Q\cdot(D\cdot\widetilde{x}_{+})\;\leq\;\Lambda(D\cdot\widetilde{x}_{+})\;<\;\infty\,,\] just as claimed. \(\square\) Now a new process \(\widetilde{x}=(\widetilde{x}_{n})_{n\geq 0}\) on \([0,\infty]\) is constructed by setting for \(n\geq 0\) \[\widetilde{x}_{0}\;=\;\widetilde{x}_{-}\,,\qquad\widetilde{x}_{n+1}\;=\;\begin{cases}\widetilde{x}_{c}\;,&\text{ if }\widetilde{x}_{n}\leq\widetilde{x}_{-}\,,\\ \Lambda(D_{n}\cdot\widetilde{x}_{n})\;,&\text{ if }\widetilde{x}_{n}\in(\widetilde{x}_{-},\widetilde{x}_{+})\;,\\ \infty\;,&\text{ else, so if }\widetilde{x}_{n}\geq\widetilde{x}_{+}\,.\end{cases}\] Comparing with (14), the main case \(\widetilde{x}_{n+1}=\Lambda(D_{n}\cdot\widetilde{x}_{n})\) of this process merely bounds the action of \(Q_{n}\) for \(n\geq 2\). Let us now argue why this process satisfies the second inequality in (18). Indeed, replacing the action of \(Q_{n}\) by a multiplication by \(\Lambda\) speeds up the process because of the order-preserving property (13) and Lemma 9, which applies in the case \(\widetilde{x}_{n}\in(\widetilde{x}_{-},\widetilde{x}_{+})\). Carefully analyzing the first case in the definition of \(\widetilde{x}_{n+1}\) in combination with (30) and (31) shows that a.s. \(x_{N_{(1)}+n}\leq\widetilde{x}_{n}\) for all \(n\in\{0,1,\ldots,N_{(2)}-1-N_{(1)}\}\), that is, as long as \(x_{N_{(1)}+n}\in[0,\infty)\). Moreover, by (32) it is indeed impossible that \(\widetilde{x}_{N_{(2)}-N_{(1)}}\neq\infty\), as this would imply the contradiction \(x_{N_{(2)}-1}\geq\widetilde{x}_{+}>\widetilde{x}_{N_{(2)}-N_{(1)}-1}\). Therefore also (19) holds, and conversely indeed \(\widetilde{T}_{1}\leq N_{(2)}-N_{(1)}\) since a.s. \(\widetilde{x}_{\widetilde{T}_{1}}=\infty\).
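The scaling (29) of the reference points \(\widetilde{x}_{-}\) and \(\widetilde{x}_{+}\) entering this construction can be checked numerically. In the sketch below the Hypothesis constants \(C_{0},C_{2},C_{3}\) and the choice \(\lambda=|\log(\epsilon)|^{-1}\) (one admissible function satisfying (28)) are illustrative assumptions only.

```python
import numpy as np

C0, C2, C3 = 1.0, 1.0, 1.0   # illustrative Hypothesis constants

def reference_points(eps):
    lam = 1.0 / abs(np.log(eps))          # one admissible choice satisfying (28)
    Lam = np.exp(2 * C0 * lam)
    A = Lam * (C2 + C3 * eps) * eps
    B = Lam - 1 - C3 * eps**2 * (Lam + 1)
    C = (C2 + C3 * eps) * eps
    x_minus = np.exp(2 * C0) * (B**2 + 4 * A * C) * C / B**3
    x_plus = np.exp(-2 * C0) * (B**2 - 2 * A * C) / (A * B)
    return lam, x_minus, x_plus

for eps in (1e-4, 1e-6, 1e-8):
    lam, xm, xp = reference_points(eps)
    print(eps, lam * xm / eps, eps * xp / lam)
# The two printed ratios slowly approach C2*exp(2*C0)/(2*C0) ~ 3.69 and
# 2*C0/(C2*exp(2*C0)) ~ 0.27, in line with the limits (29).
```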
Now let us proceed to prove the limit behavior of \(\big{(}\mathbb{E}(\widetilde{T}_{1})\big{)}^{-1}\) as given in Proposition 2. As in the previous section, this is achieved by passing to a shifted logarithm of the Dyson-Schmidt variables, here via the map \(\widetilde{f}:(0,\infty)\to\mathbb{R}\) given by \[\widetilde{f}(x)\;:=\;\frac{1}{2C_{0}}\,\log\Big{(}\frac{x}{\widetilde{x}_{c}}\Big{)}\,.\] By construction, \(\widetilde{f}(\widetilde{x}_{c})=0\). Furthermore, for \(n\) such that \(\widetilde{x}_{n+1}<\infty\), let us introduce \[\widetilde{y}_{n}\;:=\;\widetilde{f}(\widetilde{x}_{n+1})\,,\qquad\widetilde{y}_{-}\;:=\;\widetilde{f}(\widetilde{x}_{-})\,,\qquad\widetilde{y}_{+}\;:=\;\widetilde{f}(\widetilde{x}_{+})\,,\] and the stopping time \[\widetilde{T}_{-,+}\;:=\;\inf\big{\{}n\in\mathbb{N}\,:\,\widetilde{y}_{n}\notin(\widetilde{y}_{-},\widetilde{y}_{+})\big{\}}\,.\] As long as \(n\leq\widetilde{T}_{-,+}\), it holds that \[\widetilde{y}_{n}\ =\ \tfrac{1}{2C_{0}}\log\Big{(}\frac{\Lambda^{n}(D^{n}\cdot\widetilde{x}_{1})}{\widetilde{x}_{c}}\Big{)}\ =\ \tfrac{1}{2C_{0}}\log\Big{(}\prod_{j=N_{(1)}+1}^{N_{(1)}+n}e^{2C_{0}\lambda}\kappa_{j}^{2}\Big{)}\ =\ \sum_{j=N_{(1)}+1}^{N_{(1)}+n}(\chi_{j}+\lambda)\,, \tag{33}\] namely \(\widetilde{y}_{n}\) is a random walk starting at \(\widetilde{y}_{0}=0\). For \(\lambda\) small enough, it still contains a drift in the negative direction. The following two lemmata recollect properties about these newly introduced quantities. **Lemma 11**: _For \(\epsilon\to 0\), both \(\frac{C_{0}\widetilde{y}_{+}}{-\log(\epsilon)}\) and \(-\widetilde{y}_{-}\) converge to \(1\)._ **Proof.** The first statement follows from the limit behavior of \(\widetilde{x}_{-}\) and \(\widetilde{x}_{+}\) as given in (29): \[\lim_{\epsilon\to 0}\frac{C_{0}\widetilde{y}_{+}}{-\log(\epsilon)}\ =\ \lim_{\epsilon\to 0}\frac{\log(\widetilde{x}_{+})-\log(e^{2C_{0}}\Lambda)-\log(\widetilde{x}_{-})}{-2\log(\epsilon)}\ =\ \lim_{\epsilon\to 0}\frac{\log(\lambda)-\log(\epsilon)}{-\log(\epsilon)}\ =\ 1\] by (28). The second statement follows from the observation that \(\widetilde{y}_{-}=-1-\lambda\). \(\square\) **Lemma 12**: \(\mathbb{E}(\widetilde{T}_{-,+})<+\infty\)_._ **Proof.** As \(\mathbb{P}(\{\chi>0\})>0\), arguing as in the proof of Lemma 7 shows that \(\widetilde{p}:=\mathbb{P}(\{\chi\geq\lambda\})\) is strictly positive for \(\lambda>0\) (and hence \(\epsilon\)) small enough. Denoting \(\widetilde{E}:=\lceil\frac{\widetilde{y}_{+}-\widetilde{y}_{-}}{\lambda}\rceil\) and introducing the random variable \[\widetilde{N}\ :=\ \min\big{\{}n\in\mathbb{N}\,:\,\chi_{(n-1)\widetilde{E}+1}\geq\lambda\,,\ \ \chi_{(n-1)\widetilde{E}+2}\geq\lambda\,,\,\ldots\,,\ \ \chi_{n\widetilde{E}}\geq\lambda\big{\}}\,,\] the latter then is geometrically distributed with success probability \(\widetilde{p}^{\widetilde{E}}\). In particular, one has \(\mathbb{E}(\widetilde{N})<\infty\). Moreover, \(\widetilde{T}_{-,+}<\widetilde{E}\,\widetilde{N}\) a.s. by construction, so \(\mathbb{E}(\widetilde{T}_{-,+})<\widetilde{E}\,\mathbb{E}(\widetilde{N})<\infty\). \(\square\) The connection between the two stopping times \(\widetilde{T}_{-,+}\) and \(\widetilde{T}_{1}\) is almost identical to that in the previous section: this time it holds that \(\mathbb{E}\big{(}\widetilde{T}_{1}\big{|}\,\widetilde{y}_{\widetilde{T}_{-,+}}\geq\widetilde{y}_{+}\big{)}=\mathbb{E}\big{(}\widetilde{T}_{-,+}+2\,\big{|}\,\widetilde{y}_{\widetilde{T}_{-,+}}\geq\widetilde{y}_{+}\big{)}\).
Therefore, up to this single change, the argument leading to (26) directly transposes (simply by replacing all hats with tildes, and with \(2\) instead of \(3\) everywhere), so that one has \[\big{(}\mathbb{E}(\widetilde{T}_{1})\big{)}^{-1}\ =\ \left[\frac{\mathbb{E}(\widetilde{T}_{-,+})+1}{\mathbb{P}\big{(}\big{\{}\widetilde{y}_{\widetilde{T}_{-,+}}\geq\widetilde{y}_{+}\big{\}}\big{)}}\,+\,1\right]^{-1}\,. \tag{34}\] In complete analogy with the previous section, one can next define (with \(\widetilde{\nu}\) the exponent introduced in Lemma 13 below) \[\widetilde{y}_{-}^{\prime}\ :=\ \mathbb{E}\big{(}\widetilde{y}_{\widetilde{T}_{-,+}}\big{|}\,\widetilde{y}_{\widetilde{T}_{-,+}}\leq\widetilde{y}_{-}\big{)}\,,\qquad\widetilde{y}_{-}^{\prime\prime}\ :=\ \tfrac{1}{C_{0}\widetilde{\nu}}\log\Big{(}\mathbb{E}\big{(}e^{C_{0}\widetilde{\nu}\widetilde{y}_{\widetilde{T}_{-,+}}}\big{|}\,\widetilde{y}_{\widetilde{T}_{-,+}}\leq\widetilde{y}_{-}\big{)}\Big{)}\,,\] \[\widetilde{y}_{+}^{\prime}\ :=\ \mathbb{E}\big{(}\widetilde{y}_{\widetilde{T}_{-,+}}\big{|}\,\widetilde{y}_{\widetilde{T}_{-,+}}\geq\widetilde{y}_{+}\big{)}\,,\qquad\widetilde{y}_{+}^{\prime\prime}\ :=\ \tfrac{1}{C_{0}\widetilde{\nu}}\log\Big{(}\mathbb{E}\big{(}e^{C_{0}\widetilde{\nu}\widetilde{y}_{\widetilde{T}_{-,+}}}\big{|}\,\widetilde{y}_{\widetilde{T}_{-,+}}\geq\widetilde{y}_{+}\big{)}\Big{)}\,,\] in which \(\widetilde{y}_{-}^{\prime},\widetilde{y}_{-}^{\prime\prime}\in[\widetilde{y}_{-}-1+\lambda,\widetilde{y}_{-}]\) and \(\widetilde{y}_{+}^{\prime},\widetilde{y}_{+}^{\prime\prime}\in[\widetilde{y}_{+},\widetilde{y}_{+}+1+\lambda]\). Now (33) implies that this time \(\widetilde{y}_{n}-n(\mathbb{E}(\chi)+\lambda)\) is a martingale. As \(|\chi|\leq 1\) a.s., its increments are a.s. bounded, more precisely \(|\widetilde{y}_{n+1}-(n+1)(\mathbb{E}(\chi)+\lambda)-\widetilde{y}_{n}+n(\mathbb{E}(\chi)+\lambda)|\leq 2\). With \(\mathbb{E}(\widetilde{T}_{-,+})<\infty\) from Lemma 12, the optional stopping theorem yields \(0=\mathbb{E}(\widetilde{y}_{0}-0\cdot(\mathbb{E}(\chi)+\lambda))=\mathbb{E}(\widetilde{y}_{\widetilde{T}_{-,+}}-\widetilde{T}_{-,+}\cdot(\mathbb{E}(\chi)+\lambda))\), or \[\mathbb{E}(\widetilde{T}_{-,+})\ =\ \frac{\mathbb{E}\big{(}\widetilde{y}_{\widetilde{T}_{-,+}}\big{)}}{\mathbb{E}(\chi)+\lambda}\ =\ \frac{\widetilde{y}_{-}^{\prime}\big{(}1-\mathbb{P}\big{(}\big{\{}\widetilde{y}_{\widetilde{T}_{-,+}}\geq\widetilde{y}_{+}\big{\}}\big{)}\big{)}\,+\,\widetilde{y}_{+}^{\prime}\mathbb{P}\big{(}\big{\{}\widetilde{y}_{\widetilde{T}_{-,+}}\geq\widetilde{y}_{+}\big{\}}\big{)}}{\mathbb{E}(\chi)+\lambda}\,. \tag{35}\] **Lemma 13**: _For \(\lambda\) close enough to \(0\), i.e. \(\epsilon\) small enough, there is a unique solution \(\widetilde{\nu}\in(0,\infty)\) for \(\rho\) of the equation \(\mathbb{E}(e^{C_{0}\rho(\chi+\lambda)})=1\), implying that \(e^{C_{0}\widetilde{\nu}\widetilde{y}_{n}}\) is a martingale. Moreover, \(\widetilde{\nu}<\nu\) and \(\lim_{\lambda\to 0}\widetilde{\nu}=\nu\)._ **Proof.** As \(\mathbb{E}(\chi)+\lambda<0\) for \(\lambda\) small enough and \(\mathbb{P}(\{\chi+\lambda>0\})\) is still positive, the proof of existence and uniqueness of \(\widetilde{\nu}\) is identical to that of Lemma 8. Now, for \(\rho\in(0,\infty)\) the value of \(e^{C_{0}\rho\lambda}\) strictly decreases as \(\lambda\to 0\). The strict convexity thus implies that \(\widetilde{\nu}\) strictly increases as \(\lambda\) decreases to \(0\), hence \(\widetilde{\nu}<\nu\) for \(\lambda>0\).
If \(\widetilde{\nu}\leq\nu^{\prime}\) for all \(\lambda>0\) and some \(\nu^{\prime}<\nu\), then \(\widetilde{\nu}<\frac{\nu^{\prime}+\nu}{2}<\nu\) so that \(\mathbb{E}\big{(}\exp(C_{0}\frac{\nu^{\prime}+\nu}{2}(\chi+\lambda))\big{)}\geq 1\) for \(\lambda\) sufficiently small, contradicting the uniqueness of the solution \(\rho=\nu\) on \((0,\infty)\) of \(\mathbb{E}(e^{C_{0}\rho\chi})=1\). \(\square\) As \(\widetilde{y}_{n}\in[\widetilde{y}_{-}-1+\lambda,\widetilde{y}_{+}+1+\lambda]\) and \(|\chi|\leq 1\) a.s., the increments of the martingale of Lemma 13 are uniformly bounded by \(|e^{C_{0}\widetilde{\nu}\widetilde{y}_{n}}-e^{C_{0}\widetilde{\nu}\widetilde{y}_{n+1}}|=e^{C_{0}\widetilde{\nu}\widetilde{y}_{n}}|e^{C_{0}\widetilde{\nu}(\chi_{n}+\lambda)}-1|\leq e^{C_{0}\widetilde{\nu}(\widetilde{y}_{+}+1+\lambda)}|e^{C_{0}\widetilde{\nu}(1+\lambda)}-1|\). Then, exactly as in the previous section, replacing \(\chi\) by \(\chi+\lambda\) and all hats by tildes, one gets \[\big{(}\mathbb{E}(\widetilde{T}_{1})\big{)}^{-1}\;=\;\left[\left(1+\frac{C_{0}\,\widetilde{y}_{-}}{\mathbb{E}(\log(\kappa_{0}))+C_{0}\lambda}\right)\,\frac{e^{C_{0}\widetilde{\nu}\widetilde{y}_{+}^{\prime\prime}}-e^{C_{0}\widetilde{\nu}\widetilde{y}_{-}^{\prime\prime}}}{1-e^{C_{0}\widetilde{\nu}\widetilde{y}_{-}^{\prime\prime}}}\,+\,\frac{C_{0}\,(\widetilde{y}_{+}^{\prime}-\widetilde{y}_{-}^{\prime})}{\mathbb{E}(\log(\kappa_{0}))+C_{0}\lambda}\,+\,1\right]^{-1}\,, \tag{36}\] which together with the two statements of Lemma 11 implies the second statement of Proposition 2 with \[C_{+}\;:=\;\left(1+\frac{C_{0}\,\widetilde{y}_{-}}{\mathbb{E}(\log(\kappa_{0}))}\right)^{-1}(1-e^{C_{0}\widetilde{\nu}\widetilde{y}_{-}})\;,\] because \[C_{+}\;\geq\;\lim_{\epsilon\to 0}\left(1+\frac{C_{0}\,\widetilde{y}_{-}}{\mathbb{E}(\log(\kappa_{0}))+C_{0}\lambda}\right)^{-1}\,\frac{1-e^{C_{0}\widetilde{\nu}\widetilde{y}_{-}^{\prime\prime}}}{\epsilon^{\widetilde{\nu}}e^{C_{0}\widetilde{\nu}\widetilde{y}_{+}^{\prime\prime}}}\;=\;\lim_{\epsilon\to 0}\frac{\big{(}\mathbb{E}(\widetilde{T}_{1})\big{)}^{-1}}{\epsilon^{\widetilde{\nu}}}\,.\] ## 7 Modifications for the balanced case This final section considers the balanced case \(\mathbb{E}(\log(\kappa_{0}))=0\). Hence Figure 2 is not valid any longer, but rather has to be modified to Figure 5. The action induced by \(D_{\kappa}\) now yields no average drift anywhere on \(\overline{\mathbb{R}}\). Therefore the random dynamics on the two half-axes \((-\infty,0)\cup\{\infty\}\) and \([0,\infty)\) is essentially the same, up to flipping the sign of \(b_{n}\), swapping \(\alpha_{n}^{\epsilon}\) for \(\delta_{n}^{\epsilon}\) and \(\beta_{n}^{\epsilon}\) for \(-\gamma_{n}^{\epsilon}\), as well as changing \(\kappa_{n}\) to \(\kappa_{n}^{-1}\). Figure 5: _The dynamics of \(\theta_{n}\) on the real line in the balanced case where \(\mathbb{E}(\log(\kappa_{0}))=0\). Contrary to the unbalanced case depicted in Figure 2, there are no drifts in this situation._
Indeed, the bijective orientation preserving map \(x\in(-\infty,0)\cup\{\infty\}\mapsto-x^{-1}\in[0,\infty)\) identifies these intervals and, moreover, \[Q_{n}^{\epsilon}\cdot(-x^{-1})\;=\;-\left[\frac{(1+\epsilon^{2}\delta_{n}^{\epsilon})x+(a_{n}+b_{n}+\epsilon\gamma_{n}^{\epsilon})\epsilon}{1+\epsilon^{2}\alpha_{n}^{\epsilon}-(a_{n}-b_{n}-\epsilon\beta_{n}^{\epsilon})\epsilon x}\right]^{-1}\,,\quad D_{n}\cdot(-x^{-1})\;=\;-\;\left[\kappa_{n}^{-2}x\right]^{-1}\,.\] As all estimates making use of the constants \(C_{1}\), \(C_{2}\), \(C_{3}\) and \(\mathbb{E}(\log(\kappa_{0})^{2})\) are invariant under the above swapping, it is sufficient to analyze the random dynamics on \([0,\infty)\). So, (21) changes to \[\frac{1}{2\mathbb{E}(\widehat{T}_{1}^{\epsilon})}\;\leq\;\lim_{N\to\infty}\frac{1}{N}\frac{\mathbb{E}(\theta_{N}^{\epsilon})}{\pi}\;\leq\;\frac{1}{2\mathbb{E}(\widetilde{T}_{1}^{\epsilon})}\,, \tag{37}\] for which again families of nonnegative i.i.d. random variables \((\widehat{x}_{2k-1}^{\epsilon})_{k\in\mathbb{N}}\), \((\widehat{x}_{2k}^{\epsilon})_{k\in\mathbb{N}}\), \((\widetilde{x}_{2k-1}^{\epsilon})_{k\in\mathbb{N}}\) and \((\widetilde{x}_{2k}^{\epsilon})_{k\in\mathbb{N}}\) can be constructed. Note that this time the bounds do not differentiate between the processes with odd and even index \(k\). Applying logarithmic transformations similar to \(\widehat{f}\) and \(\widetilde{f}\) to the processes \(\widehat{x}_{1}^{\epsilon}\) and \(\widetilde{x}_{1}^{\epsilon}\), one can obtain exactly the same processes \(\widehat{y}\) and \(\widetilde{y}\) as in (25) and (33) (again for \(n\) smaller than some stopping time similar to \(\widehat{T}_{-,+}\) or \(\widetilde{T}_{-,+}\) respectively). Also the calculations leading to (26) and (34) remain valid. As in the previous two sections, each expectation in (37) can be found by applying the optional stopping theorem to two martingales. In addition to the primed constants that were introduced before, let us set \[\widehat{y}_{-}^{\prime\prime\prime} :=\;-\sqrt{\mathbb{E}\big{(}\widehat{y}_{\widehat{T}_{-,+}}^{2}\big{|}\,\widehat{y}_{\widehat{T}_{-,+}}\leq\widehat{y}_{-}\big{)}}\,,\qquad\widetilde{y}_{-}^{\prime\prime\prime} :=\;-\sqrt{\mathbb{E}\big{(}\widetilde{y}_{\widetilde{T}_{-,+}}^{2}\big{|}\,\widetilde{y}_{\widetilde{T}_{-,+}}\leq\widetilde{y}_{-}\big{)}}\,,\] \[\widehat{y}_{+}^{\prime\prime\prime} :=\;\sqrt{\mathbb{E}\big{(}\widehat{y}_{\widehat{T}_{-,+}}^{2}\big{|}\,\widehat{y}_{\widehat{T}_{-,+}}\geq\widehat{y}_{+}\big{)}}\,,\qquad\widetilde{y}_{+}^{\prime\prime\prime} :=\;\sqrt{\mathbb{E}\big{(}\widetilde{y}_{\widetilde{T}_{-,+}}^{2}\big{|}\,\widetilde{y}_{\widetilde{T}_{-,+}}\geq\widetilde{y}_{+}\big{)}}\,.\] The estimates \(\widehat{y}_{-}^{\prime\prime\prime}\in[\widehat{y}_{-}-1,\widehat{y}_{-}]\), \(\widetilde{y}_{-}^{\prime\prime\prime}\in[\widetilde{y}_{-}-1+\lambda,\widetilde{y}_{-}]\), \(\widehat{y}_{+}^{\prime\prime\prime}\in[\widehat{y}_{+},\widehat{y}_{+}+1]\), \(\widetilde{y}_{+}^{\prime\prime\prime}\in[\widetilde{y}_{+},\widetilde{y}_{+}+1+\lambda]\) will again be used in what follows to bound these quantities.
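Before carrying out the optional stopping computations, a quick Monte Carlo plausibility check of the balanced-case identity \(\mathbb{E}(T)=\mathbb{E}(y_{T}^{2})/\mathbb{E}(\chi^{2})\) may be helpful. In the sketch below the walls and the symmetric law for \(\chi\) (uniform on \([-1,1]\)) are illustrative stand-ins only, not the quantities defined above.

```python
import numpy as np

rng = np.random.default_rng(1)
y_minus, y_plus = -0.35, 12.0   # stand-in walls for the exit problem
chi_var = 1.0 / 3.0             # E(chi^2) for chi ~ Uniform[-1, 1]

exit_times, exit_sq = [], []
for _ in range(20_000):
    y, n = 0.0, 0
    while y_minus < y < y_plus:
        y += rng.uniform(-1.0, 1.0)  # centered step chi with |chi| <= 1
        n += 1
    exit_times.append(n)
    exit_sq.append(y * y)

# Optional stopping for the martingale y_n^2 - n*E(chi^2) predicts
# E(T) = E(y_T^2) / E(chi^2); the two estimates below should agree.
print("empirical E(T):   ", np.mean(exit_times))
print("E(y_T^2)/E(chi^2):", np.mean(exit_sq) / chi_var)
```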
In the slower case (with hats), applying the optional stopping theorem to the martingales \(\widehat{y}_{n}\) and \(\widehat{y}_{n}^{2}-n\mathbb{E}(\chi^{2})\) then yields \[\big{(}\mathbb{E}(\widehat{T}_{1})\big{)}^{-1}\;=\;\frac{\mathbb{E}(\chi^{2})}{(\widehat{y}_{+}^{\prime\prime\prime})^{2}}\left[1\,+\,\frac{\mathbb{E}(\chi^{2})(\widehat{y}_{+}^{\prime}-3\widehat{y}_{-}^{\prime})+\widehat{y}_{+}^{\prime}(\widehat{y}_{-}^{\prime\prime\prime})^{2}}{-\widehat{y}_{-}^{\prime}(\widehat{y}_{+}^{\prime\prime\prime})^{2}}\right]^{-1}\,,\] which together with the limits of Lemma 6 then shows the first statement of Proposition 3. In addition to the defining properties of \(\lambda\) in (28), it will be necessary to require \[\lim_{\epsilon\to 0}\lambda\log(\epsilon)\;=\;0\] to control the faster process (with tildes) in the balanced case, as this implies \(\lim_{\epsilon\to 0}\lambda\widetilde{y}_{+}=0\). One can check that the choice \(\lambda:=[\log(\epsilon)]^{-2}\) meets all the given conditions. A first martingale is now given by \(\widetilde{y}_{n}-n\lambda\) which leads to exactly the same result as (35) with \(\mathbb{E}(\chi)=0\) in this case. Next, similar to the proofs of Lemmata 8 and 13, one can show that there exists a unique real solution \(\widetilde{\rho}\) for \(\rho\) solving \(\mathbb{E}(e^{C_{0}\rho(\chi+\lambda)})=1\). This quantity must be negative, clearly depends on \(\lambda\) (so on \(\epsilon\)) and obeys \(\lim_{\lambda\to 0}\widetilde{\rho}=0\). The implicit definition of \(\widetilde{\rho}\) as a function of \(\lambda\) can be written as \(\lambda=\frac{\log(\mathbb{E}(e^{C_{0}\widetilde{\rho}\chi}))}{-C_{0}\widetilde{\rho}}\). By Fubini's theorem, this is an analytic function in \(\widetilde{\rho}\) (as it is also well-defined for \(\widetilde{\rho}\) positive, so \(\lambda\) negative), with \(\frac{-C_{0}\mathbb{E}(\chi^{2})}{2}\neq 0\) as its first derivative w.r.t. \(\widetilde{\rho}\). This allows one to use the Lagrange inversion theorem for analytic functions, which shows \[\widetilde{\rho}\;=\;-\frac{2}{C_{0}\,\mathbb{E}(\chi^{2})}\,\lambda\,-\,\frac{4\,\mathbb{E}(\chi^{3})}{3\,C_{0}\left(\mathbb{E}(\chi^{2})\right)^{3}}\,\lambda^{2}\,+\,\mathcal{O}(\lambda^{3})\,. \tag{38}\] Inserting this after applying the optional stopping theorem to the martingale \(e^{C_{0}\widetilde{\rho}\widetilde{y}_{n}}\) then yields \[1\;=\;\mathbb{E}\big{(}e^{C_{0}\widetilde{\rho}\cdot 0}\big{)}\;=\;\mathbb{E}\big{(}e^{C_{0}\widetilde{\rho}\widetilde{y}_{0}}\big{)}\;=\;\mathbb{E}\big{(}e^{C_{0}\widetilde{\rho}\widetilde{y}_{\widetilde{T}_{-,+}}}\big{)}\;=\;1\,-\,\mathbb{E}(\widetilde{y}_{\widetilde{T}_{-,+}})\,\left[\frac{2\,\lambda}{\mathbb{E}(\chi^{2})}+\frac{4\,\mathbb{E}(\chi^{3})\,\lambda^{2}}{3\big{(}\mathbb{E}(\chi^{2})\big{)}^{3}}\right]\,+\,\frac{1}{2}\,\mathbb{E}(\widetilde{y}_{\widetilde{T}_{-,+}}^{2})\,\frac{4\,\lambda^{2}}{\big{(}\mathbb{E}(\chi^{2})\big{)}^{2}}\,+\,\mathcal{O}\big{(}[\lambda\widetilde{y}_{+}]^{3}\big{)}\,,\] where the big \(\mathcal{O}\)-notation makes sense as it was required earlier that \(\lambda\widetilde{y}_{+}\to 0\) for \(\epsilon\to 0\).
Hence, \[\frac{1}{\mathbb{P}(\{\widetilde{y}_{\widetilde{T}_{-,+}}\geq\widetilde{y}_{+}\})}\;=\;\frac{\widetilde{y}_{+}^{\prime}-\widetilde{y}_{-}^{\prime}}{-\widetilde{y}_{-}^{\prime}}\left[1-\frac{\big{(}(\widetilde{y}_{-}^{\prime\prime\prime})^{2}\widetilde{y}_{+}^{\prime}-\widetilde{y}_{-}^{\prime}(\widetilde{y}_{+}^{\prime\prime\prime})^{2}\big{)}\lambda}{-\widetilde{y}_{-}^{\prime}(\widetilde{y}_{+}^{\prime}-\widetilde{y}_{-}^{\prime})\mathbb{E}(\chi^{2})}\,+\,\mathcal{O}\left([\lambda\widetilde{y}_{+}]^{2}\right)\right]\,,\] when carefully treating the error terms. Inserting this and (35) (with \(\mathbb{E}(\chi)=0\)) into (34) yields \[\Big{(}\mathbb{E}(\widetilde{T}_{1})\Big{)}^{-1} =\;\left[\frac{1}{\mathbb{P}(\{\widetilde{y}_{\widetilde{T}_{-,+}}\geq\widetilde{y}_{+}\})}\,\bigg{[}\frac{\widetilde{y}_{-}^{\prime}}{\lambda}\,+\,1\bigg{]}\,+\,\frac{\widetilde{y}_{+}^{\prime}-\widetilde{y}_{-}^{\prime}}{\lambda}\,+\,1\right]^{-1}\] \[=\;\frac{\mathbb{E}(\chi^{2})}{(\widetilde{y}_{+}^{\prime\prime\prime})^{2}}\left[1\;+\;\frac{(\widetilde{y}_{-}^{\prime\prime\prime})^{2}\widetilde{y}_{+}^{\prime}\,+\,\mathbb{E}(\chi^{2})(\widetilde{y}_{+}^{\prime}-2\,\widetilde{y}_{-}^{\prime})}{-\widetilde{y}_{-}^{\prime}(\widetilde{y}_{+}^{\prime\prime\prime})^{2}}+\mathcal{O}(\lambda\widetilde{y}_{+})\right]^{-1}\,,\] which together with the limits stated in Lemma 11 implies the second statement of Proposition 3. **Remark:** It is possible to analyze the scaling of the quantity \(|\mathcal{N}(E)-\mathcal{N}(0)|\) when both \(E\) and \(|\mathbb{E}(\log(\kappa))|\) (or, equivalently, \(|\mathbb{E}(\chi)|\)) tend to zero. There are two different regimes, as depicted in Figure 6. If \(\lim_{E\to 0}|\mathbb{E}(\chi)\log(E)|<\infty\), then \(|\mathcal{N}(E)-\mathcal{N}(0)|\) is proportional to \(|\log(E)|^{-2}\). If the given limit equals zero, then the analysis in this section applies with \(|\mathbb{E}(\chi)|\) taking the role of \(\lambda\) (and \(E\) that of \(\epsilon\)). For a non-vanishing limit, a lower (with hats instead of tildes and \(\lambda\) set to zero) and an upper bound on \(|\mathcal{N}(E)-\mathcal{N}(0)|\) are given by (36), in which \(\widetilde{\nu}\) needs to be replaced by \(\widetilde{\rho}\). In its expansion (38), one then needs to replace \(\lambda\) by \(|\mathbb{E}(\chi)|\) and \(|\mathbb{E}(\chi)|+\lambda\) respectively. If \(\mathbb{E}(\chi)\log(E)\) converges to a non-zero constant (the case on the separating line in Figure 6), then also the factor \(e^{C_{0}\widetilde{\rho}\widetilde{y}_{+}^{\prime\prime}}\) tends to a positive constant, and a further expansion in \(|\mathbb{E}(\chi)|\) shows that \(|\mathcal{N}(E)-\mathcal{N}(0)|\) is also in this case proportional to \(|\log(E)|^{-2}\). Finally, if \(|\mathbb{E}(\chi)\log(E)|\to\infty\) for \(E\to 0\), then also \(e^{C_{0}\widetilde{\rho}\widetilde{y}_{+}^{\prime\prime}}\to\infty\). The dominant term between brackets in (36) contains the latter factor, which then implies the scaling of \(|\mathcal{N}(E)-\mathcal{N}(0)|\sim E^{\frac{2|\mathbb{E}(\log(\kappa))|}{\mathbb{E}((\log(\kappa))^{2})}}|\mathbb{E}(\log(\kappa))|^{2}\) as indicated for the grey region in Figure 6. \(\diamond\) **Acknowledgements:** The authors thank Sasha Sodin for bringing the works of Dyson [6] and Kotowski and Virag [12] to their attention after a first preprint had appeared. This work was supported by the DFG grant SCHU 1358/8-1 and the Chilean grant FONDECYT 1201836. This manuscript has no associated data.
The authors have no competing interests to declare that are relevant to the content of this article.
2305.13518
Extended uncertainty principle and Van der Waals black holes
In this manuscript, we investigate the extended uncertainty principle (EUP) effects on the Van der Waals (VdW) black holes whose thermal quantities mimic the VdW liquid. We find that the considered formalism imposes an upper bound on the event horizon radius. Thus, the mass, Hawking temperature, and heat capacity become physically meaningful within a certain range of event horizon radii. At a large event horizon radius the black hole has a remnant. We observe that for a given set of parameters, the VdW black hole can be completely unstable for all horizon radii, while for another set of parameters, it can be unstable or stable depending on the horizon radius.
R. Oubagha, B. Hamil, B. C. Lütfüoğlu, M. Merad
2023-05-22T22:18:40Z
http://arxiv.org/abs/2305.13518v2
# Extended uncertainty principle and Van der Waals black holes

###### Abstract

In this manuscript, we investigate the extended uncertainty principle (EUP) effects on the Van der Waals (VdW) black holes whose thermal quantities mimic the VdW liquid. We find that the considered formalism imposes an upper bound on the event horizon radius. Thus, the mass, Hawking temperature, and heat capacity become physically meaningful within a certain range of event horizon radii. At a large event horizon radius the black hole has a remnant. Whether the VdW black hole is stable or not depends on the black hole parameters.

## 1 Introduction

An important prediction of several candidates of quantum gravity, such as string theory [1], loop quantum gravity [2], and non-commutative geometry [3], is the existence of a fundamental distance at the order of the Planck length \(\ell_{P}=10^{-35}\)m. This length cannot be directly detected using existing technology. This is due to the fact that quantum gravity effects are predicted to be directly observable only at energy levels on the order of the Planck energy \(\sim 10^{19}\) GeV, some 15 orders of magnitude greater than the energy scales currently attainable by the Large Hadron Collider. This is why indirect measurements of the Planck length are relied on to investigate quantum gravity experimentally. On the other hand, the existence of a minimum length in the quantum mechanical framework can also be explained by a deformation of Heisenberg's usual uncertainty principle (HUP). The new formalism, which is called the generalized uncertainty principle (GUP) in the literature, is given by a modification of the position and/or momentum operators of the Hilbert space [4, 5, 6]. Such modifications can also change the curvature of spacetime [4, 5, 6, 7, 8]. For example, a deformed algebra that defines the minimum measurable momentum concept is known in the literature as the Extended Uncertainty Principle (EUP). Moreover, in an interesting work [9], Mignemi has shown that the EUP formalism can be obtained if quantum mechanics is defined on an (anti)-de Sitter (AdS) background with an appropriately selected parametrization. On the other hand, one of the most exciting topics in present times is the thermodynamics of gravitational objects, which includes the possibility of associating concepts like temperature, pressure, volume, and entropy with them. In the classical approach of general relativity, a black hole cannot emit radiation. However, in 1975, through a semi-classical perspective, Hawking demonstrated that a black hole could indeed emit radiation [10]. In the following year, he examined the radiation through the Wick rotation method and proposed that if quantum effects are taken into account, then black holes can certainly radiate [11, 12]. This discovery, later called Hawking radiation, led to significant progress in the field and convinced scientists to discuss the thermal quantities of black holes via a measurable thermodynamic temperature. Other than Hawking's approach, several other methods have been proposed in the literature to predict the temperature of a black hole [13, 14, 15, 16]. One of them, which will be used in this manuscript, is based on the surface gravity of the black hole, and it employs the zeroth and first laws of black hole thermodynamics.
It is worth noting that black hole entropy is assumed to be linearly proportional to the event horizon area in Planck units, and according to the second law of black hole thermodynamics, the surface area, and thus the entropy, does not decrease [17, 18]. The thermodynamic properties of asymptotically AdS black holes have prompted authors to study the AdS/CFT correspondence. In [19], Hawking and Page discussed a first-order phase transition between the thermal AdS space and the Schwarzschild AdS black hole. In an interesting study [20], the authors showed that when they generalized the Schwarzschild AdS black hole to the case of a charged or rotating black hole, they obtained the same behavior as the Van der Waals (VdW) fluid. Moreover, they claimed that the analogy is further strengthened if one instead considers the extended phase space, in which the negative cosmological constant corresponds to the thermodynamic pressure [21]. Inspired by this qualitative analogy, Rajagopal et al. presented a black hole metric whose thermodynamics mimics the VdW fluid [22]. In the same year, Delsate and Mann extended the metric form to \(d\) dimensions by assuming a particular stress tensor form [23]. They concluded that the latter metric could be considered a near-horizon one. In 2016, Parthapratim discussed the enthalpy, geometric volume and quantum-corrected entropy functions of the VdW black hole [24]. Later, Hu et al. handled the same system with Quevedo's geometrothermodynamic method [25]. For further details, we refer to the review article [26]. The thermal quantities of other gases were also shown to be correlated with black hole thermodynamics. For example, in ref. [27], Setare and Adami introduced a Polytropic black hole, whose spacetime is asymptotically AdS, with the same thermal properties as a Polytropic gas. In [28], Debnath derived a novel black hole metric whose thermodynamics matches a special case of the Chaplygin gas model. Recently, Okcu and Aydiner revisited the VdW black hole thermodynamics within the GUP formalism and discussed the quantum deformation effects by comparing their findings with the original ones [29]. Keeping in mind the rich content of the EUP formalism, which differs from the GUP formalism, in this paper we aim to investigate the VdW black hole and its thermodynamics in the EUP formalism. To this end, we construct the manuscript as follows: In Sect. 2, we briefly introduce the considered EUP formalism and present its effect on the Hawking temperature with a semi-classical approach. Then, in Sect. 3, we examine the EUP-corrected VdW black hole thermodynamics. In Sect. 4, we revisit our results by reducing them to two special sub-cases. Finally, we end the manuscript with a brief conclusion.

## 2 Extended uncertainty principle and Hawking temperature

In this study, we consider the most basic form of the EUP formalism, which is examined comprehensively in [4, 5, 6, 7, 8, 9], in AdS space: \[\Delta X\Delta P\geq\hbar\left(1+\beta\left(\Delta X\right)^{2}\right). \tag{1}\] Here, \(\beta\) is the deformation parameter in the form of \(\beta=\frac{\beta_{0}}{L_{*}^{2}}\), with a dimensionless parameter, \(\beta_{0}\), and a large fundamental distance scale, \(L_{*}\). Undoubtedly, the most important feature of this quantum deformation is that it sets an absolute lower bound on the momentum uncertainty, \(\left(\Delta P\right)_{\min}=\hbar\sqrt{\beta}\).
This is why it has recently attracted increasing interest [30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46]. On the other hand, in the semi-classical approach of black hole thermodynamics, the Hawking temperature is defined by \[T=\frac{\kappa}{8\pi}\times\frac{dA}{dS}, \tag{2}\] where \(\kappa\), \(A\), \(S\) denote the surface gravity at the outer horizon, the surface area, and the entropy, respectively. According to the heuristic approach, the absorption of a particle causes a minimal change in the black hole's surface of the form [47]: \[\Delta A\simeq\Delta X\Delta P. \tag{3}\] Assuming that the position uncertainty on the horizon is proportional to the event horizon radius, \(\Delta X=r_{H}\), we can express the minimal increase of the surface area in the EUP case as \[\Delta A\simeq\frac{\gamma\hbar}{2}\left(1+\beta r_{H}^{2}\right). \tag{4}\] Here, \(\gamma\) is the calibration factor that tunes the result to the HUP limit. As a result of particle absorption, the minimal increase in black hole entropy can be taken as \(\left(\Delta S\right)_{\min}=\ln 2\), so that we have \[\frac{dA}{dS}\simeq\frac{\left(\Delta A\right)_{\min}}{\left(\Delta S\right)_{\min}}=\frac{\gamma\hbar}{2\ln 2}\left(1+\beta r_{H}^{2}\right). \tag{5}\] After determining the calibration factor in the limit of \(\beta\to 0\) as \(\gamma=4\ln 2\), we can express the EUP-corrected Hawking temperature as \[T=\frac{\hbar\kappa}{4\pi}\left(1+\beta r_{H}^{2}\right). \tag{6}\] Hereafter, we set \(\hbar=1\) for simplicity.

## 3 EUP-corrected thermodynamics of Van der Waals black holes

In the literature, the VdW equation of state describing the VdW fluid is expressed in closed form by a two-parameter equation [22]: \[T=\left(P+\frac{a}{v^{2}}\right)\left(v-b\right). \tag{7}\] Here, \(a\) and \(b\) are two positive constants measuring, respectively, the attraction between the fluid molecules and their volume, and \(v\) is the specific volume. Now, we have to construct an asymptotically AdS black hole metric that precisely mimics the thermodynamics of the considered VdW fluid equation of state. To this end, we adopt the static spherically symmetric ansatz \[ds^{2}=-f\left(r\right)dt^{2}+\frac{1}{f\left(r\right)}dr^{2}+r^{2}\left(d\theta^{2}+\sin^{2}\theta d\phi^{2}\right), \tag{8}\] with the lapse function, \(f\left(r\right)\), of the following form \[f\left(r\right)=\frac{r^{2}}{l^{2}}-\frac{2M}{r}-h\left(r,P\right). \tag{9}\] Here, \(M\) is the black hole mass, \(h\left(r,P\right)\) is an unknown function that has to be determined, \(l\) is the AdS radius, and \(P\) is the thermodynamic pressure defined by \[P=\frac{3}{8\pi l^{2}}. \tag{10}\] First, we express the mass in terms of the event horizon radius, \(r_{H}\), using \(f\left(r_{H}\right)=0\): \[M=\frac{4}{3}\pi r_{H}^{3}P-\frac{r_{H}}{2}h\left(r_{H},P\right). \tag{11}\] Next, we derive the thermodynamic volume \[V=\frac{\partial M}{\partial P}=\frac{4}{3}\pi r_{H}^{3}-\frac{r_{H}}{2}\frac{\partial}{\partial P}h\left(r_{H},P\right), \tag{12}\] and then we express the specific volume using \(v=6V/N\), where \(N\) stands for the number of degrees of freedom [26]: \[v=\frac{6}{4\pi r_{H}^{2}}\left[\frac{4}{3}\pi r_{H}^{3}-\frac{r_{H}}{2}\frac{\partial}{\partial P}h\left(r_{H},P\right)\right]. \tag{13}\]
After that, with the help of Eqs. (6), (9) and (11), we derive the EUP-corrected Hawking temperature of the VdW black hole in terms of the undetermined function as \[T=\frac{1}{4\pi}\left(8\pi r_{H}P-\frac{1}{r_{H}}h\left(r_{H},P\right)-\frac{\partial}{\partial r_{H}}h\left(r_{H},P\right)\right)\left(1+\beta r_{H}^{2}\right). \tag{14}\] By assuming that \(h\left(r,P\right)=A\left(r\right)-PB\left(r\right)\) and using the equality between (7) and (14), we obtain an equation of the form \[F_{1}\left(r_{H}\right)+PF_{2}\left(r_{H}\right)=0,\] where the newly defined functions, \(F_{1}\left(r\right)\) and \(F_{2}\left(r\right)\), depend on the \(A\left(r\right)\) and \(B\left(r\right)\) functions and their derivatives. Since this equation must hold for arbitrary pressure \(P\), each component must vanish separately. This can be achieved by solving the following two ordinary differential equations simultaneously: \[v-b = \frac{1}{4\pi}\left(8\pi r_{H}+\frac{1}{r_{H}}B\left(r_{H}\right)+\frac{\partial}{\partial r_{H}}B\left(r_{H}\right)\right)\left(1+\beta r_{H}^{2}\right), \tag{15}\] \[\frac{a}{v^{2}}\left(v-b\right) = -\frac{1}{4\pi}\left(\frac{1}{r_{H}}A\left(r_{H}\right)+\frac{\partial}{\partial r_{H}}A\left(r_{H}\right)\right)\left(1+\beta r_{H}^{2}\right). \tag{16}\] By substituting Eq. (13) into Eq. (15), we find \[2r_{H}+\frac{3}{4\pi r_{H}}B\left(r_{H}\right)-b=\frac{1}{4\pi}\left(8\pi r_{H}+\frac{1}{r_{H}}B\left(r_{H}\right)+\frac{\partial}{\partial r_{H}}B\left(r_{H}\right)\right)\left(1+\beta r_{H}^{2}\right). \tag{17}\] To first order in \(\beta\), Eq. (17) yields the following solution \[B\left(r_{H}\right)=-4\pi r_{H}^{2}\left(2b\beta r_{H}-\frac{b}{r_{H}}+\frac{2}{3}\right)+c_{1}r_{H}^{2}\left(1-\frac{3}{2}\beta r_{H}^{2}\right), \tag{18}\] with an integration constant \(c_{1}\). To determine \(c_{1}\), we can go to the limit of \(\beta\to 0\), where Eq. (18) should reduce to the HUP result given in [22]. In so doing, we find \(c_{1}=\frac{8\pi}{3}\). Thus, Eq. (18) reads \[B\left(r_{H}\right)=4\pi br_{H}-8\pi\beta r_{H}^{3}\left(b+\frac{r_{H}}{2}\right). \tag{19}\] Then, inserting Eq. (19) into Eq. (16) and following similar straightforward algebra, we find that Eq. (16) can be expressed as follows \[\frac{\partial}{\partial r_{H}}A\left(r_{H}\right) = -\frac{1}{r_{H}}A\left(r_{H}\right)+\frac{4\pi ab}{\left(2r_{H}+3b\right)^{2}}\left[1+\frac{6\beta r_{H}^{2}}{2r_{H}+3b}\left(2b+r_{H}\right)\right]+ \tag{20}\] \[\frac{4\pi a\beta r_{H}^{2}}{\left(2r_{H}+3b\right)}-\frac{4\pi ab\beta r_{H}^{2}}{\left(2r_{H}+3b\right)^{2}}-\frac{4\pi a}{\left(2r_{H}+3b\right)}\left[1+\frac{3\beta r_{H}^{2}}{2r_{H}+3b}\left(2b+r_{H}\right)\right],\] which leads to the following solution of the form \[A\left(r_{H}\right)=\frac{4\pi ab}{r_{H}}\log(\frac{3}{2}+\frac{r_{H}}{b})+\frac{81\beta\pi ab^{5}}{8(3b+2r_{H})^{2}}+\frac{3\pi ab^{2}\left(8-45\beta b^{2}\right)}{8r_{H}\left(3b+2r_{H}\right)}-2\pi a\left(1+\frac{\beta r_{H}^{2}}{6}-\frac{\beta br_{H}}{2}+\frac{9\beta b^{2}}{8}\right). \tag{21}\]
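As an independent consistency check (a sketch, not part of the derivation), a computer algebra system can confirm that the profile (19) satisfies Eq. (17) up to terms of order \(\beta^{2}\):

```python
import sympy as sp

r, b, beta = sp.symbols('r b beta', positive=True)

# Eq. (19): first-order solution for B(r_H)
B = 4*sp.pi*b*r - 8*sp.pi*beta*r**3*(b + r/2)

lhs = 2*r + 3*B/(4*sp.pi*r) - b                                     # l.h.s. of Eq. (17)
rhs = (8*sp.pi*r + B/r + sp.diff(B, r))*(1 + beta*r**2)/(4*sp.pi)   # r.h.s. of Eq. (17)

diff = sp.expand(lhs - rhs)
print(sp.simplify(diff / beta**2))        # finite expression: mismatch is O(beta^2)
print(sp.limit(diff, beta, 0))            # 0: Eq. (17) holds at order beta^0
print(sp.diff(diff, beta).subs(beta, 0))  # 0: and at order beta^1
```

The residual is a finite multiple of \(\beta^{2}\), confirming the first-order claim.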
Then, combining Eqs. (19) and (21), we obtain the function \(h\left(r,P\right)\): \[h\left(r,P\right) = -2\pi a-\frac{9}{4}\beta\pi ab^{2}+\frac{4\pi ab}{r}\log(\frac{3}{2}+\frac{r}{b})-\frac{\beta\pi ar^{2}}{3}+\frac{81}{8}\frac{\beta\pi ab^{5}}{(3b+2r)^{2}}+\frac{3}{8r}\frac{\pi ab^{2}\left(8-45\beta b^{2}\right)}{3b+2r} \tag{22}\] \[+\pi\left(\beta a-4P\right)br+8\pi\beta Pr^{3}\left(b+\frac{r}{2}\right),\] which is needed for the lapse function \[f\left(r\right) = \frac{8\pi Pr^{2}}{3}-\frac{2M}{r}+2\pi a+\frac{9}{4}\beta\pi ab^{2}-\frac{4\pi ab}{r}\log(\frac{3}{2}+\frac{r}{b})+\frac{\beta\pi ar^{2}}{3}-\frac{81}{8}\frac{\beta\pi ab^{5}}{(3b+2r)^{2}} \tag{23}\] \[- \frac{3}{8r}\frac{\pi ab^{2}\left(8-45\beta b^{2}\right)}{3b+2r}-\pi\left(\beta a-4P\right)br-8\pi\beta Pr^{3}\left(b+\frac{r}{2}\right).\] The present form of the lapse function describes the VdW black hole in the EUP formalism up to first order in \(\beta\). It is worth noting that for \(\beta=0\), it reduces to \[f_{HUP}(r)=2\pi a-\frac{2M}{r}+\frac{8\pi Pr^{2}}{3}\left(1+\frac{3b}{2r}\right)-\frac{4\pi ab}{r}\log(\frac{3}{2}+\frac{r}{b})-\frac{3\pi ab^{2}}{r\left(3b+2r\right)}, \tag{24}\] which is the same as Eq. (18) in [22]. Hereafter, the thermal quantities we obtain represent the thermodynamics of the EUP-corrected VdW black hole. Before deriving them, we depict the first-order EUP-corrected lapse function versus radius in Fig. 1. We observe that the EUP parameter modifies the lapse function by introducing a second turning point. In other words, in the ordinary case the lapse function increases monotonically, whereas in the EUP case it does not; therefore, the EUP modification changes the VdW black hole geometry and its thermodynamics. The new zeros of the lapse function correspond to additional coordinate singularities, and we will investigate their effect through the thermal quantities. Now, let us use Eqs. (11) and (12) to re-express the EUP-corrected VdW black hole mass \[M = -\frac{81}{16}\frac{a\beta b^{5}}{(3b+2r_{H})^{2}}\pi r_{H}+\frac{9\pi}{8}a\beta b^{2}r_{H}-\frac{3\pi ab^{2}\left(8-45\beta b^{2}\right)}{16(3b+2r_{H})}-\frac{\pi b}{2}r_{H}^{2}(a\beta-4P) \tag{25}\] \[-2\pi ab\log\left(\frac{r_{H}}{b}+\frac{3}{2}\right)+\frac{4\pi}{3}Pr_{H}^{3}+\pi ar_{H}+\frac{1}{6}\pi a\beta r_{H}^{3}-4\pi\beta Pr_{H}^{4}\left(b+\frac{r_{H}}{2}\right),\] and the EUP-corrected VdW black hole volume \[V=2\pi br_{H}^{2}+\frac{4\pi}{3}r_{H}^{3}-4\pi\beta r_{H}^{4}\left(b+\frac{r_{H}}{2}\right). \tag{26}\] We observe that the EUP correction term to the volume is negative. This means that in the presence of the EUP formalism, the volume is always less than its original form. This result is the opposite of the GUP case, where the authors showed that the correction always increases the volume [29]. No such general statement about an increase or decrease applies to the mass function. To demonstrate this, in Fig. 2 we plot the EUP-corrected mass function versus the event horizon radius for two different parameter choices, corresponding to the \(a>b\) and \(a<b\) cases, respectively. Figure 1: _The variation of the EUP-lapse function versus radius for \(M=1\) and \(P=0.1\)._ We see that the EUP-corrected mass is physically meaningful only up to an upper bound on the event horizon radius. However, such a constraint does not exist in the ordinary case. Furthermore, we observe that a greater deformation parameter shrinks the physical event horizon interval.
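To make these observations concrete, the following numerical sketch evaluates Eqs. (25) and (26); the parameter values (e.g. \(P=0.1\), mirroring the figures) are illustrative only.

```python
import numpy as np

pi = np.pi
P = 0.1                        # thermodynamic pressure, as in the figures
a, b, beta = 1.0, 0.5, 0.05    # illustrative VdW and EUP parameters

def mass(r):
    """EUP-corrected mass, Eq. (25)."""
    return (-(81/16)*a*beta*b**5*pi*r/(3*b + 2*r)**2 + (9*pi/8)*a*beta*b**2*r
            - 3*pi*a*b**2*(8 - 45*beta*b**2)/(16*(3*b + 2*r))
            - (pi*b/2)*r**2*(a*beta - 4*P) - 2*pi*a*b*np.log(r/b + 1.5)
            + (4*pi/3)*P*r**3 + pi*a*r + (pi/6)*a*beta*r**3
            - 4*pi*beta*P*r**4*(b + r/2))

def volume(r):
    """EUP-corrected volume, Eq. (26)."""
    return 2*pi*b*r**2 + (4*pi/3)*r**3 - 4*pi*beta*r**4*(b + r/2)

r = np.linspace(0.05, 6.0, 2000)
M, V = mass(r), volume(r)
V_hup = 2*pi*b*r**2 + (4*pi/3)*r**3   # beta = 0 volume
print("EUP volume below HUP volume everywhere:", np.all(V < V_hup))
# The mass turning over is indicative of the upper bound of the physical branch.
print("mass starts decreasing near r_H ~", r[np.argmax(M)])
```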
(22), we get the EUP-corrected Hawking temperature \[T = \frac{a}{2r_{H}}+2r_{H}P+\beta\frac{ar_{H}}{2}+\frac{ab}{(3b+2r_{H })}\left[\frac{3\beta b}{4}-\frac{2}{r_{H}}-2\beta r_{H}-\frac{81}{32}\frac{ \beta b^{4}}{r_{H}}\right] \tag{27}\] \[+ \frac{ab^{2}}{(3b+2r_{H})^{2}}\left[\frac{81}{16}\beta b^{3}+ \frac{3}{2}\beta r_{H}+\frac{3}{16r_{H}}\left(8-45b^{2}\beta\right)\right]\] \[+ \frac{9}{16}\frac{a\beta b^{2}}{r_{H}}-\frac{b}{2}(a\beta-4P)+ \frac{a\beta}{4}r_{H}-6\beta r_{H}^{2}P\left(b+\frac{r_{H}}{2}\right).\] In the \(\beta=0\) limit, it reduces to \[T_{HUP} = \frac{a}{2r_{H}}+2P(r_{H}+b)-\frac{2ab}{r_{H}(3b+2r_{H})}+\frac{3 ab^{2}}{2r_{H}(3b+2r_{H})^{2}}. \tag{28}\] We demonstrate the Hawking temperature behavior versus event horizon radii in Fig. 3.

Figure 3: The variation of the EUP-corrected Hawking temperature versus \(r_{H}\) for \(P=0.1\).

In the ordinary case, the Hawking temperature takes positive values for all event horizon radii, whereas this characteristic loses its validity when the EUP corrections are taken into account. In the new formalism, the Hawking temperature becomes physical in a certain range. At larger deformations, the width of this range decreases. Afterward, we derive the EUP-corrected entropy of the VdW black hole by employing Bekenstein's area law. According to the semi-classical formulation \[S=\int\frac{dM}{T}, \tag{29}\] we substitute Eqs. (27) and (25) into Eq. (29) and find \[S=\frac{\pi}{\beta}\log\left(1+\beta r_{H}^{2}\right). \tag{30}\] Then, we expand it in a power series of the deformation parameter and discard the higher-order terms of \(\mathcal{O}\left(\beta^{2}\right)\). This yields: \[S\simeq\pi r_{H}^{2}-\frac{1}{2}\pi\beta r_{H}^{4}. \tag{31}\] We conclude that the correction term of the EUP formalism slows down the rate of increase of the entropy function. This result was also obtained by the authors in the GUP formalism [29]. The heat capacity is an important thermal function for black hole thermodynamics because the thermal stability of a black hole is read off from the behavior of its heat capacity. A positive heat capacity indicates the black hole's stability, while a negative heat capacity indicates its instability, so the heat capacity gives us clues about a possible phase transition. We calculate the heat capacity by the following formula \[C=\frac{dM}{dT_{H}}, \tag{32}\] and obtain the EUP-corrected heat capacity \[C=4\pi\frac{4\pi r_{H}^{2}P-\frac{1}{2}h\left(r_{H},P\right)-\frac{r_{H}}{2} \frac{\partial}{\partial r_{H}}h\left(r_{H},P\right)}{\left(8\pi r_{H}P-\frac {1}{r_{H}}h\left(r_{H},P\right)-\frac{\partial}{\partial r_{H}}h\left(r_{H},P \right)\right)\left(1+\beta r_{H}^{2}\right)}. \tag{33}\] Here, Eq. (22) has to be used to get the explicit form of this functional heat capacity; however, the resulting expression is very complicated. Therefore, we decide to present it in its current form and analyze it with numerical methods. To this end, in Fig. 4, we display a graphical representation of the heat capacity for the \(a>b\) and \(b>a\) cases. We observe that EUP-corrected VdW black holes are unstable in the \(a>b\) case. However, we see stability and instability together in the \(b>a\) case.

Figure 4: The variation of the EUP-corrected heat capacity versus \(r_{H}\) for \(P=0.1\).
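Since several of the expressions above are long, they are easy to cross-check symbolically. The following sketch (ours, not part of the paper) verifies with sympy that the first-order expansion of the entropy (30) reproduces Eq. (31), and that the Hawking temperature (27) reduces to the HUP expression (28) in the \(\beta\to 0\) limit; all variable names are ours.

```python
import sympy as sp

a, b, P, beta, rH = sp.symbols('a b P beta r_H', positive=True)

# Eq. (30) expanded to first order in beta should give Eq. (31).
S = sp.pi / beta * sp.log(1 + beta * rH**2)
S1 = sp.series(S, beta, 0, 2).removeO()
assert sp.simplify(S1 - (sp.pi * rH**2 - sp.pi * beta * rH**4 / 2)) == 0

# Eq. (27) at beta = 0 should give the HUP temperature of Eq. (28).
T = (a/(2*rH) + 2*rH*P + beta*a*rH/2
     + a*b/(3*b + 2*rH) * (3*beta*b/4 - 2/rH - 2*beta*rH
                           - sp.Rational(81, 32)*beta*b**4/rH)
     + a*b**2/(3*b + 2*rH)**2 * (sp.Rational(81, 16)*beta*b**3
                                 + sp.Rational(3, 2)*beta*rH
                                 + 3*(8 - 45*b**2*beta)/(16*rH))
     + sp.Rational(9, 16)*a*beta*b**2/rH
     - b*(a*beta - 4*P)/2 + a*beta*rH/4
     - 6*beta*rH**2*P*(b + rH/2))
T_HUP = (a/(2*rH) + 2*P*(rH + b) - 2*a*b/(rH*(3*b + 2*rH))
         + 3*a*b**2/(2*rH*(3*b + 2*rH)**2))
assert sp.simplify(T.subs(beta, 0) - T_HUP) == 0
print("Eqs. (31) and (28) are consistent with Eqs. (30) and (27)")
```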
## 4 Limiting cases

In this section, we extend our investigation and explore two sub-scenarios.

### EUP-Schwarzschild AdS black hole

To start with the first one, we set \(b=0\) and consider \(a=1/2\pi\) for the sake of convenience. Such a choice of \(b\) does not violate the reverse isoperimetric inequality. In this scenario, the equation of state given in Eq. (7) takes the form \[T=\left(P+\frac{a}{v^{2}}\right)v. \tag{34}\] Then, the lapse function given in Eq. (23) reduces to \[f\left(r\right)=1-\frac{2M}{r}+\frac{8\pi}{3}r^{2}P+\frac{\beta r^{2}}{6} \left(1-24\pi r^{2}P\right). \tag{35}\] Considering that the last term, which is linearly proportional to \(\beta\), is the EUP-correction term, we can say that this lapse function is the EUP-corrected (A)dS Schwarzschild black hole lapse function. Using this lapse function, we express the EUP-corrected mass, volume, and Hawking temperature functions, respectively, as follows: \[M = \frac{r_{H}}{2}+\frac{4\pi}{3}Pr_{H}^{3}+\frac{\beta r_{H}^{3}}{12}\left(1-24\pi Pr_{H}^{2}\right), \tag{36}\] \[V = \frac{4\pi}{3}r_{H}^{3}-2\pi\beta r_{H}^{5}, \tag{37}\] \[T = \frac{1}{4\pi r_{H}}+\frac{3}{8\pi}\beta r_{H}+2r_{H}\left(1- \frac{3}{2}\beta r_{H}^{2}\right)P. \tag{38}\] In Fig. 5, we demonstrate the EUP-corrected Hawking temperature of the (A)dS Schwarzschild black hole. We observe that the EUP corrections lead to a finite physical range for the event horizon radius. At the highest value of the radius, the black hole has a finite remnant mass. Since such scenarios are widely investigated in the literature [8, 45, 46], we do not carry out a detailed discussion here.

Figure 5: The variation of the EUP-corrected Hawking temperature of the (A)dS Schwarzschild black hole versus \(r_{H}\) for \(P=0.1\).

### The ideal gas case

In the second scenario, we take \(a=0\) and \(b=0\), so that Eq. (7) turns into the ideal gas equation of state \[T=Pv. \tag{39}\] In this case, the EUP-corrected lapse function reads \[f\left(r\right)=\frac{8\pi}{3}r^{2}P-\frac{2M}{r}-4\pi\beta r^{4}P. \tag{40}\] Therefore, we can express the EUP-corrected mass, volume, and temperature functions of the ideal gas black hole in the (A)dS space as follows: \[M = \frac{4\pi}{3}Pr_{H}^{3}-2\pi\beta Pr_{H}^{5}, \tag{41}\] \[V = \frac{4\pi}{3}r_{H}^{3}-2\pi\beta r_{H}^{5}, \tag{42}\] \[T = 2r_{H}P\left(1-\frac{3}{2}\beta r_{H}^{2}\right). \tag{43}\] We find that the last terms of all the given functions correspond to the EUP corrections. For demonstration, we depict the Hawking temperature in Fig. 6. In the ordinary ideal gas black hole, we observe a linear increase in the Hawking temperature. However, within the EUP formalism, the correction terms reduce the Hawking temperature by a term proportional to the cube of the event horizon. Also, as with the other two black holes, the temperature can take values only in a range where the event horizon remains physically meaningful. When the event horizon reaches this physical limit, the black hole is left with a remnant mass.

Figure 6: The variation of the EUP-corrected Hawking temperature of the (A)dS ideal gas black hole versus \(r_{H}\) for \(P=0.1\).

## 5 Conclusion

The negative cosmological constant can be thought of as thermodynamic pressure in the context of extended phase space. This makes it possible to define a black hole equation of state and compare it with a fluid or gas equation of state. In the last decade, authors have derived asymptotically AdS black hole metrics whose thermal quantities precisely mimic those of a VdW fluid. Some other authors handled the VdW black hole thermodynamics with a quantum mechanical deformation, namely the generalized uncertainty principle, which predicts a minimal measurable length.
They concluded that the considered quantum deformation plays an important role in the thermodynamics of the VdW black hole. In this manuscript, we examine the VdW black hole within the EUP formalism, which allows a minimal momentum uncertainty value. Our results show that this kind of quantum deformation sets an upper limit on the event horizon, unlike what exists in the literature. Based on this result, the Hawking temperature and the black hole mass become physically meaningful within a certain event horizon radius range. The EUP formalism also affects the entropy by slowing down its rate of increase. Depending on the other parameters, the VdW black hole can be stable or unstable. Furthermore, our study also sheds light on other sub-scenarios, such as (A)dS Schwarzschild and ideal gas black holes, and their thermodynamics.

## Acknowledgments

This work is supported by the Ministry of Higher Education and Scientific Research, Algeria under the code: B00L02UN040120230003. B. C. Lutfuoglu is grateful to the PFF UHK Excellence project of 2211/2023-2024 for the financial support.

## Data Availability Statements

The authors declare that the data supporting the findings of this study are available within the article.

## Competing interests

The authors declare no competing interests.
2306.04534
On categorical structures arising from implicative algebras: from topology to assemblies
Implicative algebras have been recently introduced by Miquel in order to provide a unifying notion of model, encompassing the most relevant and used ones, such as realizability (both classical and intuitionistic), and forcing. In this work, we initially approach implicative algebras as a generalization of locales, and we extend several topological-like concepts to the realm of implicative algebras, accompanied by various concrete examples. Then, we shift our focus to viewing implicative algebras as a generalization of partial combinatory algebras. We abstract the notion of a category of assemblies, partition assemblies, and modest sets to arbitrary implicative algebras, and thoroughly investigate their categorical properties and interrelationships.
Samuele Maschio, Davide Trotta
2023-06-07T15:41:22Z
http://arxiv.org/abs/2306.04534v1
# On categorical structures arising from implicative algebras: from topology to assemblies

###### Abstract

Implicative algebras have been recently introduced by Miquel in order to provide a unifying notion of model, encompassing the most relevant and used ones, such as realizability (both classical and intuitionistic), and forcing. In this work, we initially approach implicative algebras as a generalization of locales, and we extend several topological-like concepts to the realm of implicative algebras, accompanied by various concrete examples. Then, we shift our focus to viewing implicative algebras as a generalization of partial combinatory algebras. We abstract the notion of a category of assemblies, partition assemblies, and modest sets to arbitrary implicative algebras, and thoroughly investigate their categorical properties and interrelationships.

###### Contents

* 1 Introduction * 2 Implicative algebras * 2.1 Definition * 2.2 Some examples of implicative algebras * 2.3 The encoding of \(\lambda\)-terms in an implicative algebra * 2.4 The calculus of an implicative algebra * 3 Topological notions in implicative algebras * 3.1 Disjoint families * 3.2 Supercompactness * 3.3 Indecomposability * 3.4 Supercompactness and indecomposability in particular classes of implicative algebras * 3.5 Supercompactness and indecomposability in complete Heyting algebras * 3.6 Supercompactness and indecomposability in different kinds of realizability * 3.7 Supercoherent implicative algebras * 3.8 Modest and core families * 4 Triposes from implicative algebras * 4.1 Supercompact predicates of implicative triposes * 5 Partitioned assemblies and assemblies for implicative triposes * 5.1 Some properties of the Grothendieck category of an implicative tripos * 5.2 Partitioned assemblies * 5.3 Assemblies * 5.4 Regular completion of implicative triposes * 5.4.1 The subcategory of trackable objects * 5.4.2 The subcategory of strongly trackable objects * 5.5 Category of (regular) projective strongly trackable objects * 5.6 A characterization of the categories of assemblies and regular completion * 5.7 Relation with another notion of assemblies * 5.8 Categories of implicative modest sets

## 1 Introduction

The notion of _implicative algebra_ has been recently introduced by Miquel [19] as a simple algebraic tool to encompass important model-theoretic constructions. These constructions include those underlying forcing and realizability, both in intuitionistic and classical logic. In a subsequent work [20], Miquel further reinforces the previously demonstrated outcome by showing that every "well-behaved Set-based semantics" can be presented as a specific instance of a model within the context of an implicative algebra. To reach this goal, implicative algebras were initially situated within the categorical setting of Set-based _triposes_ [13, 22], demonstrating that every implicative algebra induces a Set-based tripos [19, Thm. 4.4]. This result can be seen as a particular case of a more general result presented in [26, Sec. 5.3] based on the notion of _implicative ordered combinatory algebra_, since implicative algebras are a particular instance of such a notion. Subsequently, it was proven that every Set-based tripos is isomorphic to an implicative one [20, Thm. 1.1]. It is important to note that Miquel's results represent the culmination of a series of studies aimed at showing, from an abstract perspective, the essential common features between various realizability-like models and localic models, utilizing categorical tools.
In particular, Hofstra introduced in [11] the notion of _basic combinatory objects_ (BCOs) to encompass (ordered) PCAs and locales, and he provided a characterization of triposes arising as triposes for ordered PCAs (with filters). The non-ordered version of BCOs, known as _discrete combinatory objects_, was then introduced by Frey in [9] and employed to provide an "intrinsic" or "extensional" characterization of realizability toposes. The main objective of this work is to further explore the abstract perspective that aims to unify realizability-like interpretations and forcing-like (or localic) interpretations. This is achieved by focusing on two aspects: generalizing several topological notions from locale theory to implicative algebras, and employing these new notions to formally define a notion of _category of assemblies_ and _category of partitioned assemblies_ for implicative algebras. This generalization extends the existing cases of categories of partitioned assemblies and assemblies for a PCA [12, 9, 5]. In the first part of this work, we concentrate on generalizing standard notions such as _supercompact_, _indecomposable_ and _disjoint_ elements at the level of implicative algebras. We study these notions in several examples. One significant challenge when generalizing localic-like notions to implicative algebras is the need to consider a suitable form of _uniformity_. This is necessary because, unlike the localic case, the separator of an arbitrary implicative algebra may have multiple elements. This is the case, for example, for the separators of several implicative algebras arising from various forms of realizability. This fact makes, for example, the transition from a point-wise notion to its generalization via indexed sets non-trivial. This abstract framework lays the groundwork for the second part of this work, in which we extend various notions derived from realizability to implicative algebras. This extension enables us to offer a topological-like interpretation of these notions. After completing this initial study, which draws inspiration from the perspective of implicative algebras as generalizations of locales, we then proceed to examine implicative algebras in relation to the broader context of _partial combinatory algebras_ (PCAs). This leads us to explore the abstraction of concepts such as assemblies, partitioned assemblies, and modest sets within this framework. The problem of generalizing these notions from PCAs to arbitrary implicative algebras can be addressed from different perspectives: for example, since assemblies are pairs \((X,\varphi)\) where \(\varphi\colon X\to\mathscr{P}^{*}(R)\) is a function from \(X\) to the non-empty powerset of the PCA \(R\), one could try to generalize this notion by defining an assembly for an implicative algebra \(\mathbb{A}=(\mathcal{A},\leq,\to,\Sigma)\) as a pair \((X,\psi)\) where \(\psi\colon X\to\Sigma\) is a function from the set \(X\) to the separator \(\Sigma\) (since, for the implicative algebra associated with a PCA, we have that \(\Sigma:=\mathscr{P}^{*}(R)\)). This approach has been recently used in [7]. A second reasonable attempt, based on the fact that every realizability topos can be presented as the ex/reg-completion of the category \(\mathbf{Asm}(R)\) of assemblies of its PCA [24, 4, 29], could be that of defining the category of assemblies for an implicative algebra as the regular completion (in the sense of [15]) of the implicative tripos associated with the implicative algebra.
Again, this generalization would allow us to recognize the ordinary category of assemblies as a particular case, since every tripos-to-topos construction can be presented as the ex/reg-completion of the regular completion of the tripos. The first solution mainly depends on the "explicit" description of an assembly of a PCA, while the second one is based on the abstract properties of such a category. In this paper, we propose a different approach: instead of focusing on the explicit description of an assembly or on the universal property of the category of assemblies, we aim to identify the _logical properties_ that uniquely identify assemblies in realizability, and then define an arbitrary assembly of an implicative algebra as a pair \((X,\psi)\) where \(\psi\) is a predicate of the implicative tripos satisfying the logical properties we have identified. The inspiration for this kind of abstraction is the characterization of predicates determining _partitioned assemblies_ presented in [17] in terms of _full existential free elements_, and independently introduced in [8, 10] via the notion of \(\exists\)-_primes_: in these works, it has been proved that the functions \(\varphi\colon X\to R\) where \(R\) is a PCA, i.e. those used to define a partitioned assembly on \(X\), correspond exactly to the predicates \(\phi\colon X\to\mathscr{P}(R)\) of the realizability tripos of \(R\) satisfying the following property: **Property 1.1**.: _Whenever a sequent_ \[\phi(x)\vdash\exists y\in Y\left(f(y)=x\wedge\sigma(y)\right)\] _is satisfied (in the internal language of the realizability tripos), there exists a witness function \(g\) such that \(\phi(x)\vdash\sigma(g(x))\), and this property is preserved by substitutions._ Following this approach, we observe that the functions \(\varphi\colon X\to\mathscr{P}^{*}(R)\), i.e. those used to define an assembly on \(X\), correspond exactly to the predicates \(\phi\colon X\to\mathscr{P}(R)\) of the realizability tripos of \(R\) satisfying the following property: **Property 1.2**.: _Whenever a sequent_ \[\phi(x)\vdash\exists!z\in Z\,\sigma(x,z)\] _where \(\sigma\) is a functional predicate is satisfied (in the internal language of the realizability tripos), there exists a witness function \(g\) such that \(\phi(x)\vdash\sigma(x,g(x))\), and this property is preserved by substitutions._ The fact that these characterizations do not depend on any explicit description of an assembly or a partitioned assembly, but just on their _logical properties_, makes them easy to generalize to arbitrary triposes. Therefore, we define an assembly for an implicative algebra \(\mathbb{A}=(\mathcal{A},\leq,\to,\Sigma)\) as a pair \((X,\psi)\) where \(X\) is a set and \(\psi\colon X\to\mathcal{A}\) is a predicate of the implicative tripos associated with \(\mathbb{A}\) satisfying property (1.2) (in the internal language of the implicative tripos), while we will say that \(\psi\colon X\to\mathcal{A}\) is a partitioned assembly if it is a predicate of the implicative tripos associated with \(\mathbb{A}\) satisfying property (1.1). Based on our previous analysis, we show that these notions correspond exactly to the generalizations of the notions of indecomposable and supercompact elements, respectively, for implicative algebras. A second crucial insight we propose here regards the concept of _morphism of assemblies_ and its generalizations.
After introducing a notion of morphism of assemblies following the same idea used in realizability, where morphisms are defined as \(\mathsf{Set}\)-functions, we show that our notion of category of assemblies is equivalent to the subcategory of what we call _strongly trackable objects_ of the category of _functional relations_ associated with the implicative tripos (i.e. its regular completion in the sense of [15]), namely objects such that every functional relation having one of these objects as domain is _tracked_ by a unique \(\mathsf{Set}\)-based function. Then, we prove that the category of partitioned assemblies is exactly the subcategory of strongly trackable objects which are _regular projectives_ of the category of functional relations associated with the implicative tripos. Finally, we conclude by studying some basic categorical properties of these categories. In particular, it is worth recalling that in realizability the category of assemblies is regular, and it happens to be equivalent to the regular completion (in the sense of [4]) of its full subcategory of partitioned assemblies. However, this connection between assemblies and partitioned assemblies does not hold in general for the case of an arbitrary implicative algebra. In general, for an arbitrary implicative algebra, the category of assemblies need not be regular, and the category of partitioned assemblies need not have finite limits. Taking inspiration again from [17], we present necessary and sufficient conditions allowing us to understand when a category of assemblies for an implicative algebra is regular and is the regular completion of the category of partitioned assemblies.

## 2 Implicative algebras

In this section we recall the definition of implicative algebras introduced in [19].

### Definition

**Definition 2.1** (implicative structure).: An **implicative structure** \(\mathbb{A}=(\mathcal{A},\leq,\rightarrow)\) is a complete lattice \((\mathcal{A},\leq)\) equipped with a binary operation \((a,b)\mapsto(a\to b)\) called **implication** of \(\mathcal{A}\) satisfying the following two axioms: * if \(a^{\prime}\leq a\) and \(b\leq b^{\prime}\) then \(a\to b\leq a^{\prime}\to b^{\prime}\); * \(a\rightarrow\bigwedge_{b\in B}b=\bigwedge_{b\in B}(a\to b)\), for every \(a\in\mathcal{A}\) and every subset \(B\subseteq\mathcal{A}\).

**Definition 2.2** (separator).: Let \(\mathbb{A}=(\mathcal{A},\leq,\rightarrow)\) be an implicative structure. A **separator** is a subset \(\Sigma\subseteq\mathcal{A}\) satisfying the following conditions for every \(a,b\in\mathcal{A}\): * if \(a\in\Sigma\) and \(a\leq b\) then \(b\in\Sigma\); * \(\mathbf{k}^{\mathbb{A}}:=\bigwedge_{a,b\in\mathcal{A}}(a\to b\to a)\) is an element of \(\Sigma\); * \(\mathbf{s}^{\mathbb{A}}:=\bigwedge_{a,b,c\in\mathcal{A}}((a\to b\to c)\rightarrow(a\to b)\to a\to c)\) is an element of \(\Sigma\); * if \((a\to b)\in\Sigma\) and \(a\in\Sigma\) then \(b\in\Sigma\).

The intuition is that a separator \(\Sigma\subseteq\mathcal{A}\) determines a particular "criterion of truth" within the implicative structure \((\mathcal{A},\leq,\rightarrow)\), generalizing the notion of filters for Heyting algebras.

**Definition 2.3** (implicative algebra).: We call an **implicative algebra** an implicative structure \((\mathcal{A},\leq,\rightarrow)\) equipped with a separator \(\Sigma\subseteq\mathcal{A}\). In such a case the implicative algebra will be denoted as \((\mathcal{A},\leq,\rightarrow,\Sigma)\).
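As a quick illustration of Definition 2.1, the following brute-force sketch (ours, not from the paper) checks the two axioms of an implicative structure for the Heyting implication on the small powerset lattice \(\mathcal{P}(\{0,1\})\), anticipating the first example of the next subsection.

```python
from itertools import chain, combinations

U = frozenset({0, 1})
L = [frozenset(c) for c in
     chain.from_iterable(combinations(sorted(U), n) for n in range(len(U) + 1))]

def imp(a, b):
    """Heyting implication in a powerset algebra: a -> b = (U \\ a) | b."""
    return (U - a) | b

def meet(xs):
    """Arbitrary infimum = intersection, with the empty meet equal to U (= top)."""
    out = U
    for x in xs:
        out &= x
    return out

# Axiom 1 (variance): a' <= a and b <= b' imply (a -> b) <= (a' -> b').
assert all(imp(a, b) <= imp(a2, b2)
           for a in L for a2 in L for b in L for b2 in L
           if a2 <= a and b <= b2)

# Axiom 2 (meets): a -> meet(B) equals the meet of { a -> b : b in B }.
for a in L:
    for B in chain.from_iterable(combinations(L, n) for n in range(len(L) + 1)):
        assert imp(a, meet(B)) == meet([imp(a, b) for b in B])

print("both axioms of Definition 2.1 hold on this lattice")
```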
### Some examples of implicative algebras

_Complete Heyting algebras_ If \(\mathbb{H}=(H,\leq)\) is a complete Heyting algebra with Heyting implication \(\rightarrow\), we can see it as an implicative algebra \((H,\leq,\rightarrow,\{\top\})\) where \(\top\) is the maximum of \(\mathbb{H}\). _Realizability_ If \(\mathcal{R}=(R,\cdot)\) is a (total) combinatory algebra (CA) (see e.g. [29]), then we can define an implicative algebra from it by considering the 4-tuple \((\mathcal{P}(R),\subseteq,\Rightarrow,\mathcal{P}(R)\setminus\{\emptyset\})\) where \(A\Rightarrow B:=\{r\in R|\,r\cdot a\in B\text{ for all }a\in A\}\) for every \(A,B\subseteq R\). _Nested realizability_ The nested realizability tripos is considered in [2, 18] in order to study some aspects of modified realizability and relative realizability (see [28, 29]). We consider here only the total case in order to keep the notation light; we will do the same in the next examples. Let \(\mathcal{R}=(R,\cdot)\) be a combinatory algebra and let \(\mathcal{R}_{\#}=(R_{\#},\cdot_{\#})\) be one of its sub-combinatory algebras, that is \(R_{\#}\subseteq R\), \(a\cdot_{\#}b=a\cdot b\) for every \(a,b\in R_{\#}\) and \(\mathbf{k}\), \(\mathbf{s}\) in \(\mathcal{R}\) can be chosen to be elements of \(R_{\#}\). We can define an implicative algebra \(\mathbb{A}_{\mathcal{R},\mathcal{R}_{\#}}^{n}:=(P_{\mathcal{R},\mathcal{R}_{ \#}},\subseteq_{n},\Rightarrow_{n},\Sigma_{\mathcal{R},\mathcal{R}_{\#}})\) as follows: 1. \(P_{\mathcal{R},\mathcal{R}_{\#}}:=\{(X_{a},X_{p})\in\mathcal{P}(R_{\#})\times \mathcal{P}(R)|\,X_{a}\subseteq X_{p}\}\) 2. \((X_{a},X_{p})\subseteq_{n}(Y_{a},Y_{p})\) if and only if \(X_{a}\subseteq Y_{a}\) and \(X_{p}\subseteq Y_{p}\); 3. \((X_{a},X_{p})\Rightarrow_{n}(Y_{a},Y_{p}):=((X_{a}\Rightarrow_{\#}Y_{a})\cap(X _{p}\Rightarrow Y_{p}),X_{p}\Rightarrow Y_{p})\) 4. \((X_{a},X_{p})\in\Sigma_{\mathcal{R},\mathcal{R}_{\#}}\) if and only if \(X_{a}\neq\emptyset\). _Modified realizability_ Let \(\mathcal{R}=(R,\cdot)\) be a combinatory algebra and let \(\mathcal{R}_{\#}=(R_{\#},\cdot_{\#})\) be one of its sub-combinatory algebras and assume there exists \(\star\in R_{\#}\) such that \(\star\cdot x=\star\) for every \(x\in R\) and \(\mathbf{p}\cdot\star\cdot\star=\star\) for \(\mathbf{p}\) the pairing combinator defined from fixed \(\mathbf{k},\mathbf{s}\in R_{\#}\). We can define an implicative algebra as \[\mathbb{A}^{m}_{\mathcal{R},\mathcal{R}_{\#},\star}:=(P^{m}_{\mathcal{R}, \mathcal{R}_{\#},\star},\subseteq_{n},\Rightarrow_{n},\Sigma_{\mathcal{R}, \mathcal{R}_{\#}}\cap P^{m}_{\mathcal{R},\mathcal{R}_{\#},\star})\] where \[P^{m}_{\mathcal{R},\mathcal{R}_{\#},\star}:=\{(X_{a},X_{p})\in P_{\mathcal{R}, \mathcal{R}_{\#}}|\star\in X_{p}\}\] _Relative realizability_ Let \(\mathcal{R}=(R,\cdot)\) be a combinatory algebra and let \(\mathcal{R}_{\#}=(R_{\#},\cdot_{\#})\) be one of its sub-combinatory algebras. We define the relative realizability implicative algebra as follows: \[\mathbb{A}^{r}_{\mathcal{R},\mathcal{R}_{\#}}:=(\mathcal{P}(R),\subseteq, \Rightarrow,\Sigma^{r}_{\mathcal{R},\mathcal{R}_{\#}})\] where \[\Sigma^{r}_{\mathcal{R},\mathcal{R}_{\#}}:=\{X\in\mathcal{P}(R)|\,X\cap R_{ \#}\neq\emptyset\}\]
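To illustrate the shape of the realizability implication \(A\Rightarrow B\) above, here is a small sketch (ours); the application table is an arbitrary toy operation rather than a genuine combinatory algebra, so it only demonstrates how the definition is computed.

```python
# Toy applicative structure on R = {0, 1, 2, 3}; the application table is
# hypothetical, chosen only to show how A => B is computed (a genuine
# combinatory algebra would additionally need elements behaving as k and s).
R = {0, 1, 2, 3}
app = {(r, a): (r * a + r) % 4 for r in R for a in R}

def implies(A, B):
    """A => B := { r in R | r . a in B for every a in A }."""
    return {r for r in R if all(app[(r, a)] in B for a in A)}

print(implies({1, 2}, {0, 2}))   # -> {0, 2} with this table
print(implies(set(), {3}))       # the empty antecedent gives all of R
```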
_Partial cases_ Notice that one can also consider the previous cases in which the binary operations of the combinatory algebras involved are _partial_. In this case we do not obtain implicative algebras, but _quasi-implicative algebras_. However, by considering a notion of completion, which can be found in [19], one can obtain implicative algebras from them. The choice of presenting just _total_ PCAs instead of the more traditional and general notion is motivated by the crucial result of Miquel, i.e. the fact that every quasi-implicative tripos associated with a (partial) PCA is isomorphic to an implicative one. We refer to [19, Sec. 4] for all details. _Classical realizability_ Let \(\mathcal{K}=(\Lambda,\Pi,\,@,\cdot,\mathbf{k}_{-},\mathbf{K},\mathbf{S}, \mathbf{cc},\mathsf{PL},\perp)\) be an abstract Krivine structure (see [27, 19]). One can define an implicative algebra \((\mathcal{P}(\Pi),\supseteq,\rightarrow,\Sigma)\) where \(X\to Y:=\{t\cdot\pi|\,t\in X^{\perp},\pi\in Y\}\), where \(X^{\perp}:=\{t\in\Lambda|\,t\perp\pi\text{ for every }\pi\in X\}\) and \(\Sigma=\{X\in\mathcal{P}(\Pi)|\,X^{\perp}\cap\mathsf{PL}\neq\emptyset\}\).

### The encoding of \(\lambda\)-terms in an implicative algebra

In any implicative algebra \(\mathbb{A}=(\mathcal{A},\leq,\rightarrow,\Sigma)\) one can define a binary application as follows for every \(a,b\) in \(\mathcal{A}\) \[a\cdot b:=\bigwedge\{x\in\mathcal{A}|\,a\leq b\to x\}.\] Using this, one can encode closed \(\lambda\)-terms with parameters in \(\mathcal{A}\) as follows: 1. \(a^{\mathbb{A}}:=a\) for every \(a\in\mathcal{A}\); 2. \((ts)^{\mathbb{A}}:=t^{\mathbb{A}}\cdot s^{\mathbb{A}}\); 3. \((\lambda x.t)^{\mathbb{A}}:=\bigwedge_{a\in\mathcal{A}}(a\rightarrow(t[a/x])^{ \mathbb{A}})\). A nice result is that if \(t\) \(\beta\)-reduces to \(s\), then \(t^{\mathbb{A}}\leq s^{\mathbb{A}}\). Moreover, if \(t\) is a pure \(\lambda\)-term with free variables \(x_{1},...,x_{n}\) and \(a_{1},...,a_{n}\in\Sigma\), then \((t[a_{1}/x_{1},...,a_{n}/x_{n}])^{\mathbb{A}}\in\Sigma\). Finally, we can notice that \(\mathbf{k}^{\mathbb{A}}\) and \(\mathbf{s}^{\mathbb{A}}\) are exactly the interpretations of the \(\lambda\)-terms \(\mathbf{k}:=\lambda x.\lambda y.x\) and \(\mathbf{s}:=\lambda x.\lambda y.\lambda z.xz(yz)\), as shown in [19, Prop. 2.24].
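For the application just defined one can check that, on a complete Heyting algebra, it collapses to the meet: \(a\leq b\to x\) holds if and only if \(a\wedge b\leq x\), hence \(a\cdot b=a\wedge b\). The following brute-force sketch (ours, reusing the small lattice of the earlier sketch) confirms this on the powerset lattice \(\mathcal{P}(\{0,1\})\).

```python
from itertools import chain, combinations

U = frozenset({0, 1})
L = [frozenset(c) for c in
     chain.from_iterable(combinations(sorted(U), n) for n in range(len(U) + 1))]

def imp(a, b):
    """Heyting implication in a powerset algebra."""
    return (U - a) | b

def app(a, b):
    """a . b := meet of { x in L | a <= b -> x }."""
    out = U
    for x in L:
        if a <= imp(b, x):
            out &= x
    return out

assert all(app(a, b) == a & b for a in L for b in L)
print("on this Heyting algebra the application a . b equals a ∧ b")
```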
### The calculus of an implicative algebra

In every implicative algebra \(\mathbb{A}=(\mathcal{A},\leq,\rightarrow,\Sigma)\) we can define first-order logical operators in such a way that we obtain a very useful calculus. In particular, if \(a,b\) are in \(\mathbb{A}\) and \((c_{i})_{i\in I}\) is a family of elements of \(\mathbb{A}\), we can define \[a\times b:=\bigwedge_{x\in\mathcal{A}}\left((a\rightarrow(b\to x)) \to x\right)\qquad a+b:=\bigwedge_{x\in\mathcal{A}}\left((a\to x) \rightarrow((b\to x)\to x)\right)\] \[\mathop{\overrightarrow{\exists}}_{i\in I}c_{i}:=\bigwedge_{x\in\mathcal{A}} \left(\bigwedge_{i\in I}(c_{i}\to x)\to x\right)\] As shown in [19], the following rules hold:

\[\frac{(x:a)\in\Gamma}{\Gamma\vdash x:a}\qquad\frac{\Gamma\vdash t:a\quad a\leq b}{\Gamma\vdash t:b}\qquad\frac{\Gamma^{\prime}\leq\Gamma\quad\Gamma\vdash t:a}{\Gamma^{\prime}\vdash t:a}\qquad\frac{\mathsf{Free}(t)\subseteq Var(\Gamma)}{\Gamma\vdash t:\top}\qquad\frac{\Gamma\vdash t:\bot}{\Gamma\vdash t:a}\]
\[\frac{\Gamma,x:a\vdash t:b}{\Gamma\vdash\lambda x.t:a\to b}\qquad\frac{\Gamma\vdash t:a\to b\quad\Gamma\vdash s:a}{\Gamma\vdash ts:b}\qquad\frac{\Gamma\vdash t:a\quad\Gamma\vdash u:b}{\Gamma\vdash\lambda z.ztu:a\times b}\qquad\frac{\Gamma\vdash t:a\times b}{\Gamma\vdash t(\lambda x.\lambda y.x):a}\qquad\frac{\Gamma\vdash t:a\times b}{\Gamma\vdash t(\lambda x.\lambda y.y):b}\]
\[\frac{\Gamma\vdash t:a}{\Gamma\vdash\lambda z.\lambda w.zt:a+b}\qquad\frac{\Gamma\vdash t:a+b\quad\Gamma,x:a\vdash u:c\quad\Gamma,y:b\vdash v:c}{\Gamma\vdash t(\lambda x.u)(\lambda y.v):c}\qquad\frac{\Gamma\vdash t:a_{i}\ (\text{for all }i\in I)}{\Gamma\vdash t:\bigwedge_{i\in I}a_{i}}\qquad\frac{\Gamma\vdash t:\bigwedge_{i\in I}a_{i}\quad\overline{i}\in I}{\Gamma\vdash t:a_{\overline{i}}}\]
\[\frac{\Gamma\vdash t:a_{\overline{i}}\quad\overline{i}\in I}{\Gamma\vdash\lambda z.zt:\mathop{\overrightarrow{\exists}}_{i\in I}a_{i}}\qquad\frac{\Gamma\vdash t:\mathop{\overrightarrow{\exists}}_{i\in I}a_{i}\quad\Gamma,x:a_{i}\vdash u:c\ (\text{for all }i\in I)}{\Gamma\vdash t(\lambda x.u):c}\]

where every sequent \(\Gamma\vdash t:a\) consists of a list of variable declarations \(\Gamma:=x_{1}:a_{1},...,x_{n}:a_{n}\), where \(x_{1},...,x_{n}\) are distinct variables and \(a_{1},...,a_{n}\in\mathcal{A}\), a \(\lambda\)-term \(t\) containing as free variables at most those in \(\Gamma\), and an element \(a\in\mathcal{A}\). The meaning of such a sequent is that \((t[\Gamma])^{\mathbb{A}}\leq a\), where \(t[\Gamma]\) denotes the term obtained from \(t\) by performing the substitution indicated by \(\Gamma\). The calculus above is very useful since, using the remarks from the previous subsection, if we deduce that \(x_{1}:a_{1},...,x_{n}:a_{n}\vdash t:b\), where \(t\) is a pure \(\lambda\)-term and \(a_{1},...,a_{n}\in\Sigma\), then \(b\) is in \(\Sigma\) too. We now state two propositions which can be easily proved by using the calculus above.

**Proposition 2.4**.: _If \(\mathcal{F}\) is the class of set-indexed families of elements of an implicative algebra \(\mathbb{A}\), then_ \[\bigwedge_{(b_{i})_{i\in I}\in\mathcal{F}}\left(\mathop{\overrightarrow{ \exists}}_{i\in I}b_{i}\rightarrow\bigvee_{i\in I}b_{i}\right)\in\Sigma.\]

Before stating the next proposition, we need to introduce two notions of equality evaluated in an implicative algebra. The first, which is the right one to use, is equivalent to that presented in [19] under the name \(\mathbf{id}\).
It is defined as follows for every set \(J\) and every \(j,j^{\prime}\in J\): \[\delta_{J}(j,j^{\prime}):=\mathop{\overrightarrow{\exists}}_{i\in\{\star\,|\,j=j^{\prime}\}}\top\] so that \(\delta_{J}(j,j^{\prime})=\bigwedge_{c\in\mathcal{A}}((\top\to c)\to c)\) when \(j=j^{\prime}\), and \(\delta_{J}(j,j^{\prime})=\top\to\bot\) otherwise. The second is the naive, point-wise one: \[d_{J}(j,j^{\prime}):=\begin{cases}\top&\text{if }j=j^{\prime}\\ \bot&\text{if }j\neq j^{\prime}.\end{cases}\]

**Proposition 2.5**.: _For every set \(J\),_ \[\bigwedge_{j,j^{\prime}\in J}(\delta_{J}(j,j^{\prime})\to d_{J}(j,j^{\prime}))\in\Sigma.\]

## 3 Topological notions in implicative algebras

In this section we generalize some standard topological notions from locale theory to the setting of implicative algebras. Throughout, \(\mathbb{A}=(\mathcal{A},\leq,\rightarrow,\Sigma)\) denotes an implicative algebra.

### Disjoint families

Recall that a family \((a_{i})_{i\in I}\) of elements of a Heyting algebra is pairwise disjoint if \(a_{i}\wedge a_{i^{\prime}}=\bot\) for every \(i,i^{\prime}\in I\) with \(i\neq i^{\prime}\). This notion admits the following two generalizations at the level of implicative algebras.

**Definition 3.1**.: A family \((a_{i})_{i\in I}\) of elements of \(\mathbb{A}\) is
1. \(\wedge\)**-disjoint** if \(\bigwedge_{i,i^{\prime}\in I}(a_{i}\wedge a_{i^{\prime}}\to\delta_{I}(i,i^{\prime}))\in\Sigma\);
2. \(\times\)**-disjoint** if \(\bigwedge_{i,i^{\prime}\in I}(a_{i}\times a_{i^{\prime}}\to\delta_{I}(i,i^{\prime}))\in\Sigma\).

**Proposition 3.2**.: _Every \(\times\)-disjoint family of elements of \(\mathbb{A}\) is \(\wedge\)-disjoint._

Proof.: This follows from the fact that \(\bigwedge_{a,b\in\mathcal{A}}(a\wedge b\to a\times b)\in\Sigma\).

**Example 3.3**.: In any complete Heyting algebra the two notions presented in Definition 3.1 clearly coincide, because \(a\wedge b=a\times b\), and they coincide with the notion of pairwise disjoint family of elements of a Heyting algebra above.
**Example 3.4**.: In the case of the implicative algebra associated with a combinatory algebra \((R,\cdot)\), one can easily check that \(\wedge\)-disjoint families are families \((A_{i})_{i\in I}\) such that \(A_{i}\cap A_{j}=\emptyset\) for every \(i,j\in I\) with \(i\neq j\), while \(\times\)-disjoint families are families \((A_{i})_{i\in I}\) such that at most one of the \(A_{i}\)'s is non-empty. The same holds for relative realizability implicative algebras.

**Example 3.5**.: In the nested realizability implicative algebras a family \((A_{i},B_{i})_{i\in I}\) is \(\wedge\)-disjoint if and only if \(B_{i}\cap B_{j}=\emptyset\) for every \(i,j\in I\) with \(i\neq j\), while it is \(\times\)-disjoint if and only if at most one of the \(B_{i}\)'s is non-empty.

**Example 3.6**.: In modified realizability implicative algebras, \(\times\)-disjoint families are families \((A_{i},B_{i})_{i\in I}\) in which at most one of the \(A_{i}\)'s is non-empty, while \(\wedge\)-disjoint families are families \((A_{i},B_{i})_{i\in I}\) in which \(A_{i}\cap A_{j}=\emptyset\) for all \(i\neq j\) in \(I\).

We also introduce the following notion of \(\times\)-functional family, which will be useful later.

**Definition 3.7**.: A two-indexed family \((b^{i}_{j})_{i\in I,j\in J}\) of elements of \(\mathbb{A}\) is \(\times\)**-functional** if \[\bigwedge_{i\in I}\bigwedge_{j,j^{\prime}\in J}(b^{i}_{j}\times b^{i}_{j^{ \prime}}\to\delta_{J}(j,j^{\prime}))\in\Sigma\]

**Remark 3.8**.: Notice that if \((b^{i}_{j})_{i\in I,j\in J}\) is \(\times\)-functional, then for every \(i\in I\) the family \((b^{i}_{j})_{j\in J}\) is \(\times\)-disjoint.

### Supercompactness

The second notion we aim to abstract in the setting of implicative algebras is that of a _supercompact element_ [1, 21]. Recall that in the case of a complete Heyting algebra, an element \(a\) is said to be _supercompact_ if \[a\leq\bigvee_{i\in I}b_{i}\] implies the existence of an \(\overline{i}\in I\) such that \(a\leq b_{\overline{i}}\), for every set-indexed family \((b_{i})_{i\in I}\) of elements. Taking inspiration from this notion, we introduce the following generalization:

**Definition 3.9** (supercompact element).: An element \(a\in\mathcal{A}\) is **supercompact** in \(\mathbb{A}\) if for every set-indexed family \((b_{i})_{i\in I}\) of elements of \(\mathcal{A}\) with \[a\to\mathop{\overrightarrow{\exists}}_{i\in I}b_{i}\in\Sigma\] there exists \(\overline{i}\in I\) such that \(a\to b_{\overline{i}}\in\Sigma\).

**Remark 3.10**.: Notice that the minimum \(\bot\) can never be supercompact in \(\mathbb{A}\). Indeed, if we consider an empty family we always have \(\mathbf{k}^{\mathbb{A}}\leq\bot\to(\top\to\bot)=\bot\to\mathop{\overrightarrow{\exists}}_{i\in\emptyset}b_{i}\), from which it follows that \(\bot\to\mathop{\overrightarrow{\exists}}_{i\in\emptyset}b_{i}\in\Sigma\).
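To see Definition 3.9 at work in the simplest localic case, the sketch below (ours) brute-forces supercompactness in the implicative algebra of the powerset Heyting algebra \(\mathcal{P}(\{0,1,2\})\) with \(\Sigma=\{\top\}\): there \(a\to\mathop{\overrightarrow{\exists}}_{i}b_{i}\in\Sigma\) amounts to \(a\subseteq\bigcup_{i}b_{i}\) (see Remark 3.31 below), and the supercompact elements turn out to be exactly the singletons.

```python
from itertools import chain, combinations

U = frozenset({0, 1, 2})
L = [frozenset(c) for c in
     chain.from_iterable(combinations(sorted(U), n) for n in range(len(U) + 1))]

def union_all(xs):
    out = frozenset()
    for x in xs:
        out |= x
    return out

def supercompact(a):
    if not a:           # the bottom element is never supercompact (Remark 3.10)
        return False
    # Enumerate every family of lattice elements (enough here, since the
    # lattice is finite) and check the covering property of Definition 3.9.
    families = chain.from_iterable(combinations(L, n) for n in range(len(L) + 1))
    return all(any(a <= b for b in fam)
               for fam in families if a <= union_all(fam))

print([set(a) for a in L if supercompact(a)])   # -> [{0}, {1}, {2}]
```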
The implicative algebra of realizability in the total case satisfies this property thanks to Proposition 2.4 and the fact that the union of a family of sets is non-empty if and only if at least one of them is non-empty. The same holds for relative realizability, nested realizability and modified realizability. **Remark 3.12**.: One can easily notice that if \(a\equiv_{\Sigma}b\) and \(a\) is supercompact, then \(b\) is supercompact too. If we want to generalize the notion of supercompact element to a notion of supercompact family of elements of \(\mathbb{A}\) we have three natural ways: **Definition 3.13**.: A family \((a_{i})_{i\in I}\) of elements of \(\mathbb{A}\) is 1. **componentwise supercompact (cSK)** if \(a_{i}\) is supercompact for every \(i\in I\); 2. **supercompact (SK)** if for every family of families \(((b^{i}_{j})_{j\in J_{i}})_{i\in I}\) of elements of \(\mathcal{A}\) whenever \[\bigwedge_{i\in I}(a_{i}\to\underset{j\in J_{i}}{\overrightarrow{\exists}}b^{ i}_{j})\in\Sigma\] there exists \(f\in(\Pi i\in I)J_{i}\) such that \[\bigwedge_{i\in I}(a_{i}\to b^{i}_{f(i)})\in\Sigma\] 3. **uniformly supercompact (U-SK)** if \((a_{f(k)})_{k\in K}\) is **SK** for every function \(f:K\to I\). **Remark 3.14**.: One can observe that a family \((a_{i})_{i\in\{\star\}}\) is **SK** if and only if \(a_{\star}\) is a supercompact element. **Proposition 3.15**.: _The following are equivalent for a family \((a_{i})_{i\in I}\) of elements of \(\mathbb{A}\):_ 1. \((a_{i})_{i\in I}\) _is_ \(\mathbf{U}\)_-_SK_;_ 2. _for every family of families of families_ \(((b^{k}_{j})_{j\in J_{K}})_{k\in K_{i}})_{i\in I}\) _of elements of_ \(\mathcal{A}\) _with_ \((K_{i})_{i\in I}\) _a family of pairwise disjoint sets, such that_ \[\bigwedge_{i\in I}(a_{i}\to\bigwedge_{k\in K_{i}}\underset{j\in J_{k}}{ \overrightarrow{\exists}}b^{k}_{j})\in\Sigma\] _there exists a function_ \(g\in(\Pi k\in\bigcup_{i\in I}K_{i})J_{k}\) _such that_ \[\bigwedge_{i\in I}(a_{i}\to\bigwedge_{k\in K_{i}}b^{k}_{g(k)})\in\Sigma\] Proof.: Let \((a_{i})_{i\in I}\) satisfy 2. and \(f:K\to I\) be a function. 
Assume that \[\bigwedge_{k\in K}\left(a_{f(k)}\to\mathop{\overrightarrow{\exists}}_{j\in J_{k}}b^{k}_{j}\right)\in\Sigma\] for some family of families \(((b^{k}_{j})_{j\in J_{k}})_{k\in K}\) of elements of \(\mathcal{A}\). For every \(i\in I\), let \(K_{i}:=f^{-1}(\{i\})\); the sets \(K_{i}\) are pairwise disjoint and, since \(a_{f(k)}=a_{i}\) for every \(k\in K_{i}\), we have \[\bigwedge_{k\in K}\left(a_{f(k)}\to\mathop{\overrightarrow{\exists}}_{j\in J_{k}}b^{k}_{j}\right)=\bigwedge_{i\in I}\left(a_{i}\to\bigwedge_{k\in K_{i}}\mathop{\overrightarrow{\exists}}_{j\in J_{k}}b^{k}_{j}\right).\] By 2. there exists a function \(g\in(\Pi k\in K)J_{k}\) such that \(\bigwedge_{i\in I}(a_{i}\to\bigwedge_{k\in K_{i}}b^{k}_{g(k)})\in\Sigma\), and this element coincides with \(\bigwedge_{k\in K}(a_{f(k)}\to b^{k}_{g(k)})\). Hence \((a_{f(k)})_{k\in K}\) is **SK**, and \((a_{i})_{i\in I}\) is \(\mathbf{U}\)-**SK**. Conversely, assume that \((a_{i})_{i\in I}\) is \(\mathbf{U}\)-**SK** and let \((((b^{k}_{j})_{j\in J_{k}})_{k\in K_{i}})_{i\in I}\) be as in 2., with the sets \(K_{i}\) pairwise disjoint. Consider the function \(f:\bigcup_{i\in I}K_{i}\to I\) sending each \(k\in K_{i}\) to \(i\). Then \[\bigwedge_{i\in I}\left(a_{i}\to\bigwedge_{k\in K_{i}}\mathop{\overrightarrow{\exists}}_{j\in J_{k}}b^{k}_{j}\right)=\bigwedge_{k\in\bigcup_{i\in I}K_{i}}\left(a_{f(k)}\to\mathop{\overrightarrow{\exists}}_{j\in J_{k}}b^{k}_{j}\right)\in\Sigma,\] and since \((a_{f(k)})_{k\in\bigcup_{i\in I}K_{i}}\) is **SK** there exists \(g\in(\Pi k\in\bigcup_{i\in I}K_{i})J_{k}\) such that \(\bigwedge_{k\in\bigcup_{i\in I}K_{i}}(a_{f(k)}\to b^{k}_{g(k)})\in\Sigma\), which coincides with \(\bigwedge_{i\in I}(a_{i}\to\bigwedge_{k\in K_{i}}b^{k}_{g(k)})\in\Sigma\).

### Indecomposability

The third notion we aim to generalize at the level of implicative algebras is that of an _indecomposable element_. Recall that, in the case of a complete Heyting algebra, an element \(a\) is said to be indecomposable if \(a\leq\bigvee_{i\in I}b_{i}\) implies the existence
of an \(\overline{i}\in I\) such that \(a\leq b_{\overline{i}}\), for every set-indexed family \((b_{i})_{i\in I}\) of pairwise _disjoint_ elements.

**Definition 3.17**.: An element \(a\) of \(\mathbb{A}\) is **indecomposable** if for every \(\times\)-disjoint family \((b_{i})_{i\in I}\) of elements of \(\mathbb{A}\), whenever \(a\to\mathop{\overrightarrow{\exists}}_{i\in I}b_{i}\in\Sigma\), there exists \(\overline{i}\in I\) such that \(a\to b_{\overline{i}}\in\Sigma\).

From the definition, it easily follows that

**Proposition 3.18**.: _Every supercompact element of \(\mathbb{A}\) is also indecomposable._

**Remark 3.19**.: Similarly to what happens with supercompactness, if \(a\equiv_{\Sigma}b\) and \(a\) is indecomposable, then \(b\) is indecomposable. Moreover, \(\bot\) can never be indecomposable.

**Remark 3.20**.: Notice that if \((b_{i})_{i\in I}\) is a \(\times\)-disjoint family, \(a\) is indecomposable, \(a\to b_{i_{1}}\in\Sigma\) and \(a\to b_{i_{2}}\in\Sigma\), then \(a\to b_{i_{1}}\times b_{i_{2}}\in\Sigma\), from which it follows that \(a\to\delta_{I}(i_{1},i_{2})\in\Sigma\). If \(i_{1}\neq i_{2}\), then this means that \(a\to(\top\to\bot)\in\Sigma\), from which it follows that \(a\equiv_{\Sigma}\bot\). But we know that this cannot happen. So \(i_{1}=i_{2}\). This means that the element \(\overline{i}\) in the definition of indecomposable element is in fact unique.

In order to generalize the notion of indecomposability to families we consider the following five notions:

**Definition 3.21**.: Let \(I\) be a set. A family \((a_{i})_{i\in I}\) of elements of \(\mathbb{A}\) is:
1. **componentwise indecomposable (cInd)** if \(a_{i}\) is indecomposable for every \(i\in I\);
2. **functionally supercompact (fSK)** if for every \(\times\)-functional family \((b^{i}_{j})_{i\in I,j\in J}\) of elements of \(\mathcal{A}\) such that \[\bigwedge_{i\in I}(a_{i}\to\mathop{\overrightarrow{\exists}}_{j\in J}b^{i}_{j})\in\Sigma\] there exists a unique \(f:I\to J\) such that \(\bigwedge_{i\in I}(a_{i}\to b^{i}_{f(i)})\in\Sigma\);
3. **uniformly functionally supercompact (U-fSK)** if \((a_{f(k)})_{k\in K}\) is functionally supercompact for every function \(f:K\to I\);
4. **weakly functionally supercompact (wfSK)** if for every \(\times\)-functional family \((b^{i}_{j})_{i\in I,j\in J}\) of elements of \(\mathcal{A}\) such that \[\bigwedge_{i\in I}(a_{i}\to\mathop{\overrightarrow{\exists}}_{j\in J}b^{i}_{j})\in\Sigma\] there exists \(f:I\to J\) such that \(\bigwedge_{i\in I}(a_{i}\to b^{i}_{f(i)})\in\Sigma\);
5. **uniformly weakly functionally supercompact (U-wfSK)** if \((a_{f(k)})_{k\in K}\) is weakly functionally supercompact for every function \(f:K\to I\).

By definition, and using arguments similar to the previous ones, we have that:

**Proposition 3.22**.: _For a family \((a_{i})_{i\in I}\) of elements of \(\mathbb{A}\) we have that:_
1. **fSK** \(\Rightarrow\) **wfSK**;
2. **U-fSK** \(\Rightarrow\) **U-wfSK**;
3. **U-fSK** \(\Rightarrow\) **fSK**;
4. **U-wfSK** \(\Rightarrow\) **wfSK**;
5. **U-wfSK** \(\Rightarrow\) **cInd**.

Moreover, with a proof analogous to that of Proposition 3.15, one can easily prove that:

**Proposition 3.23**.: _Let \((a_{i})_{i\in I}\) be a family of elements of \(\mathbb{A}\). Then the following are equivalent:_ 1. _the family_ \((a_{i})_{i\in I}\) _is_ \(\mathbf{U}\)-**wfSK** (\(\mathbf{U}\)-**fSK** _respectively);_ 2.
_for every family of families of families_ \((((b^{k}_{j})_{j\in J})_{k\in K_{i}})_{i\in I}\) _of elements of_ \(\mathcal{A}\) _with_ \((K_{i})_{i\in I}\) _a family of pairwise disjoint sets and_ \((b^{k}_{j})_{k\in\bigcup_{i\in I}K_{i},j\in J}\) \(\times\)_-functional, such that_ \[\bigwedge_{i\in I}(a_{i}\to\bigwedge_{k\in K_{i}}\mathop{\overrightarrow{\exists}}_{j\in J}b^{k}_{j})\in\Sigma\] _there exists a (respectively unique) function_ \(g:\bigcup_{i\in I}K_{i}\to J\) _such that_ \[\bigwedge_{i\in I}(a_{i}\to\bigwedge_{k\in K_{i}}b^{k}_{g(k)})\in\Sigma\]

**Proposition 3.24**.: _If \((a_{i})_{i\in I}\) is_ **SK**_, then it is_ **fSK**_._

Proof.: By definition if a family is **SK** then it is **wfSK**. In order to conclude we need to show that if \((a_{i})_{i\in I}\) is **SK**, \((b^{i}_{j})_{i\in I,j\in J}\) is a \(\times\)-functional family and \(f,g:I\to J\) are such that \[\bigwedge_{i\in I}(a_{i}\to b^{i}_{f(i)})\in\Sigma\text{ and }\bigwedge_{i\in I }(a_{i}\to b^{i}_{g(i)})\in\Sigma\] then \(f=g\). But from the assumption above we get \[\bigwedge_{i\in I}(a_{i}\to b^{i}_{f(i)}\times b^{i}_{g(i)})\in\Sigma\] from which it follows by the \(\times\)-functionality that \[\bigwedge_{i\in I}(a_{i}\to\delta_{J}(f(i),g(i)))\in\Sigma.\] This means that \[\bigwedge_{i\in I}\left(a_{i}\to\mathop{\overrightarrow{\exists}}_{f(i)=g(i)}\top\right)\in\Sigma\] By the hypothesis of supercompactness we get that \(f(i)=g(i)\) for every \(i\in I\). Thus \(f=g\).

From Proposition 3.24, it immediately follows that:

**Corollary 3.25**.: _For a family \((a_{i})_{i\in I}\) of elements of \(\mathbb{A}\) we have that_ \(\mathbf{U}\)-**SK** \(\Rightarrow\) \(\mathbf{U}\)-**fSK**_._

Moreover, trivially one has **cSK** \(\Rightarrow\) **cInd**.

**Remark 3.26**.: Notice that, by Remark 3.19, we have that if \((a_{i})_{i\in I}\) is \(\mathbf{cInd}\), then \(a_{i}\not\equiv_{\Sigma}\bot\) for every \(i\in I\).

**Proposition 3.27**.: _If \((a_{i})_{i\in I}\) is \(\mathbf{wfSK}\) and \(a_{i}\not\equiv_{\Sigma}\bot\) for every \(i\in I\), then \((a_{i})_{i\in I}\) is \(\mathbf{fSK}\)._

Proof.: If \((a_{i})_{i\in I}\) is \(\mathbf{wfSK}\) but not \(\mathbf{fSK}\), then there exists \(((b^{i}_{j})_{j\in J_{i}})_{i\in I}\) \(\times\)-functional and two distinct functions \(f,g\in(\Pi i\in I)J_{i}\) satisfying \[\bigwedge_{i\in I}(a_{i}\to b^{i}_{f(i)})\in\Sigma\text{ and }\bigwedge_{i\in I }(a_{i}\to b^{i}_{g(i)})\in\Sigma.\] Then \[\bigwedge_{i\in I}(a_{i}\to b^{i}_{f(i)}\times b^{i}_{g(i)})\in\Sigma\] and using the \(\times\)-functionality we get \[\bigwedge_{i\in I}(a_{i}\to\delta_{J_{i}}(f(i),g(i)))\in\Sigma\] If \(\overline{i}\in I\) is such that \(f(\overline{i})\neq g(\overline{i})\), then \(a_{\overline{i}}\to(\top\to\bot)\in\Sigma\). From this it follows that \(a_{\overline{i}}\equiv_{\Sigma}\bot\).

**Corollary 3.28**.: _A family \((a_{i})_{i\in I}\) is \(\mathbf{U}\)-\(\mathbf{wfSK}\) if and only if it is \(\mathbf{U}\)-\(\mathbf{fSK}\)._

Proof.: We already know that by definition \(\mathbf{U}\)-\(\mathbf{fSK}\) implies \(\mathbf{U}\)-\(\mathbf{wfSK}\) (see Proposition 3.22). Assume now a family \((a_{i})_{i\in I}\) to be \(\mathbf{U}\)-\(\mathbf{wfSK}\). Then, combining the last point of Proposition 3.22 with Remark 3.26, we get that \(a_{i}\not\equiv_{\Sigma}\bot\) for every \(i\in I\).
By definition every family \((a_{f(j)})_{j\in J}\) with \(f:J\to I\) is \(\mathbf{wfSK}\). Using Proposition 3.27 one gets that each one of these families is \(\mathbf{fSK}\). Thus \((a_{i})_{i\in I}\) is \(\mathbf{U}\)-\(\mathbf{fSK}\).

In the general case, the relations between the different properties of families can be summarized as follows: \(\mathbf{U}\)-\(\mathbf{SK}\Rightarrow\mathbf{U}\)-\(\mathbf{fSK}\Leftrightarrow\mathbf{U}\)-\(\mathbf{wfSK}\Rightarrow\mathbf{cInd}\), \(\mathbf{SK}\Rightarrow\mathbf{fSK}\Rightarrow\mathbf{wfSK}\), and \(\mathbf{cSK}\Rightarrow\mathbf{cInd}\).

### Supercompactness and indecomposability in particular classes of implicative algebras

In this section we analyze the notions of supercompact and indecomposable elements in some particular cases. We start by considering the case of an implicative algebra _compatible with joins_.

_Compatibility with joins_ **Definition 3.29**.: An implicative algebra \(\mathbb{A}=(\mathcal{A},\leq,\rightarrow,\Sigma)\) is **compatible with joins** if for every family \((a_{i})_{i\in I}\) of its elements and every \(b\in\mathcal{A}\) we have that \[\bigwedge_{i\in I}(a_{i}\to b)=\bigvee_{i\in I}a_{i}\to b.\] For implicative algebras compatible with joins we have the following useful properties [19, Prop. 3.32]: 1. \(\bot\to a=\top\) 2. \(a\times\bot=\bot\times a=\top\rightarrow\bot\) Moreover, using the calculus in [19] one can easily prove that

**Lemma 3.30**.: _If \(\mathcal{F}\) is the class of set-indexed families of elements of an implicative algebra \(\mathbb{A}\) which is compatible with joins, then_ \[\bigwedge_{(b_{i})_{i\in I}\in\mathcal{F}}\left(\bigvee_{i\in I}b_{i}\to\mathop{\overrightarrow{\exists}}_{i\in I}b_{i}\right)\in\Sigma\]

**Remark 3.31**.: If we consider this property in combination with Proposition 2.4 we get that we can substitute \(\mathop{\overrightarrow{\exists}}\) with \(\bigvee\) in logical calculations when we are dealing with an implicative algebra compatible with joins, as shown in [19, p. 490].

**Lemma 3.32**.: _If \(\mathbb{A}\) is compatible with joins, then_ \[\bigwedge_{J\,set}\bigwedge_{j,j^{\prime}\in J}(d_{J}(j,j^{\prime})\to \delta_{J}(j,j^{\prime}))\in\Sigma\]

Proof.: \[\bigwedge_{J\,set}\bigwedge_{j,j^{\prime}\in J}(d_{J}(j,j^{\prime}) \rightarrow\delta_{J}(j,j^{\prime}))=(\top\rightarrow\bigwedge_{c\in \mathcal{A}}((\top\to c)\to c))\wedge(\bot\rightarrow(\top\rightarrow \bot))=\] \[(\top\rightarrow\bigwedge_{c\in\mathcal{A}}((\top\to c)\to c))\] and this can be easily shown to be in \(\Sigma\) using the calculus.

If we consider this property in combination with Proposition 2.5 we get that we can substitute \(\delta_{J}\) with \(d_{J}\) in logical calculations when we are dealing with an implicative algebra compatible with joins.

**Proposition 3.33**.: _Let \(\mathbb{A}\) be compatible with joins. If \((a_{i})_{i\in I}\) is \(\mathbf{SK}\), then it is \(\mathbf{cSK}\)._

Proof.: Recall from Remark 3.31 that in an implicative algebra compatible with joins we can substitute \(\mathop{\overrightarrow{\exists}}\) with \(\bigvee\). Now assume that \[a_{\overline{i}}\to\mathop{\overrightarrow{\exists}}_{j\in J}b_{j}\in\Sigma\] for some \(\overline{i}\in I\) and some family \((b_{j})_{j\in J}\) of elements of \(\mathcal{A}\). Let us now consider a family of families \(((c^{i}_{j})_{j\in J_{i}})_{i\in I}\) of elements of \(\mathcal{A}\) defined as follows: 1. \(J_{\overline{i}}=J\) and \(J_{i}=\{\star\}\) if \(i\neq\overline{i}\); 2. \(c^{\overline{i}}_{j}:=b_{j}\) for every \(j\in J\), while \(c^{i}_{\star}:=\top\) for \(i\neq\overline{i}\).
Since \(\mathbb{A}\) is compatible with joins we have that
\[\bigwedge_{i\in I}(a_{i}\to\underset{j\in J_{i}}{\overline{\exists}}\,c_{j}^{i})\in\Sigma\iff\bigwedge_{i\in I}(a_{i}\to\bigvee_{j\in J_{i}}c_{j}^{i})\in\Sigma.\]
But
\[\bigwedge_{i\in I}(a_{i}\to\bigvee_{j\in J_{i}}c_{j}^{i})=a_{\overline{i}}\to\bigvee_{j\in J}b_{j}\]
and this is an element of \(\Sigma\) by our assumption, because \(\mathbb{A}\) is compatible with joins. Since \((a_{i})_{i\in I}\) is supercompact by hypothesis we can conclude that there exists a function \(f\in(\Pi i\in I)J_{i}\) such that
\[\bigwedge_{i\in I}(a_{i}\to c_{f(i)}^{i})\in\Sigma\]
From this it follows that \(f(\overline{i})\in J\) and \(a_{\overline{i}}\to b_{f(\overline{i})}\in\Sigma\). Thus \(a_{\overline{i}}\) is supercompact and, since \(\overline{i}\) was arbitrary, \((a_{i})_{i\in I}\) is **cSK**. 

**Proposition 3.34**.: _Let \(\mathbb{A}\) be compatible with joins and \((a_{i})_{i\in I}\) be a family of its elements. If \((a_{i})_{i\in I}\) is_ **fSK**_, then \(a_{i}\not\equiv_{\Sigma}\bot\) for every \(i\in I\)._

Proof.: By Proposition 3.22 we already know that **fSK** \(\Rightarrow\) **wfSK**. Therefore, to prove the result it is enough to prove that if \((a_{i})_{i\in I}\) is **wfSK** and there exists \(\overline{i}\in I\) such that \(a_{\overline{i}}\equiv_{\Sigma}\bot\), then \((a_{i})_{i\in I}\) is not **fSK**. Consider the family of families \(((b_{j}^{i})_{j\in\{0,1\}})_{i\in I}\) where \(b_{0}^{i}=\bot\) and \(b_{1}^{i}=\top\). Then
\[\bigwedge_{i\in I}(a_{i}\to\bigvee_{j\in\{0,1\}}b_{j}^{i})=\bigwedge_{i\in I}(a_{i}\to\top)=\top\in\Sigma\]
and since \(\mathbb{A}\) is compatible with joins \(\bigwedge_{i\in I}(a_{i}\to\overline{\exists}_{j\in\{0,1\}}b_{j}^{i})\in\Sigma\). Moreover
\[\bigwedge_{i\in I}\bigwedge_{j,j^{\prime}\in\{0,1\}}(b_{j}^{i}\times b_{j^{\prime}}^{i}\to\delta_{\{0,1\}}(j,j^{\prime}))=(\bot\times\bot\to\top)\wedge(\top\times\top\to\top)\wedge(\bot\times\top\to\bot)\wedge(\top\times\bot\to\bot)=(\top\to\bot)\to\bot\in\Sigma\]
since in any implicative algebra compatible with joins \(a\times\bot=\bot\times a=\top\to\bot\) and Lemma 3.32 holds. Consider now two functions \(f,g:I\to\{0,1\}\) where \(f\) is the constant function with value \(1\), while \(g(i)=1\) for every \(i\neq\overline{i}\) and \(g(\overline{i})=0\). We have
\[\bigwedge_{i\in I}(a_{i}\to b_{f(i)}^{i})=\bigwedge_{i\in I}(a_{i}\to\top)=\top\in\Sigma\]
and
\[\bigwedge_{i\in I}(a_{i}\to b_{g(i)}^{i})=\bigwedge_{i\neq\overline{i}}(a_{i}\to\top)\wedge(a_{\overline{i}}\to\bot)=a_{\overline{i}}\to\bot\in\Sigma\]
since \(a_{\overline{i}}\equiv_{\Sigma}\bot\). Thus, since \(f\neq g\), the family \((a_{i})_{i\in I}\) is not **fSK**. 

Combining Proposition 3.34 with Proposition 3.27 we obtain the following corollary:

**Corollary 3.35**.: _Let \(\mathbb{A}\) be compatible with joins and \((a_{i})_{i\in I}\) be a family of its elements. Then \((a_{i})_{i\in I}\) is_ **fSK** _if and only if \((a_{i})_{i\in I}\) is_ **wfSK** _and \(a_{i}\not\equiv_{\Sigma}\bot\) for every \(i\in I\)._

**Proposition 3.36**.: _Let \(\mathbb{A}\) be compatible with joins._
_If \((a_{i})_{i\in I}\) is_ **fSK**_, then it is_ **cInd**_._

Proof.: Assume \((a_{i})_{i\in I}\) is **fSK**, \(a_{\overline{i}}\to\underset{j\in J}{\overline{\exists}}\,b_{j}\in\Sigma\) (which, since we are assuming compatibility with joins, is equivalent to \(a_{\overline{i}}\to\bigvee_{j\in J}b_{j}\in\Sigma\)) and \(\bigwedge_{j,j^{\prime}\in J}(b_{j}\times b_{j^{\prime}}\to\delta_{J}(j,j^{\prime}))\in\Sigma\). As a consequence of Proposition 3.34, \(J\neq\emptyset\) (otherwise \(a_{\overline{i}}\equiv_{\Sigma}\bot\)). Thus, if for every \(i\in I\) and \(j\in J\) we define
\[c_{j}^{i}=\begin{cases}b_{j}\text{ if }i=\overline{i}\\ \bot\text{ if }i\neq\overline{i},j\neq\overline{j}\\ \top\text{ if }i\neq\overline{i},j=\overline{j}\end{cases}\]
where \(\overline{j}\) is a fixed element of \(J\), we get that
\[\bigwedge_{i\in I}(a_{i}\to\bigvee_{j\in J}c_{j}^{i})=a_{\overline{i}}\to\bigvee_{j\in J}b_{j}\in\Sigma\]
from which it follows by compatibility with joins that
\[\bigwedge_{i\in I}(a_{i}\to\underset{j\in J}{\overline{\exists}}\,c_{j}^{i})\in\Sigma\]
Moreover using Lemma 3.32 we get that
\[\bigwedge_{i\in I}\bigwedge_{j,j^{\prime}\in J}(c_{j}^{i}\times c_{j^{\prime}}^{i}\to\delta_{J}(j,j^{\prime}))\equiv_{\Sigma}\bigwedge_{j,j^{\prime}\in J}(b_{j}\times b_{j^{\prime}}\to d_{J}(j,j^{\prime}))\wedge\bigwedge_{i\neq\overline{i}}((\top\to\bot)\to\bot)=\]
\[\bigwedge_{j\neq j^{\prime}\in J}(b_{j}\times b_{j^{\prime}}\to\bot)\wedge\bigwedge_{i\neq\overline{i}}((\top\to\bot)\to\bot)=\bigwedge_{j\neq j^{\prime}\in J}(b_{j}\times b_{j^{\prime}}\to\bot)\equiv_{\Sigma}\bigwedge_{j,j^{\prime}\in J}(b_{j}\times b_{j^{\prime}}\to\delta_{J}(j,j^{\prime}))\in\Sigma\]
since \(\delta_{J}(j,j)=\top\), \(x\to\top=\top\) and \(x\times y\geq\top\to\bot\) for every \(x,y\) in \(\mathbb{A}\). Thus, since \((a_{i})_{i\in I}\) is **fSK**, we get the existence of a function \(f:I\to J\) such that
\[\bigwedge_{i\in I}(a_{i}\to c_{f(i)}^{i})\in\Sigma.\]
In particular \(a_{\overline{i}}\to b_{f(\overline{i})}\in\Sigma\) and we can conclude. 

Thus if we add the assumption that \(\mathbb{A}\) is compatible with joins the situation can be summarized as follows: **SK** \(\Rightarrow\) **cSK**, **fSK** \(\Rightarrow\) **cInd**, and **fSK** coincides with **wfSK** together with the condition \(a_{i}\not\equiv_{\Sigma}\bot\) for every \(i\in I\).

_\(\Sigma\) closed under \(\bigwedge\)_

We start by recalling that, in general, the separator of an implicative algebra is not required to be closed under arbitrary infima, even if there are several situations in which this is the case, e.g. for an implicative algebra associated with a complete Heyting algebra. One can also notice that a separator \(\Sigma\) is closed under arbitrary infima if and only if it is a principal filter (in that case \(\Sigma\) is the upward closure of \(\bigwedge\Sigma\)). In [19, Prop. 4.13] implicative algebras whose separator is a principal ultrafilter are characterized as those giving rise to a tripos which is isomorphic to a forcing tripos.

**Proposition 3.37**.: _If \(\Sigma\) is closed under arbitrary infima \(\bigwedge\), then all \(\mathbf{cSK}\) families are \(\mathbf{U}\)-\(\mathbf{SK}\)._

Proof.: Let \((a_{i})_{i\in I}\) be a \(\mathbf{cSK}\) family. We have to prove that for every \(f:J\to I\), the family \((a_{f(j)})_{j\in J}\) is \(\mathbf{SK}\).
Let us assume hence that
\[\bigwedge_{j\in J}\Big(a_{f(j)}\to\underset{k\in K_{j}}{\overline{\exists}}\,b^{j}_{k}\Big)\in\Sigma\]
for some family of families \(((b^{j}_{k})_{k\in K_{j}})_{j\in J}\) of elements of \(\mathcal{A}\). Since separators are upward closed, \(a_{f(j)}\to\overline{\exists}_{k\in K_{j}}b^{j}_{k}\in\Sigma\) for every \(j\in J\), and since every \(a_{f(j)}\) is supercompact by hypothesis, for every \(j\in J\) there exists \(g(j)\in K_{j}\) such that \(a_{f(j)}\to b^{j}_{g(j)}\in\Sigma\). As \(\Sigma\) is closed under arbitrary infima, we conclude that
\[\bigwedge_{j\in J}(a_{f(j)}\to b^{j}_{g(j)})\in\Sigma\]
that is, \((a_{f(j)})_{j\in J}\) is \(\mathbf{SK}\). 

_\(\overline{\exists}\)-distributivity_

**Definition 3.39**.: An implicative algebra is \(\overline{\exists}\)**-distributive** if
\[\bigwedge_{((b^{k}_{j})_{j\in J_{k}})_{k\in K}\in\mathcal{F}}\Big(\bigwedge_{k\in K}\underset{j\in J_{k}}{\overline{\exists}}\,b^{k}_{j}\ \to\ \underset{f\in(\Pi k\in K)J_{k}}{\overline{\exists}}\,\bigwedge_{k\in K}b^{k}_{f(k)}\Big)\in\Sigma\]
where \(\mathcal{F}\) is the class of set-indexed families of families of elements of \(\mathcal{A}\).

**Proposition 3.41**.: _If \(\mathbb{A}\) is \(\overline{\exists}\)-distributive, then every_ **SK** _family is_ **U-SK** _and every_ **fSK** _family is_ **U-fSK**_._

Thus if we add the assumption that \(\mathbb{A}\) is \(\overline{\exists}\)-distributive, the situation can be summarized as follows: \(\mathbf{U}\text{-}\mathbf{SK}\equiv\mathbf{SK}\) and \(\mathbf{U}\text{-}\mathbf{fSK}\equiv\mathbf{U}\text{-}\mathbf{wfSK}\equiv\mathbf{fSK}\).

### Supercompactness and indecomposability in complete Heyting algebras

In the case of a complete Heyting algebra, the notion of supercompact element introduced in Definition 3.9 coincides with the ordinary notion, i.e. an element \(a\) is supercompact if \(a\leq\bigvee_{i\in I}b_{i}\) implies the existence of an \(\overline{i}\in I\) such that \(a\leq b_{\overline{i}}\), for every set-indexed family \((b_{i})_{i\in I}\) of elements of \(\mathcal{A}\), that is if \(a\) is a supercompact element in the localic sense (see [21]). In particular, in the Boolean case supercompact elements are exactly atoms. Indecomposable elements in Heyting algebras express just a notion of connectedness which is called in fact _indecomposability_ and it can be shown to be equivalent to usual connectedness2 plus the fact of being different from \(\bot\) (see e.g. [3]). For complete Boolean algebras, supercompactness coincides with indecomposability. Indeed, for every family of elements \((b_{i})_{i\in I}\) of a complete Boolean algebra one can construct a new family \((\widetilde{b}_{i})_{i\in I}\) satisfying the following properties:

Footnote 2: An element \(a\) of a Heyting algebra is connected if and only if \(a\leq b\lor c\) implies \(a\leq b\) or \(a\leq c\).

1. \(\widetilde{b}_{i}\leq b_{i}\) for every \(i\in I\);
2. \(\widetilde{b}_{i}\wedge\widetilde{b}_{j}=\bot\) for every \(i\neq j\) in \(I\);
3. \(\bigvee_{i\in I}\widetilde{b}_{i}=\bigvee_{i\in I}b_{i}\).

For instance, fixing a well-ordering on \(I\), one can take \(\widetilde{b}_{i}:=b_{i}\wedge\neg\bigvee_{j<i}b_{j}\). Notice that in general indecomposable elements of a Heyting algebra are not supercompact. For example, we can consider the complete Heyting algebra of open subsets of the reals: it is immediate to check that there are no supercompact open subsets of the reals, while every non-empty open interval is indecomposable. Every complete Heyting algebra can be easily seen to be compatible with joins. Moreover \(\Sigma\), being \(\{\top\}\), is closed under \(\bigwedge\).
Thus we get that in this case for families we have \(\mathbf{U}\)-\(\mathbf{SK}\equiv\mathbf{SK}\equiv\mathbf{cSK}\). Families satisfying these properties are exactly componentwise supercompact (in the localic sense) families. Moreover, \(\mathbf{U}\)-\(\mathbf{fSK}\equiv\mathbf{U}\)-\(\mathbf{wfSK}\equiv\mathbf{fSK}\equiv\mathbf{cInd}\). Families satisfying these properties are exactly componentwise indecomposable families. Finally, one can easily see that a family \((a_{i})_{i\in I}\) is \(\mathbf{wfSK}\) if and only if one of the following happens:
1. \(I=\emptyset\) or
2. \(I\neq\emptyset\), \(a_{i}\) is connected for every \(i\in I\), and at least one of the \(a_{i}\)'s is indecomposable.

Since for complete Boolean algebras indecomposable elements are exactly supercompact elements, for Boolean algebras we have that \(\mathbf{U}\)-\(\mathbf{SK}\equiv\mathbf{SK}\equiv\mathbf{cSK}\equiv\mathbf{U}\)-\(\mathbf{fSK}\equiv\mathbf{U}\)-\(\mathbf{wfSK}\equiv\mathbf{fSK}\equiv\mathbf{cInd}\) and these are exactly componentwise atomic families; \(\mathbf{wfSK}\) families are families whose components are atoms or minima, but not all minima if the family is non-empty. Notice that a complete Heyting algebra is \(\overline{\exists}\)-distributive if and only if it is completely distributive. For complete Boolean algebras this amounts to the requirement of being complete and atomic (that is, of being isomorphic to a powerset algebra).

### Supercompactness and indecomposability in different kinds of realizability

_Realizability_

In the case of an implicative algebra \((\mathcal{P}(R),\subseteq,\Rightarrow,\mathcal{P}(R)\setminus\{\emptyset\})\) coming from a combinatory algebra \((R,\cdot)\), supercompact elements are the non-empty subsets of \(R\), since they are all equivalent to \(\top=R\), and since \(\bot=\emptyset\) (we are using Remarks 3.10 and 3.11). Every non-empty subset of \(R\) is also indecomposable (since it is supercompact) and every indecomposable subset is non-empty. Thus, supercompactness and indecomposability coincide for elements. We can also notice that the implicative algebra arising from a combinatory algebra \((R,\cdot)\) is compatible with joins and \(\overline{\exists}\)-distributive. The former property can be trivially verified, while to prove the latter it is enough to observe that, since the implicative algebra coming from a combinatory algebra \((R,\cdot)\) is compatible with joins, it is sufficient to prove that there is a realizer \(r\) independent from the specific family such that
\[r\in\Big(\bigcap_{i\in I}\bigcup_{j\in J_{i}}B^{i}_{j}\Rightarrow\bigcup_{f\in(\Pi i\in I)J_{i}}\bigcap_{i\in I}B^{i}_{f(i)}\Big)\]
Since the antecedent and the consequent of the implication above are equal sets, \(r\) can be taken to be \(\mathbf{i}\). Therefore, by Proposition 3.41, we have that \(\mathbf{U}\)-\(\mathbf{SK}\) families coincide with \(\mathbf{SK}\) families, and \(\mathbf{U}\)-\(\mathbf{fSK}\) families coincide with \(\mathbf{fSK}\) families. We now show that for every \(\mathbf{SK}\) family \((A_{i})_{i\in I}\) there exists a function \(f:I\to R\) such that
\[\bigcap_{i\in I}(A_{i}\Rightarrow\{f(i)\})\neq\emptyset\text{ and }\bigcap_{i\in I}(\{f(i)\}\Rightarrow A_{i})\neq\emptyset\]
In order to do this we consider the family of families \(((\{j\})_{j\in A_{i}})_{i\in I}\).
One can easily show that
\[\lambda x.\lambda y.y\,x\in\bigcap_{i\in I}(A_{i}\Rightarrow\underset{j\in A_{i}}{\overline{\exists}}\{j\})\]
Using supercompactness we get the existence of a function \(f\in(\Pi i\in I)A_{i}\subseteq R^{I}\) and a realizer \(s\) such that
\[s\in\bigcap_{i\in I}(A_{i}\Rightarrow\{f(i)\})\]
Since \(\mathbf{i}\in\bigcap_{i\in I}(\{f(i)\}\Rightarrow A_{i})\) we can conclude.

Conversely, we show that if \((A_{i})_{i\in I}\) is a family satisfying the property above, then it is \(\mathbf{SK}\). So let us assume
\[r\in\bigcap_{i\in I}\Big(A_{i}\Rightarrow\underset{j\in J_{i}}{\overline{\exists}}\,B^{i}_{j}\Big)\]
for some family of families \(((B^{i}_{j})_{j\in J_{i}})_{i\in I}\) of subsets of \(R\). Up to composing with a fixed realizer (Remark 3.31), we can assume \(r\in\bigcap_{i\in I}(A_{i}\Rightarrow\bigcup_{j\in J_{i}}B^{i}_{j})\). Let \(s\in\bigcap_{i\in I}(A_{i}\Rightarrow\{f(i)\})\) and \(t\in\bigcap_{i\in I}(\{f(i)\}\Rightarrow A_{i})\). For every \(i\in I\) we have \(t\cdot f(i)\in A_{i}\), hence \(r\cdot(t\cdot f(i))\in\bigcup_{j\in J_{i}}B^{i}_{j}\), so we can choose \(g(i)\in J_{i}\) with \(r\cdot(t\cdot f(i))\in B^{i}_{g(i)}\). Since \(s\cdot a=f(i)\) for every \(a\in A_{i}\), we get
\[\lambda x.r\,(t\,(s\,x))\in\bigcap_{i\in I}(A_{i}\Rightarrow B^{i}_{g(i)})\]
and hence \((A_{i})_{i\in I}\) is \(\mathbf{SK}\). Moreover, by Corollary 3.35, a family \((A_{i})_{i\in I}\) is \(\mathbf{fSK}\) if and only if \(A_{i}\neq\emptyset\) for every \(i\in I\), and one can easily prove that it is \(\mathbf{wfSK}\) if and only if \(I=\emptyset\) or there exists \(\overline{i}\in I\) such that \(A_{\overline{i}}\neq\emptyset\).

_Relative realizability_

It is easy to check that the implicative algebras of relative realizability are compatible with joins and \(\overline{\exists}\)-distributive. Supercompact elements are exactly those \(A\subseteq R\) which are equivalent to a singleton. Indeed, since the combinator \(\mathbf{i}\) can be assumed to be in \(R_{\#}\), the implicative algebra is compatible with joins and \(\mathbf{i}\in A\Rightarrow\bigcup_{a\in A}\{a\}\), we get that if \(A\) is supercompact, then there exist \(a\in A\) and \(r\in R_{\#}\) such that \(r\in A\Rightarrow\{a\}\). Since \(\mathbf{i}\in\{a\}\Rightarrow A\), we can conclude that \(A\equiv_{\Sigma_{\mathcal{R},\mathcal{R}_{\#}}^{r}}\{a\}\). Conversely, each such element is supercompact since singletons are easily shown to be so. In particular, all subsets \(A\) such that \(A\cap R_{\#}\neq\emptyset\) are supercompact. Another example of a supercompact element is a set of the form \(P_{a}:=\{\mathbf{p}ab\mid b\in R\}\) where \(a\in R\) is fixed and where \(\mathbf{p}\) is the usual pairing combinator, which we can assume to be in \(R_{\#}\). Indeed, this subset is equivalent to the singleton \(\{a\}\) since \(\mathbf{p}_{1}\in P_{a}\Rightarrow\{a\}\) and the first-projection combinator can be assumed to be in \(R_{\#}\), while \(\lambda x.\mathbf{p}xx\in\{a\}\Rightarrow P_{a}\) and this combinator too can be assumed to be in \(R_{\#}\).

From the characterization of \(\times\)-disjoint families in relative realizability implicative algebras, it follows that indecomposable elements in this case are just non-empty subsets of \(R\). Thus in general **cInd** is not equivalent to **cSK** for families. Moreover, **SK** is not equivalent to **cSK** (realizability is a special case of relative realizability). Since the implicative algebra is \(\overline{\exists}\)-distributive, we have \(\mathbf{U}\)-\(\mathbf{SK}\equiv\mathbf{SK}\) and \(\mathbf{U}\)-\(\mathbf{fSK}\equiv\mathbf{U}\)-\(\mathbf{wfSK}\equiv\mathbf{fSK}\), and one can easily show, as for realizability, that **SK** families are those families equivalent to singleton families, i.e. families of the form \((\{f(i)\})_{i\in I}\) for some function \(f:I\to R\). One can also show easily that **cInd** is equivalent to **fSK**. Finally, as in the realizability case, one can easily prove that a family \((A_{i})_{i\in I}\) is \(\mathbf{wfSK}\) if and only if \(I=\emptyset\) or there exists \(\overline{i}\in I\) such that \(A_{\overline{i}}\neq\emptyset\).
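As a small worked counterexample (a sketch, both for realizability and relative realizability, using only the characterization above), not every family is \(\mathbf{SK}\): consider the family \((A)_{A\in\mathcal{P}(R)\setminus\{\emptyset\}}\) indexed by all non-empty subsets of \(R\). If it were \(\mathbf{SK}\), there would exist \(f\colon\mathcal{P}(R)\setminus\{\emptyset\}\to R\) and a realizer \(s\) with
\[s\cdot a=f(A)\quad\text{for every }a\in A\text{ and every non-empty }A\subseteq R.\]
Taking \(A=\{a\}\) and \(A=R\) shows that \(f\) is constant, say with value \(c\); but then a realizer \(t\in\bigcap_{A}(\{c\}\Rightarrow A)\) would satisfy \(t\cdot c\in A\) for every non-empty \(A\subseteq R\), which is impossible as soon as \(R\) has at least two elements.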
In relative realizability the situation can thus be summarized as follows: \(\mathbf{U}\)-\(\mathbf{SK}\equiv\mathbf{SK}\), \(\mathbf{U}\)-\(\mathbf{fSK}\equiv\mathbf{U}\)-\(\mathbf{wfSK}\equiv\mathbf{fSK}\equiv\mathbf{cInd}\), while \(\mathbf{wfSK}\) is in general strictly weaker.

#### Nested realizability

In nested realizability implicative algebras supercompact elements are pairs \((A,B)\) which are equivalent to a pair of the form \((X,\{b\})\) with \(X\subseteq\{b\}\). Indeed, one can notice that
\[\mathbf{i}\in(A\Rightarrow_{\#}\bigcup_{b\in B}(\{b\}\cap A))\;\cap\;(B\Rightarrow\bigcup_{b\in B}\{b\})\]
Thus, since the implicative algebra of nested realizability is compatible with joins, if \((A,B)\) is supercompact, then there exists \(b\in B\) such that \((\{b\}\cap A,\{b\})\equiv_{\Sigma_{\mathcal{R},\mathcal{R}_{\#}}}(A,B)\). The converse can be easily checked. Functionally supercompact elements are those \((A,B)\) with \(B\neq\emptyset\). Nested realizability implicative algebras are also \(\overline{\exists}\)-distributive, so that \(\mathbf{U}\)-\(\mathbf{SK}\equiv\mathbf{SK}\) and \(\mathbf{U}\)-\(\mathbf{fSK}\equiv\mathbf{U}\)-\(\mathbf{wfSK}\equiv\mathbf{fSK}\).

**Example 3.44**.: The implicative algebras of realizability, nested realizability and relative realizability satisfy the choice rule. A complete Heyting algebra satisfies the choice rule if and only if it is supercompact (see [21]).

Motivated by the characterization presented in [17] and by the realizability and localic examples, we introduce the following definition.

**Definition 3.45**.: An implicative algebra \(\mathbb{A}\) is said to be **uniformly supercoherent** if:
* it satisfies the choice rule;
* if \((a_{i})_{i\in I}\) and \((b_{i})_{i\in I}\) are **U-SK** families, then \((a_{i}\times b_{i})_{i\in I}\) is a **U-SK** family;
* for every family \((a_{j})_{j\in J}\) there exists a set \(I\), a function \(f\colon I\to J\) and a **U-SK** family \((b_{i})_{i\in I}\) such that
\[\bigwedge_{j\in J}(a_{j}\to\underset{f(i)=j}{\overline{\exists}}b_{i})\in\Sigma\qquad\text{ and }\qquad\bigwedge_{j\in J}((\underset{f(i)=j}{\overline{\exists}}b_{i})\to a_{j})\in\Sigma\]

**Remark 3.46**.: We anticipate here that, by definition, an implicative algebra is uniformly supercoherent if and only if its implicative tripos is an instance of the full existential completion, see [17, Thm. 4.16 and Thm.
7.32]. We will see more details about this in the next section and in particular in Remark 4.13.

**Example 3.47**.: An implicative algebra coming from a complete Heyting algebra is uniformly supercoherent if and only if the corresponding locale is supercoherent. The implicative algebras of realizability w.r.t. a combinatory algebra are uniformly supercoherent. We refer to [17] for all the details.

**Definition 3.48**.: An implicative algebra \(\mathbb{A}\) is said to be **uniformly functional-supercoherent** if:
* it satisfies the choice rule;
* if \((a_{i})_{i\in I}\) and \((b_{i})_{i\in I}\) are **U-SK** families, then \((a_{i}\times b_{i})_{i\in I}\) is a **U-SK** family;
* for every **U-fSK** family \((a_{j})_{j\in J}\) there exists a set \(I\), a function \(f\colon I\to J\) and a **U-SK** family \((b_{i})_{i\in I}\) such that
\[\bigwedge_{j\in J}(a_{j}\to\underset{f(i)=j}{\overline{\exists}}b_{i})\in\Sigma\qquad\text{ and }\qquad\bigwedge_{j\in J}((\underset{f(i)=j}{\overline{\exists}}b_{i})\to a_{j})\in\Sigma\]

**Remark 3.49**.: Notice that in a uniformly supercoherent implicative algebra we have that \(\textbf{SK}\equiv\textbf{U-SK}\). Again, this can be considered as a particular case of [17, Lem. 4.11].

### Modest and core families

We introduce here some notions which we will use later.

**Definition 3.50**.: A family \((a_{i})_{i\in I}\) of elements of \(\mathbb{A}\) is
1. \(\wedge\)_-modest_ if it is \(\mathbf{U}\)-\(\mathbf{fSK}\) and \(\wedge\)-disjoint;
2. \(\times\)_-modest_ if it is \(\mathbf{U}\)-\(\mathbf{fSK}\) and \(\times\)-disjoint;
3. a \(\wedge\)_-core family_ if it is \(\mathbf{U}\)-\(\mathbf{SK}\) and \(\wedge\)-disjoint;
4. a \(\times\)_-core family_ if it is \(\mathbf{U}\)-\(\mathbf{SK}\) and \(\times\)-disjoint.

Using results and definitions in the previous section one has that:

**Example 3.51**.: In a complete Heyting algebra, since \(\times=\wedge\), \(\wedge\)-modest families coincide with \(\times\)-modest families: they are families whose elements are pairwise disjoint and indecomposable. Also \(\wedge\)-core families coincide with \(\times\)-core families: they are families of pairwise disjoint supercompact elements.

**Example 3.52**.: In the (total) realizability case \(\wedge\)-modest families are families of pairwise disjoint non-empty sets of realizers, that is modest sets or PERs (see [25]). A family \((A_{i})_{i\in I}\) is a \(\wedge\)-core family if and only if it is equivalent to a family of the form \((\{f(i)\})_{i\in I}\) with \(f:I\to R\) injective. Lastly, a family \((A_{i})_{i\in I}\) is \(\times\)-modest if and only if it is a \(\times\)-core family, if and only if \(I=\emptyset\) or \(I\) is a singleton \(\{\star\}\) and \(A_{\star}\) is non-empty.

## 4 Triposes from implicative algebras

In [19] Miquel introduced the notion of _tripos_ associated with an implicative algebra, and he proved in [20] that every \(\mathsf{Set}\)-based tripos, i.e. every tripos as originally introduced in [13], is equivalent to a tripos arising from an implicative algebra. In this section we recall the definitions of \(\mathsf{Set}\)_-based tripos_ (from [13]) and of _implicative tripos_, and the main result of Miquel.

**Notation:** we denote by Hey the category of Heyting algebras and their morphisms, and we denote by Pos the category of posets and their morphisms.
**Definition 4.1** (tripos).: A (\(\mathsf{Set}\)-based) **tripos** is a functor \(\mathsf{P}\colon\mathsf{Set}^{\mathrm{op}}\longrightarrow\mathsf{Hey}\) such that
* for every function \(f\colon X\to Y\) the re-indexing functor \(\mathsf{P}_{f}\colon\mathsf{P}(Y)\to\mathsf{P}(X)\) has a left adjoint \(\exists_{f}\colon\mathsf{P}(X)\to\mathsf{P}(Y)\) and a right adjoint \(\forall_{f}\colon\mathsf{P}(X)\to\mathsf{P}(Y)\) in the category Pos, satisfying the Beck-Chevalley condition (BCC), i.e. for every pullback
\[\begin{CD}W@>{f^{\prime}}>>Z\\@V{g^{\prime}}VV@VV{g}V\\X@>>{f}>Y\end{CD}\]
we have that \(\mathsf{P}_{g}\exists_{f}=\exists_{f^{\prime}}\mathsf{P}_{g^{\prime}}\) and \(\mathsf{P}_{g}\forall_{f}=\forall_{f^{\prime}}\mathsf{P}_{g^{\prime}}\);
* there exists a _generic predicate_, namely there exists a set \(\Sigma\) and an element \(\sigma\) of \(\mathsf{P}(\Sigma)\) such that for every element \(\alpha\) of \(\mathsf{P}(X)\) there exists a function \(f\colon X\to\Sigma\) such that \(\alpha=\mathsf{P}_{f}(\sigma)\).

**Remark 4.2** (Frobenius reciprocity).: Employing the preservation of the Heyting implication \(\to\) by \(\mathsf{P}_{f}\), it is straightforward to check that every tripos \(\mathsf{P}\) satisfies the so-called _Frobenius reciprocity_ (FR), namely:
\[\exists_{f}(\mathsf{P}_{f}(\alpha)\wedge\beta)=\alpha\wedge\exists_{f}(\beta)\text{ and }\forall_{f}(\mathsf{P}_{f}(\alpha)\to\beta)=\alpha\to\forall_{f}(\beta)\]
for every function \(f\colon X\to Y\), \(\alpha\) in \(\mathsf{P}(Y)\) and \(\beta\) in \(\mathsf{P}(X)\). See [13, Rem. 1.3].

Given a \(\mathsf{Set}\)-based tripos \(\mathsf{P}\colon\mathsf{Set}^{\mathrm{op}}\longrightarrow\mathsf{Hey}\), we will denote by \(\delta_{X}:=\exists_{\Delta_{X}}(\top)\) the so-called _equality predicate_ on \(X\) of the tripos.

Now let us consider an implicative algebra \(\mathbb{A}=(\mathcal{A},\leq,\to,\Sigma)\). For each set \(I\) we can define a new implicative algebra \((\mathcal{A}^{I},\leq^{I},\to^{I},\Sigma[I])\) where \(\mathcal{A}^{I}\) denotes the set of functions from \(I\) to \(\mathcal{A}\) (which we call _predicates_ over \(I\)), \(\leq^{I}\) is the point-wise order (\(f\leq^{I}g\) if and only if \(f(i)\leq g(i)\) for every \(i\in I\)), for every \(f,g\in\mathcal{A}^{I}\) and \(i\in I\) the function \(f\to^{I}g\) is defined by \((f\to^{I}g)(i):=f(i)\to g(i)\), and \(\Sigma[I]\subseteq\mathcal{A}^{I}\) is the so-called _uniform power separator_ defined as:
\[\Sigma[I]:=\{f\in\mathcal{A}^{I}\mid\exists s\in\Sigma,\forall i\in I,\,s\leq f(i)\}=\{f\in\mathcal{A}^{I}\mid\bigwedge_{i\in I}f(i)\in\Sigma\}.\]

As we have already seen, given an implicative algebra \((\mathcal{A},\leq,\to,\Sigma)\), we have an induced binary relation of _entailment_ on \(\mathcal{A}\), written \(a\vdash_{\Sigma}b\) and defined by
\[a\vdash_{\Sigma}b\iff(a\to b)\in\Sigma.\]
It is direct to check that this binary relation gives a preorder \((\mathcal{A},\vdash_{\Sigma})\) on \(\mathcal{A}\). In [19, Sec. 4] it is shown that each implicative algebra \((\mathcal{A},\leq,\to,\Sigma)\) induces a tripos \(\mathsf{P}\colon\mathsf{Set}^{\mathrm{op}}\longrightarrow\mathsf{Hey}\) defined as follows:

**Definition 4.3** (implicative tripos).: Let \((\mathcal{A},\leq,\to,\Sigma)\) be an implicative algebra. For each set \(I\) the Heyting algebra \(\mathsf{P}(I)\) is given by the posetal reflection of the preorder \((\mathcal{A}^{I},\vdash_{\Sigma[I]})\).
For each function \(f\colon I\to X\), the functor \(\mathsf{P}_{f}\colon\mathsf{P}(X)\to\mathsf{P}(I)\) acts by precomposition, that is \(\mathsf{P}_{f}([g]):=[g\circ f]\) for every \(g:X\to\mathcal{A}\).

The functor defined in Definition 4.3 can be proved to be a \(\mathsf{Set}\)-based tripos, see [19, Sec. 4], and it is called _implicative tripos_. In [20, Thm. 1.1] Miquel proved that the notion of implicative tripos is general enough to encompass all \(\mathsf{Set}\)-based triposes. In particular we have the following result:

**Theorem 4.4**.: _Every \(\mathsf{Set}\)-based tripos is isomorphic to an implicative tripos._

**Example 4.5** (realizability tripos).: The realizability tripos introduced in [13] corresponds to the implicative tripos arising from the implicative algebra given by a partial combinatory algebra, see Section 2.2.

**Example 4.6** (localic tripos).: The localic tripos introduced in [13] corresponds to the implicative tripos arising from the implicative algebra given by a complete Heyting algebra, see Section 2.2.

### Supercompact predicates of implicative triposes

In this section we present the various notions of supercompact family of an implicative algebra introduced in Section 3 using the logical language underlying the notion of tripos. Since the properties considered in subsections 3.2 and 3.3 are stable under the equivalence \(\equiv_{\Sigma[I]}\), we will abuse notation in the following results by considering predicates as functions rather than equivalence classes of functions. We start by fixing the following notation:

**Definition 4.7**.: Let \(\mathsf{P}\colon\mathsf{Set}^{\mathrm{op}}\longrightarrow\mathsf{Hey}\) be an implicative tripos. A predicate \(\phi\) of \(\mathsf{P}(I\times J)\) is said to be a **functional predicate** if
\[P_{\langle\pi_{1},\pi_{2}\rangle}(\phi)\wedge P_{\langle\pi_{1},\pi_{3}\rangle}(\phi)\leq P_{\langle\pi_{2},\pi_{3}\rangle}(\delta_{J})\]
where the domain of the projections is \(I\times J\times J\).

**Definition 4.8**.: Let \(\mathsf{P}\colon\mathsf{Set}^{\mathrm{op}}\longrightarrow\mathsf{Hey}\) be an implicative tripos. A predicate \(\varphi\) of \(\mathsf{P}(I)\) is:
* a **supercompact predicate** (\(\mathbf{SK_{p}}\)) if whenever \(\varphi\leq\exists_{f}(\psi)\) with \(f\colon J\to I\) and \(\psi\) element of \(\mathsf{P}(J)\), there exists a function \(g\colon I\to J\) such that \(\varphi\leq\mathsf{P}_{g}(\psi)\) and \(f\circ g=\mathrm{id}_{I}\);
* a **functionally supercompact predicate** (\(\mathbf{fSK_{p}}\)) if for every functional predicate \(\phi\) of \(\mathsf{P}(I\times J)\), if \(\varphi\leq\exists_{\pi_{I}}(\phi)\), then there exists a unique function \(f\colon I\to J\) such that \(\varphi\leq\mathsf{P}_{\langle\mathrm{id}_{I},f\rangle}(\phi)\);
* a **weakly functionally supercompact predicate** (\(\mathbf{wfSK_{p}}\)) if for every functional predicate \(\phi\) of \(\mathsf{P}(I\times J)\), if \(\varphi\leq\exists_{\pi_{I}}(\phi)\), then there exists a function \(f\colon I\to J\) such that \(\varphi\leq\mathsf{P}_{\langle\mathrm{id}_{I},f\rangle}(\phi)\).

In the language of triposes, the "uniformity" property can be presented as a _stability under re-indexing condition_:

**Definition 4.9**.: Let \(\mathsf{P}\colon\mathsf{Set}^{\mathrm{op}}\longrightarrow\mathsf{Hey}\) be an implicative tripos.
A predicate \(\varphi\) of \(\mathsf{P}(I)\) is:
* a **uniformly supercompact predicate** (\(\mathbf{U}\)-\(\mathbf{SK_{p}}\)) if \(\mathsf{P}_{f}(\varphi)\) is a supercompact predicate for every function \(f\colon J\to I\);
* a **uniformly functionally supercompact predicate** (\(\mathbf{U}\)-\(\mathbf{fSK_{p}}\)) if \(\mathsf{P}_{f}(\varphi)\) is a functionally supercompact predicate for every function \(f\colon J\to I\);
* a **uniformly weakly functionally supercompact predicate** (\(\mathbf{U}\)-\(\mathbf{wfSK_{p}}\)) if \(\mathsf{P}_{f}(\varphi)\) is a weakly functionally supercompact predicate for every function \(f\colon J\to I\).

**Proposition 4.10**.: _Let \(\mathsf{P}\colon\mathsf{Set}^{\mathrm{op}}\longrightarrow\mathsf{Hey}\) be an implicative tripos, and let \(\varphi\) be a predicate of \(\mathsf{P}(I)\). We have that:_
1. \(\varphi\) _is_ \((\mathbf{SK_{p}})\) _if and only if_ \((\varphi(i))_{i\in I}\) _is_ \((\mathbf{SK})\)_;_
2. \(\varphi\) _is_ \((\mathbf{fSK_{p}})\) _if and only if_ \((\varphi(i))_{i\in I}\) _is_ \((\mathbf{fSK})\)_;_
3. \(\varphi\) _is_ \((\mathbf{wfSK_{p}})\) _if and only if_ \((\varphi(i))_{i\in I}\) _is_ \((\mathbf{wfSK})\)_._

Proof.: The proofs are straightforward. We provide just the proof of the first point, since the other two follow by similar arguments. Suppose that \(\varphi\) is \((\mathbf{SK_{p}})\), and let us consider a family \(((b^{i}_{j})_{j\in J_{i}})_{i\in I}\), with
\[\bigwedge_{i\in I}(\varphi(i)\to\underset{j\in J_{i}}{\overline{\exists}}\,b^{i}_{j})\in\Sigma. \tag{1}\]
Now let us define by \(g:\coprod_{i\in I}J_{i}\to I\) the function sending an element \((i,j)\) to \(i\), and by \(\phi\in\mathsf{P}(\coprod_{i\in I}J_{i})\) the predicate sending \((i,j)\) to \(b^{i}_{j}\). By definition of the left adjoints \(\exists_{f}\) in an implicative tripos, we have that (1) is equivalent to
\[\varphi\vdash_{\Sigma[I]}\exists_{g}\phi. \tag{2}\]
Hence, by definition of \((\mathbf{SK_{p}})\), there exists a function \(f:I\to\coprod_{i\in I}J_{i}\) such that \(g\circ f=\operatorname{id}_{I}\), and
\[\varphi\vdash_{\Sigma[I]}\mathsf{P}_{f}(\phi). \tag{3}\]
By definition, we have that (3) means
\[\bigwedge_{i\in I}(\varphi(i)\to\phi(f(i)))\in\Sigma. \tag{4}\]
By definition of \(\phi\), and since the second component \(f_{2}(i)\) of \(f(i)\) is in \(J_{i}\), we can conclude from (4) that
\[\bigwedge_{i\in I}(\varphi(i)\to b^{i}_{f_{2}(i)})\in\Sigma\]
i.e. that the family \((\varphi(i))_{i\in I}\) is \((\mathbf{SK})\). Employing a similar argument one can check that the converse holds too. 

Using the previous result and the fact that \(\mathsf{P}\) acts on arrows by reindexing we get the following proposition.

**Proposition 4.11**.: _Let \(\mathsf{P}\colon\mathsf{Set}^{\mathrm{op}}\longrightarrow\mathsf{Hey}\) be an implicative tripos, and let \(\varphi\) be a predicate of \(\mathsf{P}(I)\). We have that:_
1. \(\varphi\) _is_ \((\mathbf{U}\)_-_\(\mathbf{SK_{p}})\) _if and only if_ \((\varphi(i))_{i\in I}\) _is_ \((\mathbf{U}\)_-_\(\mathbf{SK})\)_;_
2. \(\varphi\) _is_ \((\mathbf{U}\)_-_\(\mathbf{fSK_{p}})\) _if and only if_ \((\varphi(i))_{i\in I}\) _is_ \((\mathbf{U}\)_-_\(\mathbf{fSK})\)_;_
3.
\(\varphi\) _is_ \((\mathbf{U}\)_-_\(\mathbf{wfSK_{p}})\) _if and only if_ \((\varphi(i))_{i\in I}\) _is_ \((\mathbf{U}\)_-_\(\mathbf{wfSK})\)_._

Combining the previous proposition with Corollary 3.28 we obtain the following corollary:

**Corollary 4.12**.: _A predicate of an implicative tripos is \((\mathbf{U}\)-\(\mathbf{wfSK_{p}})\) if and only if it is \((\mathbf{U}\)-\(\mathbf{fSK_{p}})\)._

**Remark 4.13**.: Notice that the \((\mathbf{SK_{p}})\) and \((\mathbf{U}\)-\(\mathbf{SK_{p}})\) predicates of a tripos are precisely the elements called _full existential splitting_ and _full existential free_ respectively in [17]. The \((\mathbf{U}\)-\(\mathbf{SK_{p}})\) predicates of a tripos coincide also with the predicates called \(\exists\)_-prime_ introduced in [8].

**Example 4.14**.: In the case of triposes for complete Heyting algebras with \(\Sigma=\{\top\}\), we obtain that \((\mathbf{SK_{p}})\) and \((\mathbf{U}\)-\(\mathbf{SK_{p}})\) predicates coincide and are exactly the predicates \(\varphi\) such that \((\varphi(i))_{i\in I}\) is \((\mathbf{cSK})\), since \(\Sigma\) is clearly closed under arbitrary infima; hence we re-obtain [17, Lem. 7.31].

**Example 4.15**.: Combining Proposition 4.10 and Proposition 4.11 with the examples in Section 3.6, we have that in the triposes arising from implicative algebras coming from realizability, relative realizability and nested realizability, \((\mathbf{SK_{p}})\) predicates coincide with \((\mathbf{U}\)-\(\mathbf{SK_{p}})\) predicates. In the case of realizability they are exactly those predicates (equivalent to) singleton predicates (that is, predicates \(\alpha\) for which \(\alpha(x)\) is always a singleton). This characterization was already provided in [17].

## 5 Partitioned assemblies and assemblies for implicative triposes

In this section we introduce the notions of _partitioned assemblies_ and _assemblies_ for implicative triposes. Before starting our analysis, we recap here some useful notions and results regarding the _Grothendieck category_ of an implicative tripos.

### Some properties of the Grothendieck category of an implicative tripos

**Definition 5.1** (Grothendieck category).: Let \(\mathsf{P}\colon\mathsf{Set}^{\mathrm{op}}\longrightarrow\mathsf{Hey}\) be an implicative tripos. The **Grothendieck category** \(\Gamma[\mathsf{P}]\) of \(\mathsf{P}\) is the category whose objects are pairs \((A,\alpha)\) with \(\alpha\in\mathsf{P}(A)\) and whose arrows from \((A,\alpha)\) to \((B,\beta)\) are arrows \(f:A\to B\) in \(\mathsf{Set}\) such that \(\alpha\leq\mathsf{P}_{f}(\beta)\).

It is direct to check that for every implicative tripos \(\mathsf{P}\colon\mathsf{Set}^{\mathrm{op}}\longrightarrow\mathsf{Hey}\) we have an adjunction \(U\dashv\Delta\), where \(U\colon\Gamma[\mathsf{P}]\to\mathsf{Set}\) and \(\Delta\colon\mathsf{Set}\to\Gamma[\mathsf{P}]\) are defined by \(U(X,\varphi):=X\), \(U(f):=f\), \(\Delta(X):=(X,\top_{X})\) and \(\Delta(f):=f\).

**Proposition 5.2**.: _Let \(\mathsf{P}\colon\mathsf{Set}^{\mathrm{op}}\longrightarrow\mathsf{Hey}\) be an implicative tripos. Then, the regular epis in \(\Gamma[\mathsf{P}]\) are the arrows \(f:(A,\alpha)\to(B,\beta)\) such that \(f\) is a regular epi in \(\mathsf{Set}\) and \(\beta=\exists_{f}(\alpha)\)._

Proof.: Since the forgetful functor \(U:\Gamma[\mathsf{P}]\to\mathsf{Set}\) is left adjoint to the functor \(\Delta:\mathsf{Set}\to\Gamma[\mathsf{P}]\), \(U\) preserves colimits. In particular, if \(f:(A,\alpha)\to(B,\beta)\) is a regular epi, that is the coequalizer of two arrows \(g,h:(C,\gamma)\to(A,\alpha)\), then \(f:A\to B\) is a coequalizer of \(g,h:C\to A\) in \(\mathsf{Set}\).
Let us show now that \(\beta=\exists_{f}(\alpha)\). Since \(\alpha\leq\mathsf{P}_{f}(\beta)\), then \(\exists_{f}(\alpha)\leq\beta\). Moreover, we know that \(\alpha\leq\mathsf{P}_{f}(\exists_{f}(\alpha))\). Thus \(f:(A,\alpha)\to(B,\exists_{f}(\alpha))\) is a well-defined arrow which coequalizes \(g,h:(C,\gamma)\to(A,\alpha)\). This implies that \(\mathrm{id}_{B}:(B,\beta)\to(B,\exists_{f}(\alpha))\) must be a well-defined arrow in \(\Gamma[\mathsf{P}]\). Thus, \(\beta\leq\exists_{f}(\alpha)\). Hence we get \(\beta=\exists_{f}(\alpha)\).

Conversely, let \(f:A\to B\) be a regular epi in \(\mathsf{Set}\). Then it is the coequalizer in \(\mathsf{Set}\) of two arrows \(g,h:C\to A\). Since \(\alpha\leq\mathsf{P}_{f}\exists_{f}(\alpha)\), the arrow \(f:(A,\alpha)\to(B,\exists_{f}(\alpha))\) is well-defined. It is immediate to verify that this arrow is a coequalizer for the arrows \(g,h:(C,\mathsf{P}_{g}(\alpha)\wedge\mathsf{P}_{h}(\alpha))\to(A,\alpha)\). 

**Proposition 5.3**.: _Let \(\mathsf{P}\colon\mathsf{Set}^{\mathrm{op}}\longrightarrow\mathsf{Hey}\) be an implicative tripos. Then \(\Gamma[\mathsf{P}]\) is regular._

Proof.: We start by showing that \(\Gamma[\mathsf{P}]\) has all finite limits. First, it is direct to check that \((1,\top_{1})\) is a terminal object in \(\Gamma[\mathsf{P}]\). A product of \((A,\alpha)\) and \((B,\beta)\) in \(\Gamma[\mathsf{P}]\) is given by \((A\times B,\mathsf{P}_{\pi_{1}}(\alpha)\wedge\mathsf{P}_{\pi_{2}}(\beta))\) together with the projections \(\pi_{1}\) and \(\pi_{2}\), while an equalizer of two parallel arrows \(f,g:(A,\alpha)\to(B,\beta)\) in \(\Gamma[\mathsf{P}]\) is given by \((E,\mathsf{P}_{e}(\alpha))\) where \(e:E\to A\) is an equalizer of \(f,g\) in \(\mathsf{Set}\).

Now we show that \(\Gamma[\mathsf{P}]\) is regular. Let \(f:(A,\alpha)\to(B,\beta)\) be an arrow in \(\Gamma[\mathsf{P}]\). We can factorize \(f:A\to B\) in \(\mathsf{Set}\) as a regular epi \(r:A\to R\) followed by a mono \(m:R\to B\). The arrow \(r:(A,\alpha)\to(R,\exists_{r}(\alpha))\) is a regular epi in \(\Gamma[\mathsf{P}]\) by Proposition 5.2 and \(m:(R,\exists_{r}(\alpha))\to(B,\beta)\) is a mono. Such factorizations are unique up to isomorphism and pullback-stable in \(\Gamma[\mathsf{P}]\). Thus \(\Gamma[\mathsf{P}]\) is a regular category. 

**Example 5.4**.: Let us consider an implicative tripos \(\mathsf{P}\colon\mathsf{Set}^{\mathrm{op}}\longrightarrow\mathsf{Hey}\) for a complete Heyting algebra \(\mathbb{H}\). Then the Grothendieck category \(\Gamma[\mathsf{P}]\) is the coproduct completion \(\mathbb{H}_{+}\) of the category \(\mathbb{H}\). This fact was observed in [16].

### Partitioned assemblies

The main purpose of this section is to generalize the notion of _category of partitioned assemblies_ associated with a PCA to implicative algebras and implicative triposes. Let us recall that given a (partial) combinatory algebra \((R,\cdot)\) the category of **partitioned assemblies** is defined as follows:
* an object is a pair \((X,\varphi)\) where \(X\) is a set and \(\varphi\colon X\to R\) is a function from \(X\) to the PCA;
* a morphism \(f\colon(X,\varphi)\to(Y,\psi)\) is a function \(f\colon X\to Y\) such that there exists an element \(a\in R\) with \(a\cdot\varphi(x)=\psi(f(x))\) for every \(x\in X\), i.e. \(f\) is _tracked_ by the element \(a\).
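For instance (a standard example, here only sketched), over Kleene's first algebra \(K_{1}\) the natural numbers give a partitioned assembly
\[N:=(\mathbb{N},\varphi),\qquad\varphi(n):=n,\]
and the successor function \(n\mapsto n+1\) is a morphism \(N\to N\), tracked by any element \(a\) with \(a\cdot n\simeq n+1\) for every \(n\); more generally, a function \(f\colon\mathbb{N}\to\mathbb{N}\) is a morphism \(N\to N\) precisely when it is computable.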
It is proved in [17; 16] that the category of partitioned assemblies can be completely defined in terms of realizability triposes and full existential free elements, namely it is the full subcategory of the Grothendieck category \(\Gamma[\mathsf{P}]\) on those objects whose second component is a full existential free element. Therefore Remark 4.13 suggests that the following definition provides a natural generalization of the ordinary notion of category of partitioned assemblies to an arbitrary implicative tripos:

**Definition 5.5** (partitioned assemblies).: Let \(\mathsf{P}\colon\mathsf{Set}^{\mathrm{op}}\longrightarrow\mathsf{Hey}\) be an implicative tripos. We define the category of **partitioned assemblies** \(\mathbf{PAsm}_{\mathsf{P}}\) of \(\mathsf{P}\) as the full sub-category of \(\Gamma[\mathsf{P}]\) given by the objects of \(\Gamma[\mathsf{P}]\) whose second component is a \((\mathbf{U}\text{-}\mathbf{SK_{p}})\) predicate of \(\mathsf{P}\).

Notice that in general the category of partitioned assemblies of an implicative tripos need not have finite limits. This is due to the fact that, in general, \((\mathbf{U}\text{-}\mathbf{SK_{p}})\) predicates are not closed under finite meets. In fact, combining the stability under reindexing of \((\mathbf{U}\text{-}\mathbf{SK_{p}})\) predicates with the definition of finite limits in \(\Gamma[\mathsf{P}]\) (see Proposition 5.3), it is straightforward to check that:

**Lemma 5.6**.: _The category \(\mathbf{PAsm}_{\mathsf{P}}\) is a lex subcategory of \(\Gamma[\mathsf{P}]\) if and only if \((\mathbf{U}\text{-}\mathbf{SK_{p}})\) predicates are closed under finite meets, that is if and only if \((a_{i}\times b_{i})_{i\in I}\) is \(\mathbf{U}\text{-}\mathbf{SK}\) for every pair of \(\mathbf{U}\text{-}\mathbf{SK}\) families \((a_{i})_{i\in I}\) and \((b_{i})_{i\in I}\)._

**Example 5.7**.: In the case of realizability triposes the category defined in Definition 5.5 coincides with the ordinary category of partitioned assemblies.

**Example 5.8**.: Let \(\mathsf{P}\colon\mathsf{Set}^{\mathrm{op}}\longrightarrow\mathsf{Hey}\) be an implicative tripos associated with a complete Heyting algebra. Then, we have that an object of \(\mathbf{PAsm}_{\mathsf{P}}\) is a pair \((X,\varphi)\) such that every element \(\varphi(x)\) is supercompact in the sense of [1].

**Remark 5.9**.: A nice intrinsic characterization of categories which are equivalent to a category of partitioned assemblies for a PCA is presented in [9, Thm. 3.8], where the author proves that a category is equivalent to partitioned assemblies over a PCA if and only if it is w.l.c.c. and well-pointed local, and has a discrete generic object. This intrinsic description, combined with the notion of category of partitioned assemblies for an implicative algebra, offers a useful tool to identify implicative algebras whose category of partitioned assemblies happens to be equivalent to the category of partitioned assemblies for a PCA. Again, these considerations can be extended to the case of categories of assemblies for implicative algebras, which we will define in the next section, taking advantage of the intrinsic description of the regular completion of a lex category [4].
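Returning to Example 5.8, here is a small worked case (a sketch, assuming the special case of a powerset locale \(\mathbb{H}=\mathcal{P}(S)\) with \(\Sigma=\{\top\}\)): the supercompact elements of \(\mathcal{P}(S)\) are exactly the atoms \(\{s\}\), so an object of \(\mathbf{PAsm}_{\mathsf{P}}\) amounts to a function \(\varphi\colon X\to S\), and a morphism \((X,\varphi)\to(Y,\psi)\) to a function \(f\colon X\to Y\) with \(\psi\circ f=\varphi\) (an inclusion of atoms being an equality). Hence in this case
\[\mathbf{PAsm}_{\mathsf{P}}\simeq\mathsf{Set}/S.\]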
### Assemblies

The main purpose of this section is to generalize the notion of _category of assemblies_ associated with a PCA to implicative algebras and implicative triposes. Let us recall (see for example [29]) that given a partial combinatory algebra \((R,\cdot)\) the category of **assemblies** is defined as follows:
* an object is a pair \((X,\varphi)\) where \(X\) is a set and \(\varphi\colon X\to\mathscr{P}^{*}(R)\) is a function from \(X\) to the non-empty powerset of the PCA;
* a morphism \(f\colon(X,\varphi)\to(Y,\psi)\) is a function \(f\colon X\to Y\) such that there exists an element \(a\in R\) with \(a\cdot\varphi(x)\subseteq\psi(f(x))\) for every \(x\in X\).

Notice that, by the results presented in Section 3.6, we have that the category of assemblies can be described as the full subcategory of \(\Gamma[\mathsf{P}]\) associated with the realizability tripos, whose objects are given by \((X,\varphi)\) where \(\varphi\) enjoys the property of being \((\mathbf{U}\text{-}\mathbf{fSK_{p}})\), or equivalently (by Corollary 4.12), of being \((\mathbf{U}\text{-}\mathbf{wfSK_{p}})\). This correspondence between assemblies and \((\mathbf{U}\text{-}\mathbf{fSK_{p}})\) predicates of realizability triposes suggests the following abstraction of the notion of assemblies:

**Definition 5.10** (Assemblies).: Let \(\mathsf{P}\colon\mathsf{Set}^{\mathrm{op}}\longrightarrow\mathsf{Hey}\) be an implicative tripos. We define the category of **assemblies** \(\mathbf{Asm}_{\mathsf{P}}\) of \(\mathsf{P}\) as the full sub-category of \(\Gamma[\mathsf{P}]\) given by the objects of \(\Gamma[\mathsf{P}]\) whose second component is a \((\mathbf{U}\text{-}\mathbf{fSK_{p}})\) predicate of \(\mathsf{P}\).

Hence we have the following inclusions of categories:
\[\mathbf{PAsm}_{\mathsf{P}}\hookrightarrow\mathbf{Asm}_{\mathsf{P}}\hookrightarrow\Gamma[\mathsf{P}]\]

As in the case of partitioned assemblies, the category of assemblies of an implicative tripos need not be lex or regular, since \((\mathbf{U}\text{-}\mathbf{fSK_{p}})\) predicates are not closed under finite meets in general.

**Proposition 5.11**.: _The category \(\mathbf{Asm_{P}}\) is a regular subcategory of \(\Gamma[\mathsf{P}]\) if and only if:_
* \((\mathbf{U}\text{-}\mathbf{fSK_{p}})\) _predicates are closed under finite meets;_
* \((\mathbf{U}\text{-}\mathbf{fSK_{p}})\) _predicates are stable under existential quantifiers along regular epis, i.e. for every_ \((\mathbf{U}\text{-}\mathbf{fSK_{p}})\) _predicate_ \(\varphi\) _and_ \(r\) _regular epi of_ \(\mathsf{Set}\) _we have that_ \(\exists_{r}(\varphi)\) _is_ \((\mathbf{U}\text{-}\mathbf{fSK_{p}})\)_._

Proof.: As in the case of partitioned assemblies, we have that \(\mathbf{Asm_{P}}\) is a lex sub-category of \(\Gamma[\mathsf{P}]\) if and only if \((\mathbf{U}\text{-}\mathbf{fSK_{p}})\) predicates are closed under finite meets. To conclude the proof, it is enough to observe that the factorization system of \(\Gamma[\mathsf{P}]\) induces a factorization system on \(\mathbf{Asm_{P}}\) if and only if \((\mathbf{U}\text{-}\mathbf{fSK_{p}})\) predicates are stable under existential quantifiers along regular epis. But this follows by the explicit description of the factorization system of \(\Gamma[\mathsf{P}]\), i.e. we have that an arrow \(f\colon(A,\alpha)\to(B,\beta)\) can be written as \(f=m\circ r\) with \(r\) regular epi and \(m\) mono. 
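In the realizability case, for instance, both conditions of Proposition 5.11 can be checked directly (a sketch): up to equivalence, the \((\mathbf{U}\text{-}\mathbf{fSK_{p}})\) predicates are those \(\varphi\) with \(\varphi(x)\neq\emptyset\) for every \(x\) (see Section 3.6); finite meets are computed by the implicative product, and \(A\times B\) is non-empty whenever \(A\) and \(B\) are, since it contains \(\lambda z.z\,a\,b\) for any \(a\in A\) and \(b\in B\); and for a surjection \(r\) one has
\[\exists_{r}(\varphi)(y)\equiv\bigcup_{r(x)=y}\varphi(x),\]
which is again non-empty valued. This recovers the well-known fact that the ordinary category of assemblies is regular.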
**Example 5.12**.: When \(\mathsf{P}\colon\mathsf{Set}^{\mathrm{op}}\longrightarrow\mathsf{Hey}\) is a realizability tripos, the category \(\mathbf{Asm_{P}}\) coincides with the ordinary category of assemblies for a CA, as described in [29].

**Example 5.13**.: When \(\mathsf{P}\colon\mathsf{Set}^{\mathrm{op}}\longrightarrow\mathsf{Hey}\) is an implicative tripos for nested realizability, the category \(\mathbf{Asm_{P}}\) coincides with the category of assemblies for nested realizability, as described in [18, Sec. 1.3].

**Example 5.14**.: When \(\mathsf{P}\colon\mathsf{Set}^{\mathrm{op}}\longrightarrow\mathsf{Hey}\) is an implicative tripos for a complete Heyting algebra, we have that the category \(\mathbf{Asm_{P}}\) is given by the objects \((I,\varphi)\) where, for every \(i\in I\), \(\varphi(i)\) is indecomposable, see Section 3.6.

**Example 5.15**.: When \(\mathsf{P}\colon\mathsf{Set}^{\mathrm{op}}\longrightarrow\mathsf{Hey}\) is an implicative tripos for a complete Boolean algebra, we have that \(\mathbf{Asm_{P}}\cong\mathbf{PAsm_{P}}\), since in every complete Boolean algebra we have that \((\mathbf{U}\text{-}\mathbf{fSK_{p}})\equiv(\mathbf{U}\text{-}\mathbf{SK_{p}})\), see Section 3.5.

### Regular completion of implicative triposes

It is a well-known result (see [23; 15; 14; 29]) that every topos \(\mathcal{C}[\mathsf{P}]\) obtained as the result of the tripos-to-topos construction from a given tripos \(\mathsf{P}\) can be presented as the ex/reg-completion (according to [6])
\[\mathcal{C}[\mathsf{P}]\simeq(\mathbf{Reg_{P}})_{\mathsf{ex/reg}}\]
of a certain regular category, that we denote by \(\mathbf{Reg}_{\mathsf{P}}\), constructed from the tripos \(\mathsf{P}\). By the universal property of the \((-)_{\mathsf{ex}/\mathsf{reg}}\) completion the canonical embedding
\[\mathbf{y}\colon\mathbf{Reg}_{\mathsf{P}}\to\mathcal{C}[\mathsf{P}]\]
is a full and faithful regular functor. The universal properties of the category \(\mathbf{Reg}_{\mathsf{P}}\) (which is called \(\mathsf{Ass}_{\mathcal{C}}(\mathsf{P})\) in [29]) are analysed in detail in [15, 14], where it is proved that such a category enjoys the property of being the _regular completion_ of \(\mathsf{P}\). In the following definition, we recall an explicit description of such a category (in the case of \(\mathsf{Set}\)-based triposes):

**Definition 5.16**.: Let \(\mathsf{P}:\mathsf{Set}^{op}\to\mathsf{Hey}\) be an implicative tripos. We define the category \(\mathbf{Reg}_{\mathsf{P}}\) as follows:
* the **objects** of \(\mathbf{Reg}_{\mathsf{P}}\) are pairs \((A,\alpha)\), where \(A\) is a set and \(\alpha\) is an element of \(P(A)\);
* an **arrow** of \(\mathbf{Reg}_{\mathsf{P}}\) from \((A,\alpha)\) to \((B,\beta)\) is given by an element \(\phi\) of \(P(A\times B)\) such that:
1. \(\phi\leq P_{\pi_{1}}(\alpha)\wedge P_{\pi_{2}}(\beta)\);
2. \(\alpha\leq\exists_{\pi_{1}}(\phi)\);
3. \(P_{\langle\pi_{1},\pi_{2}\rangle}(\phi)\wedge P_{\langle\pi_{1},\pi_{3}\rangle}(\phi)\leq P_{\langle\pi_{2},\pi_{3}\rangle}(\delta_{B})\).

The composition of morphisms in \(\mathbf{Reg}_{\mathsf{P}}\) is given by the usual _relational composition_: the composition of \(\phi\colon(A,\alpha)\to(B,\beta)\) and \(\psi\colon(B,\beta)\to(C,\gamma)\) is given by
\[\exists_{\langle\pi_{1},\pi_{3}\rangle}(P_{\langle\pi_{1},\pi_{2}\rangle}(\phi)\wedge P_{\langle\pi_{2},\pi_{3}\rangle}(\psi))\]
where \(\pi_{i}\) for \(i=1,2,3\) are the projections from \(A\times B\times C\).

**Remark 5.17**.: Notice that, from 1. and 2.
in the definition, every arrow \(\phi:(A,\alpha)\to(B,\beta)\) of \(\mathbf{Reg}_{\mathsf{P}}\) satisfies \(\alpha=\exists_{\pi_{1}}(\phi)\).

**Remark 5.18**.: Notice that one can always define a finite-limit preserving functor as follows
\[\mathbf{F}:\Gamma[\mathsf{P}]\to\mathbf{Reg}_{\mathsf{P}}\]
\[(A,\alpha)\mapsto(A,\alpha)\]
\[f\mapsto\exists_{\langle\mathrm{id}_{A},f\rangle}(\alpha)\]
for every arrow \(f:(A,\alpha)\to(B,\beta)\). It is straightforward to check that this functor is well-defined. Indeed, for every arrow \(f:(A,\alpha)\to(B,\beta)\) in \(\Gamma[\mathsf{P}]\), \(\exists_{\langle\mathrm{id}_{A},f\rangle}(\alpha)\) satisfies condition 1. since \(\alpha\leq\mathsf{P}_{f}(\beta)\), and it satisfies 2. by its very definition; moreover it satisfies also condition 3., as one can easily see by using the adjunctions and Heyting implications appropriately, and exploiting BCC and FR. Identities \(\mathrm{id}_{(A,\alpha)}\) are sent to \(\exists_{\Delta_{A}}(\alpha)\), that is to identities in \(\mathbf{Reg}_{\mathsf{P}}\). Finally, the composition is preserved, as one can prove using FR and BCC.

The functor \(\mathbf{F}\) is not faithful. Indeed, for every pair of sets \(A,B\), we have that \(\mathsf{Hom}_{\Gamma[\mathsf{P}]}((A,\bot),(B,\top))=\mathsf{Hom}_{\mathsf{Set}}(A,B)\), while \(\mathsf{Hom}_{\mathbf{Reg}_{\mathsf{P}}}((A,\bot),(B,\top))\simeq\{\star\}\). If \(A\) is non-empty and \(B\) has at least two elements, we have that
\[\mathsf{Hom}_{\Gamma[\mathsf{P}]}((A,\bot),(B,\top))\not\simeq\mathsf{Hom}_{\mathbf{Reg}_{\mathsf{P}}}((A,\bot),(B,\top))\]
In general, \(\mathbf{F}\) is not full either. E.g. consider the case of the tripos induced by the Boolean algebra with four elements \(\{\bot,a,\neg a,\top\}\) and the arrow \(\phi:(\{0\},\top)\to(\{0,1\},\top)\) defined by
\[\phi(0,x)=\begin{cases}a\text{ if }x=0\\ \neg a\text{ if }x=1\end{cases}\]
This is a well-defined arrow in \(\mathbf{Reg}_{\mathsf{P}}\) which, however, is not of the form \(\mathbf{F}(f)\) for any \(f:\{0\}\to\{0,1\}\). Notice that in general the category \(\mathbf{Reg}_{\mathsf{P}}\) is not (equivalent to) a full subcategory of the Grothendieck category \(\Gamma[\mathsf{P}]\), since morphisms of \(\mathbf{Reg}_{\mathsf{P}}\) may not arise from morphisms of the base category.

**Lemma 5.19**.: _A morphism \(\phi\colon(A,\alpha)\to(B,\beta)\) in \(\mathbf{Reg}_{\mathsf{P}}\) is a regular epi if and only if \(\beta=\exists_{\pi_{2}}(\phi)\)._

Proof.: We know that the embedding of \(\mathbf{Reg}_{\mathsf{P}}\) into \(\mathsf{Set}[\mathsf{P}]\simeq(\mathbf{Reg}_{\mathsf{P}})_{\mathsf{ex}/\mathsf{reg}}\) preserves regular epis, since it is a regular functor. Moreover, since the embedding preserves finite limits and every regular epi in \(\mathsf{Set}[\mathsf{P}]\) is a coequalizer of its kernel pair, we can conclude that the embedding also reflects regular epis. Thus an arrow in \(\mathbf{Reg}_{\mathsf{P}}\) is a regular epi if and only if it is a regular epi in \(\mathsf{Set}[\mathsf{P}]\). We know that in \(\mathsf{Set}[\mathsf{P}]\) every epi is regular since it is a topos. Thus using the characterization of epis of \(\mathsf{Set}[\mathsf{P}]\) in [23], we can conclude. 

**Remark 5.20**.: Notice that the functor \(\mathbf{F}:\Gamma[\mathsf{P}]\to\mathbf{Reg}_{\mathsf{P}}\) preserves regular epis. In particular, the functor \(\mathbf{F}\) is regular. This follows immediately from Proposition 5.2, Lemma 5.19 and Proposition 5.3.
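To make the functor \(\mathbf{F}\) concrete, note for instance (a sketch in the realizability case) that \(\exists\) along the injective map \(\langle\mathrm{id}_{A},f\rangle\) is computed, up to equivalence, by unions over fibers, so that \(\mathbf{F}(f)\) is simply the graph predicate of \(f\):
\[\mathbf{F}(f)(a,b)=\exists_{\langle\mathrm{id}_{A},f\rangle}(\alpha)(a,b)\equiv\begin{cases}\alpha(a)&\text{if }b=f(a)\\ \emptyset&\text{otherwise.}\end{cases}\]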
**Remark 5.21**.: Notice that the category \(\mathbf{Reg}_{\mathsf{P}}\) of a tripos \(\mathsf{P}\) can also be described by means of the constant object functor as the full subcategory of \(\mathsf{Set}[\mathsf{P}]\) of subobjects of constant objects \(\Delta(A)\) for some object \(A\) of \(\mathsf{Set}\) (for the definition of the constant object functor \(\Delta:\mathsf{Set}\to\mathsf{Set}[\mathsf{P}]\) see e.g. [29]). **Remark 5.22**.: Notice that in \(\mathbf{Reg}_{\mathsf{P}}\), as observed in [23], for parallel arrows \(\phi,\psi:(A,\alpha)\to(B,\beta)\), we have that \(\phi\leq\psi\) if and only if \(\phi=\psi\). Here we sketch a proof that if \(\phi\leq\psi\) then \(\psi\leq\phi\). By Remark 5.17 we have that \(\exists_{\pi_{1}}(\phi)=\alpha=\exists_{\pi_{1}}(\psi)\); in particular, \(\exists_{\pi_{1}}(\psi)\leq\exists_{\pi_{1}}(\phi)\), and hence \(\psi\leq\mathsf{P}_{\pi_{1}}\exists_{\pi_{1}}(\phi)\). By BCC we have that \(\psi\leq\exists_{\langle\pi_{1},\pi_{2}\rangle}\mathsf{P}_{\langle\pi_{1},\pi_{3}\rangle}(\phi)\), where \(\pi_{i}\) for \(i=1,2,3\) are the projections from \(A\times B\times B\), which is the pullback of \(\pi_{1}\colon A\times B\to A\) along itself. In particular, we have that \(\psi=\exists_{\langle\pi_{1},\pi_{2}\rangle}\mathsf{P}_{\langle\pi_{1},\pi_{3}\rangle}(\phi)\wedge\psi\) and, by FR, we have that \[\psi=\exists_{\langle\pi_{1},\pi_{2}\rangle}(\mathsf{P}_{\langle\pi_{1},\pi_{3}\rangle}(\phi)\wedge\mathsf{P}_{\langle\pi_{1},\pi_{2}\rangle}(\psi)).\] But since \(\phi\leq\psi\), we have that \[\exists_{\langle\pi_{1},\pi_{2}\rangle}(\mathsf{P}_{\langle\pi_{1},\pi_{3}\rangle}(\phi)\wedge\mathsf{P}_{\langle\pi_{1},\pi_{2}\rangle}(\psi))=\exists_{\langle\pi_{1},\pi_{2}\rangle}(\mathsf{P}_{\langle\pi_{1},\pi_{3}\rangle}(\phi)\wedge\mathsf{P}_{\langle\pi_{1},\pi_{3}\rangle}(\psi)\wedge\mathsf{P}_{\langle\pi_{1},\pi_{2}\rangle}(\psi)).\] Since \(\psi\) is functional, i.e. \(\mathsf{P}_{\langle\pi_{1},\pi_{3}\rangle}(\psi)\wedge\mathsf{P}_{\langle\pi_{1},\pi_{2}\rangle}(\psi)\leq\mathsf{P}_{\langle\pi_{2},\pi_{3}\rangle}(\delta_{B})\), we have that \[\psi\leq\exists_{\langle\pi_{1},\pi_{2}\rangle}(\mathsf{P}_{\langle\pi_{1},\pi_{3}\rangle}(\phi)\wedge\mathsf{P}_{\langle\pi_{2},\pi_{3}\rangle}(\delta_{B})).\] Employing the fact that \(\delta_{B}=\exists_{\Delta_{B}}(\top_{B})\), BCC and FR, it is straightforward to check that \[\phi=\exists_{\langle\pi_{1},\pi_{2}\rangle}(\mathsf{P}_{\langle\pi_{1},\pi_{3}\rangle}(\phi)\wedge\mathsf{P}_{\langle\pi_{2},\pi_{3}\rangle}(\delta_{B})).\] Therefore we can conclude that \(\psi\leq\phi\), and hence that \(\psi=\phi\) (since \(\phi\leq\psi\) by hypothesis). #### 5.4.1 The subcategory of trackable objects Let \(\mathsf{P}\colon\mathsf{Set}^{\mathrm{op}}\to\mathsf{Hey}\) be a fixed implicative tripos for the rest of this section. **Definition 5.23** (trackable morphism).: Let \(\phi\colon(A,\alpha)\to(B,\beta)\) be a morphism of \(\mathbf{Reg}_{\mathsf{P}}\). We say that \(\phi\) is **trackable** if there exists a morphism \(f_{\phi}\colon A\to B\) of the base category such that \(\alpha\leq\mathsf{P}_{\langle\mathrm{id}_{A},f_{\phi}\rangle}(\phi)\). **Definition 5.24** (trackable object).: An object \((A,\alpha)\) of \(\mathbf{Reg}_{\mathsf{P}}\) is said to be a **trackable object** if every morphism \(\phi\colon(A,\alpha)\to(B,\beta)\) of \(\mathbf{Reg}_{\mathsf{P}}\) is trackable. We denote by \(\mathbf{Track}_{\mathsf{P}}\) the full subcategory of \(\mathbf{Reg}_{\mathsf{P}}\) whose objects are the trackable objects.
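Let us note a first stock of examples: every arrow in the image of the functor \(\mathbf{F}\) of Remark 5.18 is trackable, and a tracker is given by the underlying function itself. Indeed, if \(\phi=\mathbf{F}(f)=\exists_{\langle\mathrm{id}_{A},f\rangle}(\alpha)\), then the unit of the adjunction \(\exists_{\langle\mathrm{id}_{A},f\rangle}\dashv\mathsf{P}_{\langle\mathrm{id}_{A},f\rangle}\) yields \[\alpha\leq\mathsf{P}_{\langle\mathrm{id}_{A},f\rangle}\exists_{\langle\mathrm{id}_{A},f\rangle}(\alpha)=\mathsf{P}_{\langle\mathrm{id}_{A},f\rangle}(\phi),\] so \(f\) tracks \(\phi\). Hence trackability of an object \((A,\alpha)\) is a constraint only on those arrows out of \((A,\alpha)\) that do not come from the base category.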
**Remark 5.25**.: Notice that for \(\phi\colon(A,\alpha)\to(B,\beta)\) in \(\mathbf{Reg}_{\mathsf{P}}\), if \(\alpha\leq\mathsf{P}_{\langle\mathrm{id}_{A},f_{\phi}\rangle}(\phi)\), then \(\alpha=\mathsf{P}_{\langle\mathrm{id}_{A},f_{\phi}\rangle}(\phi)\), since the opposite inequality follows from \(\phi\leq\mathsf{P}_{\pi_{1}}(\alpha)\). Notice moreover that when a morphism \(\phi\colon(A,\alpha)\to(B,\beta)\) of \(\mathbf{Reg}_{\mathsf{P}}\) is trackable, the arrow \(f_{\phi}\colon A\to B\) induces a well-defined arrow \(f_{\phi}\colon(A,\alpha)\to(B,\beta)\) in \(\Gamma[\mathsf{P}]\). In fact, by definition of arrows in \(\mathbf{Reg}_{\mathsf{P}}\) we have that \(\phi\leq\mathsf{P}_{\pi_{1}}(\alpha)\wedge\mathsf{P}_{\pi_{2}}(\beta)\), and then, by applying \(\mathsf{P}_{\langle\mathrm{id}_{A},f_{\phi}\rangle}\), we have that \[\alpha\leq\mathsf{P}_{\langle\mathrm{id}_{A},f_{\phi}\rangle}(\phi)\leq\alpha\wedge\mathsf{P}_{f_{\phi}}(\beta)\] and then we can conclude that \(\alpha\leq\mathsf{P}_{f_{\phi}}(\beta)\). We can also notice that \(\mathbf{F}(f_{\phi})=\exists_{\langle\mathrm{id}_{A},f_{\phi}\rangle}(\alpha)\leq\phi\), by adjunction. Since both \(\mathbf{F}(f_{\phi})\) and \(\phi\) are arrows in \(\mathbf{Reg}_{\mathsf{P}}\) from \((A,\alpha)\) to \((B,\beta)\), we conclude by Remark 5.22 that they are in fact equal. Now we can employ the notions introduced in Definition 4.8 to easily characterize the category of trackable objects of an implicative tripos: **Proposition 5.26**.: _Let \(\mathsf{P}\colon\mathsf{Set}^{\mathrm{op}}\longrightarrow\mathsf{Hey}\) be an implicative tripos. Then an object \((A,\alpha)\) of \(\mathbf{Reg}_{\mathsf{P}}\) is trackable if and only if \(\alpha\) is a \((\mathbf{wfSK}_{\mathsf{P}})\) predicate of \(\mathsf{P}\)._ #### 5.4.2 The subcategory of strongly trackable objects **Definition 5.27** (strongly trackable morphism).: Let \(\phi\colon(A,\alpha)\to(B,\beta)\) be a morphism of \(\mathbf{Reg}_{\mathsf{P}}\). We say that \(\phi\) is **strongly trackable** if there exists a unique morphism \(f_{\phi}\colon A\to B\) of the base category such that \(\alpha\leq\mathsf{P}_{\langle\mathrm{id}_{A},f_{\phi}\rangle}(\phi)\). **Definition 5.28**.: An object \((A,\alpha)\) of \(\mathbf{Reg}_{\mathsf{P}}\) is said to be a **strongly trackable object** if every morphism \(\phi\colon(A,\alpha)\to(B,\beta)\) of \(\mathbf{Reg}_{\mathsf{P}}\) is strongly trackable. We denote by \(\mathbf{STrack}_{\mathsf{P}}\) the full subcategory of \(\mathbf{Reg}_{\mathsf{P}}\) whose objects are the strongly trackable objects. Employing the notions introduced in Definition 4.8 we can easily characterize the category of strongly trackable objects of an implicative tripos: **Proposition 5.29**.: _Let \(\mathsf{P}\colon\mathsf{Set}^{\mathrm{op}}\longrightarrow\mathsf{Hey}\) be an implicative tripos. Then an object \((A,\alpha)\) of \(\mathbf{Reg}_{\mathsf{P}}\) is strongly trackable if and only if \(\alpha\) is a \((\mathbf{fSK}_{\mathsf{P}})\) predicate of \(\mathsf{P}\)._ Hence, for every implicative tripos, we have the chain of full subcategories \[\mathbf{STrack}_{\mathsf{P}}\hookrightarrow\mathbf{Track}_{\mathsf{P}}\hookrightarrow\mathbf{Reg}_{\mathsf{P}}.\] ### Category of (regular) projective strongly trackable objects In the previous sections we have seen that the notions of \((\mathbf{fSK}_{\mathsf{P}})\) and \((\mathbf{wfSK}_{\mathsf{P}})\) predicates have a clear interpretation in terms of trackable objects of the regular completion of an implicative tripos.
The main purpose of this section is to show that the notion of \((\mathbf{SK}_{\mathsf{P}})\) predicates corresponds exactly to those strongly trackable objects of \(\mathbf{Reg}_{\mathsf{P}}\) that are _regular projectives_. **Definition 5.30**.: We denote by \(\mathbf{Pr}\text{-}\mathbf{STrack}_{\mathsf{P}}\) the full subcategory of \(\mathbf{Reg}_{\mathsf{P}}\) whose objects are strongly trackable and regular projective. **Proposition 5.31**.: _Every object \((A,\alpha)\) where \(\alpha\) is \((\mathbf{SK}_{\mathsf{P}})\) is regular projective in \(\mathbf{Reg}_{\mathsf{P}}\)._ Proof.: Let us consider a morphism \(\phi_{1}\colon(A,\alpha)\to(B,\beta)\) and a regular epi \(\phi_{2}\colon(C,\gamma)\to(B,\beta)\) in \(\mathbf{Reg}_{\mathsf{P}}\), i.e. \(\beta\leq\exists_{\pi_{2}}(\phi_{2})\), with \(\alpha\) a \((\mathbf{SK}_{\mathsf{P}})\) predicate. We have to show that there exists a morphism \(\phi_{3}\colon(A,\alpha)\to(C,\gamma)\) such that \(\phi_{2}\circ\phi_{3}=\phi_{1}\). By Proposition 5.29, we know that \(\phi_{1}\) is trackable, i.e. there exists an arrow \(f_{\phi_{1}}\colon A\to B\) such that \(\alpha=\mathsf{P}_{\langle\mathrm{id}_{A},f_{\phi_{1}}\rangle}(\phi_{1})\). By Remark 5.25, the trackability of \(\phi_{1}\) implies that \(\alpha\leq\mathsf{P}_{f_{\phi_{1}}}(\beta)\). Thus, we have that \[\alpha\leq\mathsf{P}_{f_{\phi_{1}}}(\beta)\leq\mathsf{P}_{f_{\phi_{1}}}\exists_{\pi_{2}}(\phi_{2}). \tag{5}\] By the Beck-Chevalley condition, (5) implies that \[\alpha\leq\exists_{\pi_{2}}(\mathsf{P}_{\mathrm{id}_{C}\times f_{\phi_{1}}}(\phi_{2})). \tag{6}\] Since \(\alpha\) is \((\mathbf{SK}_{\mathsf{P}})\), there exists an arrow \(h\colon A\to C\) such that \[\alpha\leq\mathsf{P}_{\langle h,\mathrm{id}_{A}\rangle}(\mathsf{P}_{\mathrm{id}_{C}\times f_{\phi_{1}}}(\phi_{2}))=\mathsf{P}_{\langle h,f_{\phi_{1}}\rangle}(\phi_{2}) \tag{7}\] Now we claim that \(\phi_{3}:=\exists_{\langle\mathrm{id}_{A},h\rangle}(\alpha)\) is a morphism of \(\mathbf{Reg}_{\mathsf{P}}\). Notice that it is enough to prove that \(\alpha\leq\mathsf{P}_{h}(\gamma)\), because if \(h\colon(A,\alpha)\to(C,\gamma)\) is a morphism in the Grothendieck category \(\Gamma[\mathsf{P}]\) then \(\exists_{\langle\mathrm{id}_{A},h\rangle}(\alpha)\) is a morphism of \(\mathbf{Reg}_{\mathsf{P}}\) from \((A,\alpha)\) to \((C,\gamma)\). Recall that since \(\phi_{2}\) is a morphism of \(\mathbf{Reg}_{\mathsf{P}}\) we have, in particular, that \(\phi_{2}\leq\mathsf{P}_{\pi_{1}}(\gamma)\), and then we can combine this with (7) to conclude that \[\alpha\leq\mathsf{P}_{\langle h,f_{\phi_{1}}\rangle}(\phi_{2})\leq\mathsf{P}_{\langle h,f_{\phi_{1}}\rangle}(\mathsf{P}_{\pi_{1}}(\gamma))=\mathsf{P}_{h}(\gamma).\] Finally, let us check that \(\phi_{2}\circ\phi_{3}=\phi_{1}\).
Recall that the composition \(\phi_{2}\circ\phi_{3}\) in \(\mathbf{Reg}_{\mathsf{P}}\) is given by \[\exists_{\langle\pi_{1},\pi_{3}\rangle}(\mathsf{P}_{\langle\pi_{1},\pi_{2}\rangle}(\phi_{3})\wedge\mathsf{P}_{\langle\pi_{2},\pi_{3}\rangle}(\phi_{2})) \tag{8}\] Since \(\phi_{3}=\exists_{\langle\mathrm{id}_{A},h\rangle}(\alpha)\) by definition, we have that \[\mathsf{P}_{\langle\pi_{1},\pi_{2}\rangle}(\phi_{3})\wedge\mathsf{P}_{\langle\pi_{2},\pi_{3}\rangle}(\phi_{2})=\mathsf{P}_{\langle\pi_{1},\pi_{2}\rangle}\exists_{\langle\mathrm{id}_{A},h\rangle}(\alpha)\wedge\mathsf{P}_{\langle\pi_{2},\pi_{3}\rangle}(\phi_{2})\] and by BCC, this is equal to \[\exists_{\langle\pi_{1},h\circ\pi_{1},\pi_{2}\rangle}\mathsf{P}_{\pi_{1}}(\alpha)\wedge\mathsf{P}_{\langle\pi_{2},\pi_{3}\rangle}(\phi_{2})\] Now we can apply FR, obtaining \[\exists_{\langle\pi_{1},h\circ\pi_{1},\pi_{2}\rangle}(\mathsf{P}_{\pi_{1}}(\alpha)\wedge\mathsf{P}_{\langle h\circ\pi_{1},\pi_{2}\rangle}(\phi_{2}))\] Therefore we have that (8) is equal to \[\mathsf{P}_{\pi_{1}}(\alpha)\wedge\mathsf{P}_{\langle h\circ\pi_{1},\pi_{2}\rangle}(\phi_{2})\] By Remark 5.22, to show that \(\phi_{2}\circ\phi_{3}=\phi_{1}\) it is enough to show that \(\phi_{1}\leq\phi_{2}\circ\phi_{3}\), i.e. that \[\phi_{1}\leq\mathsf{P}_{\pi_{1}}(\alpha)\wedge\mathsf{P}_{\langle h\circ\pi_{1},\pi_{2}\rangle}(\phi_{2}). \tag{9}\] First, \(\phi_{1}\leq\mathsf{P}_{\pi_{1}}(\alpha)\) since \(\phi_{1}\) is an arrow with domain \((A,\alpha)\) in \(\mathbf{Reg}_{\mathsf{P}}\). Now we show that \(\phi_{1}\leq\mathsf{P}_{\langle h\circ\pi_{1},\pi_{2}\rangle}(\phi_{2})\): using again the fact that \(\phi_{1}=\exists_{\langle\mathrm{id}_{A},f_{\phi_{1}}\rangle}(\alpha)\) (see Remark 5.25), we have that \[\phi_{1}\leq\mathsf{P}_{\langle h\circ\pi_{1},\pi_{2}\rangle}(\phi_{2})\iff\exists_{\langle h\circ\pi_{1},\pi_{2}\rangle}(\phi_{1})\leq\phi_{2}\iff\exists_{\langle h,f_{\phi_{1}}\rangle}(\alpha)\leq\phi_{2}\] and then we can conclude that \[\phi_{1}\leq\mathsf{P}_{\langle h\circ\pi_{1},\pi_{2}\rangle}(\phi_{2})\iff\alpha\leq\mathsf{P}_{\langle h,f_{\phi_{1}}\rangle}(\phi_{2}).\] Since \(\alpha\leq\mathsf{P}_{\langle h,f_{\phi_{1}}\rangle}(\phi_{2})\) holds by (7), we can conclude that \(\phi_{1}\leq\mathsf{P}_{\langle h\circ\pi_{1},\pi_{2}\rangle}(\phi_{2})\). This concludes the proof that (9) holds and then, by Remark 5.22, that \(\phi_{1}=\phi_{2}\circ\phi_{3}\). **Proposition 5.32**.: _Let \((A,\alpha)\) be a strongly trackable object of \(\mathbf{Reg}_{\mathsf{P}}\). If \((A,\alpha)\) is regular projective, then \(\alpha\) is \((\mathbf{SK}_{\mathsf{P}})\)._ Proof.: Let us suppose that \(\alpha\leq\exists_{f}(\beta)\) where \(f\colon B\to A\) is an arrow of the base category and \(\beta\in\mathsf{P}(B)\). Then, since \((A,\alpha)\) is regular projective, there exists an arrow \(\phi\colon(A,\alpha)\to(B,\beta)\) such that the identity \(\exists_{\Delta_{A}}(\alpha)\) of \((A,\alpha)\) factors as \(\exists_{\langle\mathrm{id}_{B},f\rangle}(\beta)\circ\phi\) in \(\mathbf{Reg}_{\mathsf{P}}\) (indeed notice that \(\exists_{\langle\mathrm{id}_{B},f\rangle}(\beta)\) is a regular epi because \(\exists_{\pi_{2}}\exists_{\langle\mathrm{id}_{B},f\rangle}(\beta)=\exists_{f}(\beta)\)). Since \((A,\alpha)\) is a strongly trackable object there exists a unique \(f_{\phi}:A\to B\) such that \(\phi=\exists_{\langle\mathrm{id}_{A},f_{\phi}\rangle}(\alpha)\).
Notice that since \(\phi\colon(A,\alpha)\to(B,\beta)\) is a morphism of \(\mathbf{Reg}_{\mathsf{P}}\) we have that \[\phi=\exists_{\langle\mathrm{id}_{A},f_{\phi}\rangle}(\alpha)\leq\mathsf{P}_{\pi_{1}}(\alpha)\wedge\mathsf{P}_{\pi_{2}}(\beta)\] hence \[\exists_{\langle\mathrm{id}_{A},f_{\phi}\rangle}(\alpha)\leq\mathsf{P}_{\pi_{2}}(\beta)\] and then we can conclude that \[\alpha\leq\mathsf{P}_{f_{\phi}}(\beta).\] Finally notice that the composition \(f\circ f_{\phi}:A\to A\) has to be equal to the identity \(\mathrm{id}_{A}\) on \(A\), because \(\exists_{\langle\mathrm{id}_{B},f\rangle}(\beta)\circ\exists_{\langle\mathrm{id}_{A},f_{\phi}\rangle}(\alpha)=\exists_{\Delta_{A}}(\alpha)\) since the previous diagram commutes. In fact, first notice that \[\exists_{\langle\mathrm{id}_{B},f\rangle}(\beta)\circ\exists_{\langle\mathrm{id}_{A},f_{\phi}\rangle}(\alpha)=\exists_{\langle\mathrm{id}_{A},f\circ f_{\phi}\rangle}(\alpha)\] because by the definition of the functor \(\mathbf{F}:\Gamma[\mathsf{P}]\to\mathbf{Reg}_{\mathsf{P}}\) we have that: \[\exists_{\langle\mathrm{id}_{B},f\rangle}(\beta)\circ\exists_{\langle\mathrm{id}_{A},f_{\phi}\rangle}(\alpha)=\mathbf{F}(f)\circ\mathbf{F}(f_{\phi})=\mathbf{F}(f\circ f_{\phi})=\exists_{\langle\mathrm{id}_{A},f\circ f_{\phi}\rangle}(\alpha)\] Now, since we have proved that \(\exists_{\langle\mathrm{id}_{A},f\circ f_{\phi}\rangle}(\alpha)=\exists_{\langle\mathrm{id}_{B},f\rangle}(\beta)\circ\exists_{\langle\mathrm{id}_{A},f_{\phi}\rangle}(\alpha)=\exists_{\Delta_{A}}(\alpha)\), we can use the uniqueness in the definition of strongly trackable morphism to conclude that \(f\circ f_{\phi}=\mathrm{id}_{A}\). This allows us to conclude that \(\alpha\) is \((\mathbf{SK}_{\mathsf{P}})\). As a corollary of the previous two propositions, we have that: **Corollary 5.33**.: _A strongly trackable object \((A,\alpha)\) is regular projective in \(\mathbf{Reg}_{\mathsf{P}}\) if and only if \(\alpha\) is \((\mathbf{SK}_{\mathsf{P}})\)._ Summarizing, we have the chain of full subcategories \[\mathbf{Pr}\text{-}\mathbf{STrack}_{\mathsf{P}}\hookrightarrow\mathbf{STrack}_{\mathsf{P}}\hookrightarrow\mathbf{Track}_{\mathsf{P}}\hookrightarrow\mathbf{Reg}_{\mathsf{P}}.\] ### A characterization of the categories of assemblies and regular completion It is well-known that in realizability the category of assemblies happens to be equivalent to the \(\mathsf{reg}/\mathsf{lex}\)-completion of its full subcategory of partitioned assemblies [4]. In this section, we investigate for which implicative algebras we can extend this equivalence. **Remark 5.34**.: Notice that, when we consider a tripos associated with a uniformly supercoherent implicative algebra, we have that, by Remark 3.49, \[\mathbf{PAsm_{P}}\equiv\mathbf{Pr}\text{-}\mathbf{STrack_{P}}\] **Theorem 5.35**.: _Let \(\mathsf{P}\colon\mathsf{Set}^{\mathrm{op}}\longrightarrow\mathsf{Hey}\) be an implicative tripos, for a given implicative algebra \(\mathbb{A}\). Then \(\mathbb{A}\) is uniformly supercoherent if and only if \(\mathbf{PAsm_{P}}\) is a lex (full) subcategory of \(\mathbf{Reg_{P}}\) and it provides a projective cover of \(\mathbf{Reg_{P}}\)._ Proof.: Let us suppose that \(\mathbb{A}\) is uniformly supercoherent (see Definition 3.45). Then, since \(\mathbf{U}\text{-}\mathbf{SK}\) predicates are closed under finite infima, we have that \(\mathbf{PAsm}_{\mathsf{P}}\) is a lex subcategory of \(\mathbf{Reg}_{\mathsf{P}}\); this is a consequence of Lemma 5.6 and its proof, of the proof of Proposition 5.3, and of the fact that partitioned assemblies are strongly trackable. Now we show that \(\mathbf{PAsm}_{\mathsf{P}}\) is a projective cover.
We already know that the objects of \(\mathbf{PAsm}_{\mathsf{P}}\) are projective (by Corollary 5.33), so we only have to show that every object \((B,\beta)\) of \(\mathbf{Reg}_{\mathsf{P}}\) is covered by a regular projective of \(\mathbf{PAsm}_{\mathsf{P}}\). To show this we use the fact that in a uniformly supercoherent algebra every element can be written as \(\exists_{f}(\varphi)\), with \(\varphi\) a \((\mathbf{U}\text{-}\mathbf{SK}_{\mathsf{P}})\) predicate. In detail, given an object \((B,\beta)\) of \(\mathbf{Reg}_{\mathsf{P}}\), there exist an element \(\varphi\in\mathsf{P}(A)\) and a morphism \(f\colon A\to B\) of \(\mathsf{Set}\) such that \(\beta=\exists_{f}(\varphi)\). From this, we have that \(\varphi\leq\mathsf{P}_{f}(\beta)\), i.e. that \(f\colon(A,\varphi)\to(B,\beta)\) is a morphism in \(\Gamma[\mathsf{P}]\). Therefore, we can define a morphism of \(\mathbf{Reg}_{\mathsf{P}}\), \(\phi\colon(A,\varphi)\to(B,\beta)\), by \(\phi:=\exists_{\langle\mathrm{id}_{A},f\rangle}(\varphi)\). By Lemma 5.19, we have that \(\phi\) is a regular epi in \(\mathbf{Reg}_{\mathsf{P}}\) since \(\exists_{\pi_{2}}(\phi)=\exists_{f}(\varphi)=\beta\). This concludes the proof that \(\mathbf{PAsm}_{\mathsf{P}}\) is a lex (full) subcategory of \(\mathbf{Reg}_{\mathsf{P}}\) and that it provides a projective cover of \(\mathbf{Reg}_{\mathsf{P}}\). Now we show the other direction. The fact that \((\mathbf{U}\text{-}\mathbf{SK})\) elements (or, equivalently, \((\mathbf{U}\text{-}\mathbf{SK}_{\mathsf{P}})\) predicates) are closed under finite meets follows by Lemma 5.6. Finally, to show that every element of the implicative algebra can be written as \(\exists_{f}(\varphi)\) with \(\varphi\) a \((\mathbf{U}\text{-}\mathbf{SK}_{\mathsf{P}})\) predicate, we use the fact that \(\mathbf{PAsm}_{\mathsf{P}}\) provides a projective cover of \(\mathbf{Reg}_{\mathsf{P}}\). In particular, we have that for every element \(\beta\) of \(\mathsf{P}(B)\), the object \((B,\beta)\) of \(\mathbf{Reg}_{\mathsf{P}}\) is covered by a regular epi \(\phi\colon(A,\varphi)\to(B,\beta)\) where \((A,\varphi)\) is an object of \(\mathbf{PAsm}_{\mathsf{P}}\). Since every object of \(\mathbf{PAsm}_{\mathsf{P}}\) is strongly trackable, we have that \(\phi=\exists_{\langle\mathrm{id}_{A},f_{\phi}\rangle}(\varphi)\), and since \(\phi\) is a regular epi, i.e. \(\exists_{\pi_{2}}(\phi)=\beta\), we can conclude that \(\exists_{f_{\phi}}(\varphi)=\beta\). This concludes the proof that \(\mathbb{A}\) is uniformly supercoherent. Given the intrinsic characterization of the regular completion of a lex category presented in [4], we have the following corollary: **Corollary 5.36**.: _Let \(\mathsf{P}\colon\mathsf{Set}^{\mathrm{op}}\longrightarrow\mathsf{Hey}\) be an implicative tripos, for a given implicative algebra \(\mathbb{A}\). Then \(\mathbb{A}\) is uniformly supercoherent if and only if \(\mathbf{PAsm}_{\mathsf{P}}\) is a lex subcategory of \(\mathbf{Reg}_{\mathsf{P}}\) and \((\mathbf{PAsm}_{\mathsf{P}})_{\mathsf{reg}/\mathsf{lex}}\cong\mathbf{Reg}_{\mathsf{P}}\)._ **Example 5.37**.: Relevant examples satisfying the hypotheses of Corollary 5.36 are implicative algebras associated with a PCA, and implicative algebras associated with a supercoherent locale; see Example 3.47. As a second corollary of Theorem 5.35, we obtain a different proof of the characterization of the regular completion of a tripos presented in [16, Thm.
4.14]: **Corollary 5.38**.: _Let \(\mathsf{P}\colon\mathsf{Set}^{\mathrm{op}}\longrightarrow\mathsf{Hey}\) be an implicative tripos, for a given uniformly supercoherent implicative algebra \(\mathbb{A}\). Then, if \(\mathbf{Asm}_{\mathsf{P}}\) is regular, we have that_ \[(\mathbf{PAsm}_{\mathsf{P}})_{\mathsf{reg}/\mathsf{lex}}\cong\mathbf{Asm}_{\mathsf{P}}\cong\mathbf{Reg}_{\mathsf{P}}.\] Notice that the proof of Theorem 5.35 can be reproduced to obtain the following result: **Theorem 5.39**.: _Let \(\mathsf{P}\colon\mathsf{Set}^{\mathrm{op}}\longrightarrow\mathsf{Hey}\) be an implicative tripos, for a given implicative algebra \(\mathbb{A}\). If \(\mathbf{Asm}_{\mathsf{P}}\) is regular, then we have that \(\mathbb{A}\) is uniformly functional supercoherent if and only if \(\mathbf{PAsm}_{\mathsf{P}}\) is a lex subcategory of \(\mathbf{Asm}_{\mathsf{P}}\) and \((\mathbf{PAsm}_{\mathsf{P}})_{\mathsf{reg}/\mathsf{lex}}\cong\mathbf{Asm}_{\mathsf{P}}\)._ **Example 5.40**.: Let \(\mathbb{B}\) be a complete atomic Boolean algebra, which we can think of as the powerset algebra of some set \(B\). In \(\mathbb{B}\), the supercompact elements are the atoms, and these are not closed under finite infima. This in particular implies that uniformly supercompact predicates are not closed under finite meets. One can easily see that \(\mathbf{PAsm}_{\mathbb{B}}\) is equivalent to the slice category \(\mathsf{Set}/B\).3 Although \(\mathsf{Set}/B\) is clearly a finitely complete category, the inclusion functor into \(\Gamma[\mathsf{P}_{\mathbb{B}}]\) does not preserve finite limits. E.g. the terminal object in \(\mathsf{Set}/B\) is the identity function from \(B\) to \(B\), which corresponds to the assembly \((B,x\mapsto\{x\})\), and this assembly is not terminal in \(\Gamma[\mathsf{P}_{\mathbb{B}}]\). Footnote 3: Here and in the following examples we use the subscript \(\mathbb{H}\) instead of \(\mathsf{P}\) when \(\mathsf{P}\) is the implicative tripos arising from the complete Heyting algebra \(\mathbb{H}\). **Example 5.41**.: Let \(\mathbb{H}\) be a complete Heyting algebra without supercompact elements. Then \(\mathbf{PAsm}_{\mathbb{H}}\) is a trivial category with just one object and the identity arrow. **Example 5.42**.: Consider the Sierpinski locale \(\mathbf{3}\), i.e. the frame of opens of the Sierpinski space, which is a supercoherent locale. It turns out that \(\mathbf{PAsm}_{\mathbf{3}}\) is equivalent to the Grothendieck category \(\Gamma[\mathbf{Pow}]\) of the powerset doctrine \(\mathbf{Pow}\) over \(\mathsf{Set}\), since \(1\) and \(2\) are supercompact in \(\mathbf{3}\) and \(1\leq 2\). Using Corollary 5.38, we get that \(\mathbf{Reg}_{\mathbf{3}}\simeq(\Gamma[\mathbf{Pow}])_{\mathsf{reg}/\mathsf{lex}}\). This result can be easily generalized by considering the locales \(\mathbf{n}\) (ordered in the usual way), which are always supercoherent. Since \(\mathbf{PAsm}_{\mathbf{n+1}}\simeq\Gamma[\mathsf{P}_{\mathbf{n}}]\), we get that \(\mathbf{Reg}_{\mathbf{n+1}}\simeq(\Gamma[\mathsf{P}_{\mathbf{n}}])_{\mathsf{reg}/\mathsf{lex}}\) for every \(n\in\mathbb{N}\). ### Relation with another notion of assemblies In a recent work [7], the authors propose a different notion of implicative assemblies, by generalizing the notion of realizability assemblies in a different direction.
Since, for realizability implicative algebras, \(\mathcal{P}(R)\setminus\{\emptyset\}\) is the separator, they define the category of assemblies as the category whose objects are pairs \((A,\alpha)\), with \(A\) a set and \(\alpha:A\to\Sigma\), and whose arrows from \((A,\alpha)\) to \((B,\beta)\) are the functions \(f:A\to B\) such that \(\bigwedge_{x\in A}(\alpha(x)\to\beta(f(x)))\in\Sigma\). The authors prove that this category is always a quasi-topos. In the localic case this category is equivalent to the category of sets, and localic triposes are characterized in [7] exactly as those for which such a category is an elementary topos. One disadvantage of this approach is that the category of assemblies is not in general a full subcategory of the implicative topos. Moreover, in general it does not contain the category of partitioned assemblies as we defined it, since \(\mathbf{U}\)-\(\mathbf{SK}\) predicates are not in general valued in \(\Sigma\) (consider e.g. the localic case). The relationship between this category of assemblies and the category of assemblies we propose is also, in general, not well-behaved. ### Categories of implicative modest sets Let us end this section by considering four further full subcategories \(\mathsf{Mod}_{\wedge}\), \(\mathsf{R}_{\wedge}\), \(\mathsf{Mod}_{\times}\) and \(\mathsf{R}_{\times}\) of \(\mathbf{Reg}_{\mathsf{P}}\), whose objects are the assemblies \((A,\alpha)\) for which the families corresponding to \(\alpha\) can be chosen to be \(\wedge\)-modest families, \(\wedge\)-core families, \(\times\)-modest families and \(\times\)-core families, respectively. Their inclusions in \(\mathbf{Reg}_{\mathsf{P}}\) factorize through \(\mathbf{Asm}_{\mathsf{P}}\), thus their arrows are always uniquely tracked by functions \(f\) between the underlying sets of their domains and codomains. Obviously one has a square of embeddings relating these four categories. Moreover one can easily prove that: **Proposition 5.43**.: \(\mathsf{Mod}_{\times}\) _and \(\mathsf{R}_{\times}\) are preorders._ Proof.: Let \((A,\alpha)\) and \((B,\beta)\) be two objects of \(\mathsf{Mod}_{\times}\) and let \(f,g:A\to B\) be two functions such that \[\bigwedge_{x\in A}(\alpha(x)\to\beta(f(x)))\in\Sigma\] \[\bigwedge_{x\in A}(\alpha(x)\to\beta(g(x)))\in\Sigma\] From this, it follows that \[\bigwedge_{x\in A}(\alpha(x)\to\beta(f(x))\times\beta(g(x)))\in\Sigma\] and since \[\bigwedge_{y,y^{\prime}\in B}(\beta(y)\times\beta(y^{\prime})\to\delta_{B}(y,y^{\prime}))\in\Sigma\] we can conclude that \(f=g\). Thus \(\mathsf{Mod}_{\times}\) is a preorder. Since \(\mathsf{R}_{\times}\) is one of its full subcategories, it is a preorder too. **Example 5.44**.: If \((\Omega,\tau)\) is a topological space and we consider the locale \((\tau,\subseteq)\), then the category \(\mathsf{Mod}_{\wedge}=\mathsf{Mod}_{\times}\) is equivalent to the preorder \((\tau,\subseteq)\) itself. Indeed, every open set in \(\tau\) is a disjoint union of non-empty connected open sets. Moreover, \(\mathsf{R}_{\wedge}=\mathsf{R}_{\times}\) is equivalent to the full sub-poset of \((\tau,\subseteq)\) whose objects are the open sets which are disjoint unions of supercompact opens.
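As a concrete instance of Example 5.44 (a routine check, stated here only for illustration), one can take \(\Omega\) to be any set with the discrete topology: the non-empty connected opens are exactly the singletons, which are also exactly the supercompact opens, so every open set is a disjoint union of supercompact opens and \[\mathsf{Mod}_{\wedge}=\mathsf{Mod}_{\times}\simeq(\mathcal{P}(\Omega),\subseteq)\simeq\mathsf{R}_{\wedge}=\mathsf{R}_{\times}.\] This is consistent with Example 5.46 below, since the frame of opens of a discrete space is the complete atomic Boolean algebra \(\mathcal{P}(\Omega)\), in which every element is a union of atoms.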
**Example 5.45**.: In the realizability case, \(\mathsf{Mod}_{\wedge}\) is a category equivalent to that of modest sets or PERs (see [25]), while \(\mathsf{R}_{\wedge}\) is equivalent to the category whose objects are subsets of realizers and whose arrows are the functions between them that are restrictions of partial functions computable with respect to the combinatory algebra (this category is called \(\mathsf{R}\) in [24]). \(\mathsf{Mod}_{\times}\) and \(\mathsf{R}_{\times}\) coincide and are equivalent to the partial order \(\mathbf{2}\). **Example 5.46**.: If \(\mathbb{B}\) is a complete Boolean algebra, then \(\mathsf{Mod}_{\wedge}=\mathsf{Mod}_{\times}=\mathsf{R}_{\wedge}=\mathsf{R}_{\times}\) is equivalent to the full sub-preorder of \((\mathbb{B},\leq)\) whose objects are the elements which can be written as joins of atoms. ### Acknowledgements The authors would like to thank Jonas Frey for fruitful conversations on the topic of the paper and for useful comments on a preliminary version of the present work. The authors are also grateful to Francesco Ciraulo and Milly Maietti for several discussions regarding various aspects of the paper.